Resume uploads

Jun 3, 2015 at 1:09 PM
I'm wondering if Casablanca allows for resumable uploads? I'm specifically talking about uploading by creating a request and setting its body to a stream (because I'm uploading large files).

The documentation doesn't actually say anything about set_body or creating a stream. I'm not sure whether it's possible to seek in a (C++ REST) stream, or what effect that has.

Finally, I'm doing this in response to uploads that time out. I'm not sure whether Casablanca actually handles timeouts internally, but I'm guessing there's some kind of support for that, seeing as HTTP is built on TCP. Nevertheless, I'm getting timeouts and I want to fix them, if possible.
Jun 3, 2015 at 4:53 PM
Hi Essentia,

What exactly would a resumable HTTP request mean here? It seems that whether this is possible would depend on how the server is implemented. In our http_client, the bodies of requests and responses are treated entirely as streams of bytes underneath. For example, if you look at the http_request::set_body(...) overloads, some of them take an input stream. You can also pass an input stream to http_client::request(...). However, in all of these cases, if the request fails or times out for some reason, we will not re-attempt the request.

As for hitting a timeout, you can adjust the timeout setting when you construct the http_client; take a look at http_client_config::set_timeout(...).
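To tie the two points above together, here is a hedged sketch of a request whose body is a file stream, with a longer timeout than the default. The URL, file path, and HTTP method are placeholders, and this is an uncompiled sketch rather than a verified build against any particular SDK version:

```cpp
#include <cpprest/http_client.h>
#include <cpprest/filestream.h>
#include <chrono>

using namespace web::http;
using namespace web::http::client;

void upload(const utility::string_t& url, const utility::string_t& path) {
    // Raise the request timeout (placeholder value of 120 seconds).
    http_client_config config;
    config.set_timeout(std::chrono::seconds(120));
    http_client client(url, config);

    // Open the file as an async input stream and use the stream
    // overload of set_body, as described above.
    concurrency::streams::fstream::open_istream(path)
        .then([&client](concurrency::streams::istream body) {
            http_request request(methods::PUT); // method is a placeholder
            request.set_body(body);
            return client.request(request);
        })
        .then([](http_response response) {
            // Inspect response.status_code() here.
        })
        .wait();
}
```

The key design point from the reply: the library treats the body purely as a byte stream, so nothing here retries on failure.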

Jun 3, 2015 at 5:40 PM
Edited Jun 3, 2015 at 5:40 PM
By resumable, I mean that if I manage to upload N bytes and then the connection times out, I can later upload again but starting at the (N + 1)th byte, i.e. resume the upload at the place where it was aborted last time. Obviously there needs to be server support, but I'm looking at the client side here. As far as I know, the server does support it.

From looking at the code a bit, it seems that seeking in the stream before attaching it to a request actually works? Something like

auto File = CreateAsyncIStream(...);
File.seek(N + 1);
auto Request = http_request(..., File);
Client.request(Request, ...);
Jun 3, 2015 at 6:04 PM
So there isn't anything in the library that will automatically resume/resubmit the request, but yes, our streams APIs have seeking functionality. You could perhaps do this yourself by making a new request and seeking to the correct point in your stream. The one piece that is challenging is figuring out how much has already been uploaded. We have an API, http_request::set_progress_handler(...), that allows you to register a callback to receive information as chunks of data are read from the request stream and sent out. Any registered progress handler callback will be executed after the underlying stack (WinHTTP, Boost.Asio) has informed us that the chunk has been successfully written. Keep in mind that this doesn't mean the server-side application has successfully processed the data, so I'm not entirely sure whether it would work in all cases.

I'm wondering, if you have a really large amount of data, whether it might just make sense to preemptively break your one large request down into smaller ones and make those individually.

Jun 3, 2015 at 6:30 PM
Edited Jun 3, 2015 at 6:31 PM
Yes, I'm uploading files to YouTube, so the sizes tend to range from around 4-7 GB. I'm aware of set_progress_handler (I'm already using it), so I guess I should give it a try and see how it goes.

I'm not sure how I would break the requests down in this case, though. I don't think it's possible, since it's one large file I'm uploading. Or many, as it is.

It's not a huge problem, but it causes some annoyances when files do not upload correctly, as I have to go in and correct things manually. The point of my application is to automate things.
Jun 3, 2015 at 6:34 PM
Since this is YouTube, I recommend you take a look at their documentation here:

It describes two different options very similar to what we've discussed here.

Jun 3, 2015 at 6:41 PM
Great! I can take care of that.

I just needed to know whether seeking would cause the API to simply start sending bytes from the position I had seeked to, and from the response it seems that this is the case.

It's already similar to what I've been doing, so incorporating this shouldn't take that much effort, I think.
Jun 3, 2015 at 6:43 PM
The request will start reading from wherever the 'read' or 'input' head is located on the stream. This allows, for example, reusing the same stream across multiple requests.

Jun 4, 2015 at 5:56 AM
I managed to implement resumable uploads with little trouble.