The right way to read JSON from a file and POST it

Dec 15, 2014 at 6:59 PM
Hi,

I'm using the following code to read JSON payloads from files and POST them to the server. This works well, but I was wondering if there's a better way.
pplx::task<void> HTTPPostAsync(web::http::client::http_client* httpclient, std::string jsonfile)
{
   std::shared_ptr<concurrency::streams::istream> fileStream = std::make_shared<concurrency::streams::istream>();
   return concurrency::streams::fstream::open_istream(utility::conversions::to_utf16string(jsonfile))
   .then([=](concurrency::streams::istream inFile)
      {
         *fileStream = inFile;
         http_request postrequest(methods::POST);
         utility::size64_t length = fileStream->streambuf().size();
         postrequest.set_body(*fileStream, length, U("application/json"));
         postrequest.set_request_uri(U("api/documents"));
         return httpclient->request(postrequest);
      })
   .then([=](http_response response)  // Handle response headers arriving.
      {
         std::cout << "Received response status code: " << response.status_code();
         //do some other handling here as well
      })
   .then([=]()
      {
         return fileStream->close();
      });
} 
Coordinator
Dec 16, 2014 at 1:30 AM
Hi damienhocking,

What you have looks correct. You also could use one of the http_client::request(...) helper overloads to collapse the four or five lines that build the request, if you want. Something like the following:
return httpclient->request(methods::POST, U("api/documents"), *fileStream, length, U("application/json"));
One other small note: there is no reason to hold the fileStream in a shared_ptr. Internally we use a pointer-to-implementation pattern, so the copy constructor/assignment operator doesn't have deep-copy semantics. You can just pass it around between your tasks without the shared_ptr, which saves a heap allocation. Unfortunately we don't have this documented well :(.

Steve
Dec 16, 2014 at 3:51 PM
Thanks, Steve. So I can just use concurrency::streams::istream directly? Excellent (we use that idiom in our own code too; it's way nicer to read).

We're packaging our requests up with data from a couple of different places as part of an experiment, hence the separate bits of construction instead of one blurt into the request.

Cheers,

Damien
Dec 16, 2014 at 4:00 PM
Just FYI for anyone else, you have to change the Lambda capture from value to reference as well:
pplx::task<void> HTTPPostAsync(web::http::client::http_client* httpclient, std::string jsonfile)
{
   concurrency::streams::istream fileStream;
   return concurrency::streams::fstream::open_istream(utility::conversions::to_utf16string(jsonfile))
   .then([&](concurrency::streams::istream inFile)
      {
         fileStream = inFile;
         http_request postrequest(methods::POST);
         utility::size64_t length = fileStream.streambuf().size();
         postrequest.set_body(fileStream, length, U("application/json"));
         postrequest.set_request_uri(U("api/documents"));
         return httpclient->request(postrequest);
      })
   .then([=](http_response response)  // Handle response headers arriving.
      {
         std::cout << "Received response status code: " << response.status_code();
         //do some other handling here as well
      })
   .then([=]()
      {
         return fileStream.close();
      });
} 
Dec 16, 2014 at 5:51 PM
Erm, this actually crashes now, after the fileStream.close() call. There's an assert on _M_pResetChain == 0 in event.cpp. In the debugger, the close() call is only hit once, and if I comment out fileStream.close(), it runs fine. Debugging into fileStream.close(), though, pplx::task<void> close() checks is_valid(), which returns false. I think fileStream goes out of scope in that task chain after it's assigned to inFile. So if you don't specifically call close() at the end of the chain it won't crash, but the stream never closes correctly either, because there's no destructor that has the cleanup code (unless I missed it).

If you call close() right before the httpclient->request call, the request doesn't work, because the input stream isn't open anymore: request() only reads the stream at the point of that call, not at set_body(). And you can't return the close() task instead of the httpclient->request one, because then you can't pass the http_response to the continuation.

So you do have to use the pointer approach at the top to get the lifetimes right through the task chain and then clean up after yourself. Yes?

BTW I'm still on the 2.2 release.

Damien
Coordinator
Dec 17, 2014 at 1:01 AM
Hi Damien,

You shouldn't change the fileStream in your lambda to be captured by reference; that is very dangerous. What is happening is that when your lambda executes, you are referring to the address of the fileStream that was on the stack, which has since been unwound. I'm guessing that is why you are hitting asserts, and it could equally likely cause crashes. You should capture the fileStream by value. I'm guessing what you were getting was a complaint from the compiler about calling a non-const function on the fileStream. The correct fix is to mark your lambda as mutable.

Steve
Marked as answer by damienhocking on 12/16/2014 at 9:20 PM
Dec 17, 2014 at 4:25 AM
Argh. That was exactly the compiler error I was getting. I should have looked further into const vs non-const, but I've never actually run into that before with lambdas. We live and learn. Thanks for the help. For anyone else, the code that is actually correct is:
pplx::task<void> HTTPPostAsync(web::http::client::http_client* httpclient, std::string jsonfile)
{
   concurrency::streams::istream fileStream;
   return concurrency::streams::fstream::open_istream(utility::conversions::to_utf16string(jsonfile))
   .then([=](concurrency::streams::istream inFile) mutable
      {
         fileStream = inFile;
         http_request postrequest(methods::POST);
         utility::size64_t length = fileStream.streambuf().size();
         postrequest.set_body(fileStream, length, U("application/json"));
         postrequest.set_request_uri(U("api/documents"));
         return httpclient->request(postrequest);
      })
   .then([=](http_response response)  // Handle response headers arriving.
      {
         std::cout << "Received response status code: " << response.status_code();
         //do some other handling here as well
      })
   .then([=]()
      {
         return fileStream.close();
      });
} 
Feb 19, 2015 at 2:36 PM
I am trying this exact code (copy & paste from damienhocking, Dec 17, 2014 at 6:25 AM) with cpprest 2.4.0.1 on Windows 8.1 and Visual Studio 2013, linking against the v120 files.

Only adding a http_client
http_client httpclient{ U("http://myserver.azurewebsites.net/") };
and a file containing json in utf8.

It only works if I send less than 1000 bytes (I did not check the exact number, but 950 bytes was OK and 1000 bytes was not).

If I send more than 1000 bytes, it always raises an http_exception: WinHttpSendRequest chunked: 87
The same exception occurs if I set the body in any other way (bytestream, std::vector<>, std::string).

Is there any way to send more data?
Coordinator
Feb 19, 2015 at 4:42 PM
Hi Holger_0000,

I've never heard of this happening before. Do you have a repro that works on a public server I can hit? Can you please share the full 'exact' code you are hitting the problem with? If you don't want to publicly share on the forums here, you can email me the code at stgates at Microsoft dot com.

Thanks,
Steve
Feb 20, 2015 at 8:22 AM
Edited Feb 20, 2015 at 9:25 AM
Hi Steve,

I was able to locate the problem.

It seems it does not work if I set the global locale to anything but "C":
auto a2 = std::locale("en-us");
std::locale::global(a2);
If I do this in my application, the POST request raises the exception.

QuickFix 1 (not really a fix):
If I change my "Digit grouping" (in Region settings) from "123.456.789" to "123456789", it works again.

QuickFix 2:
After setting the body of the request, I remove the Content-Length header and add it again with a correctly formatted value:
postrequest.set_body(fileStream, length, U("application/json"));
postrequest.headers().remove(U("Content-Length"));
postrequest.headers().add(U("Content-Length"), U("91335"));
Coordinator
Feb 23, 2015 at 4:36 PM
Hi Holger_0000,

Yes, I see the problem. A stringstream is being used to write the Content-Length header, and it inherits the global locale. I'm working on a fix and will update this thread again once it is checked into the development branch.

Steve
Coordinator
Feb 27, 2015 at 12:05 AM
Hi Holger_0000,

OK, I've fixed the problem in the 'development' branch here on CodePlex. The fix will be in our NuGet package at the next release, 2.5.0. If you get a chance, could you try building from source on the development branch and make sure you aren't hitting any other issues?

Thanks,
Steve