rejetto forum

httpauth challenge HTTP headers - bug


Offline delmote

  • Occasional poster
    • Posts: 1
The HTTP headers sent by HFS when issuing an httpauth challenge (HTTP 401 Unauthorized) do not contain a "Content-Length" header.  This causes unpredictable behavior, and sometimes data corruption, in different clients.
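
To illustrate the difference, here is roughly what the two kinds of 401 response look like on the wire.  These are hand-written sketches, not HFS's actual output; the realm, body, and header values are made up.

[code]
HTTP/1.1 401 Unauthorized               <-- current behavior: body length undeclared
WWW-Authenticate: Basic realm="HFS"
Content-Type: text/html

<html><body>Unauthorized</body></html>  <-- the client cannot tell where this ends

HTTP/1.1 401 Unauthorized               <-- proposed behavior
WWW-Authenticate: Basic realm="HFS"
Content-Type: text/html
Content-Length: 38

<html><body>Unauthorized</body></html>  <-- exactly 38 bytes, so the client knows
[/code]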

For example, in Firefox, when visiting a password-protected HFS URL, the browser immediately prompts the user for the username and password, which is the expected behavior.  But then Firefox does not send the user's username+password to HFS, because Firefox cannot decide if or when the end of the HTTP 401 response has arrived.  So Firefox waits until HFS times out and closes the TCP connection.  Only after the TCP connection is dropped by HFS does Firefox know the 401 response body is finished, at which point it reconnects and sends its second GET with the httpauth info.  The result of all this is that the user experiences a long delay (over a minute sometimes) after entering his username and password.
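
In other words, with no Content-Length (and no chunked encoding) on a keep-alive connection, the only end-of-body signal the client has left is the connection closing.  Here is a minimal Python sketch of that situation (the host, port, and path are hypothetical):

[code]
# With no Content-Length, no chunked encoding, and a keep-alive connection,
# the only way this client can find the end of the 401 body is to wait for
# EOF, which is exactly the long stall described above.
import socket

HOST, PORT = "192.168.0.10", 80   # hypothetical HFS address

s = socket.create_connection((HOST, PORT))
s.sendall(b"GET /protected/ HTTP/1.1\r\n"
          b"Host: 192.168.0.10\r\n"
          b"Connection: keep-alive\r\n"
          b"\r\n")

response = b""
while True:
    chunk = s.recv(4096)
    if not chunk:       # EOF: the server finally closed the connection...
        break           # ...and only now is the 401 body known to be complete
    response += chunk
s.close()
[/code]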

Another example: wget emulates browsers when making an httpauth request by first sending a "dummy" GET without httpauth, followed as soon as possible by a second GET with httpauth.  Unlike Firefox, however, wget does not wait forever after the first (401) server response if that response contains no Content-Length HTTP header.  Instead, wget tries to guess when the 401 response has been fully sent by watching for a pause in the I/O flow from the server.  (It does this as an "imperfect solution" for servers that send no Content-Length header in their 401 responses.)  Unfortunately, if a coincidental burst of internet lag occurs while the 401 reply is arriving, wget's guessing mechanism triggers too early, and wget sends its second (httpauth'ed) GET before the full 401 reply body has been received.  As a result, some or all of the 401 reply's body gets prepended to the output file by wget.  Let me make all this clear with screenshots (a rough sketch of the guessing heuristic follows them):
https://i.imgur.com/e9edjSD.png  <-- wget sending second GET too early due to an internet lag (packet sniffer)
https://i.imgur.com/iXvLg8b.png   <-- how wget looks at command prompt (note "no headers, assuming HTTP/0.9")
https://i.imgur.com/UNDLA4V.png  <-- what wget wrote to the output file (HTTP 401's response body + actual mp4 file contents)
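
Here is a rough Python sketch of that "pause in the I/O flow" heuristic.  It is an illustration of the failure mode, not wget's actual code, and the timeout value is invented:

[code]
import socket

def read_until_pause(sock: socket.socket, pause: float = 0.5) -> bytes:
    """Read a response that has no Content-Length, giving up after a quiet period."""
    sock.settimeout(pause)
    data = b""
    while True:
        try:
            chunk = sock.recv(4096)
        except socket.timeout:
            return data     # a lag spike lands here before the body is done
        if not chunk:
            return data     # genuine EOF
        data += chunk
[/code]

Any pause longer than the threshold (a lag spike included) is indistinguishable from the end of the response, which is exactly what the screenshots show.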

Still other clients misbehave in still other ways due to the missing Content-Length header, and so on.

To fix all these issues, HFS simply needs to send a Content-Length header in the HTTP headers of its 401 responses.
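
As a sketch of the idea (in Python rather than HFS's own Delphi code; the realm and body text are invented):

[code]
# Build the 401 body first, then declare its exact length in the headers.
def build_401(realm: str = "HFS") -> bytes:
    body = b"<html><body>401 Unauthorized</body></html>"
    headers = ("HTTP/1.1 401 Unauthorized\r\n"
               "WWW-Authenticate: Basic realm=\"%s\"\r\n"
               "Content-Type: text/html\r\n"
               "Content-Length: %d\r\n"   # the missing header: exact body size
               "\r\n" % (realm, len(body)))
    return headers.encode("ascii") + body
[/code]

With the length declared, the client knows immediately when the body is complete and can send its authenticated GET without waiting for a timeout.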

If for some reason HFS cannot predict the length of its 401 responses (and therefore cannot insert a Content-Length header), then there is another way to solve this bug:

1.  When HFS sends a 401 response over HTTP/1.1, it can send the response with "chunked" transfer encoding.
2.  When HFS sends a 401 response over HTTP/1.0 (where "chunked" transfer encoding is not available), it can instead insert a "Connection: close" header in the 401 reply's HTTP headers (regardless of what the client requested) and then immediately close the TCP connection at the end of the 401 response body.

If sending "chunked" were impossible too, then it would also be fine to simply apply #2 (the HTTP/1.0 approach) to HTTP/1.1 replies as well.
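
Here are illustrative Python sketches of both fallbacks (again, not HFS's actual code; the realm and header values are made up):

[code]
import socket

def send_401_chunked(sock: socket.socket, body: bytes) -> None:
    # HTTP/1.1: "chunked" transfer coding is self-delimiting
    sock.sendall(b"HTTP/1.1 401 Unauthorized\r\n"
                 b"WWW-Authenticate: Basic realm=\"HFS\"\r\n"
                 b"Transfer-Encoding: chunked\r\n"
                 b"\r\n")
    sock.sendall(b"%x\r\n" % len(body) + body + b"\r\n")  # the body as one chunk
    sock.sendall(b"0\r\n\r\n")                            # terminating zero-size chunk

def send_401_close(sock: socket.socket, body: bytes) -> None:
    # HTTP/1.0: chunked is not available, so announce and perform a close
    sock.sendall(b"HTTP/1.0 401 Unauthorized\r\n"
                 b"WWW-Authenticate: Basic realm=\"HFS\"\r\n"
                 b"Connection: close\r\n"
                 b"\r\n" + body)
    sock.close()   # the close itself marks the end of the body for the client
[/code]

Either way the client gets an unambiguous end-of-body signal on the very first response.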

Anyway, thank you so much for writing HFS.  It is amazing!


Offline rejetto

  • Administrator
  • Tireless poster
    • Posts: 13523
Hi delmote, thanks for your contribution.
I've now spent a few minutes investigating this, but I need some more time before I can say anything.
I'll keep you posted.


Offline rejetto

  • Administrator
  • Tireless poster
    • Posts: 13523
Thanks a lot for your detailed report.
Solving it was very easy, and I've opted for adding the Content-Length.
I will soon publish the fix officially, but in the meantime you can download a preview build at https://drive.google.com/open?id=1ei90QPS5pG9Nm3yKRtWppclhtJLnNY6n