rejetto forum

Unlimited Speed Hang


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Setting HFS to its default unlimited speed results in either a disconnect or a hang. The server uses the wget command, and if I rate-limit wget there is no issue (wget ... --limit-rate=8000k ...). However, if I do not rate-limit either the wget command or HFS, the transfer quits without any indication of how or why. The point at which it stops is also not consistent.
When unlimited, the HFS graph reports speeds like 2873980 kbps and
the log reports speeds anywhere from 3 MB/s to 68 MB/s.

Can anyone suggest what the problem might be and/or how to debug the failure mechanism?

Windows 7 Pro 64bit, SP1
Intel Core i5-2400 3.10 GHz
Ethernet Server Adapter X520-2 #2
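
For reference, roughly what the two invocations look like (the host and path here are placeholders, not my real values):

# unlimited - hangs or disconnects at an unpredictable point
wget -r http://<hfs-host>/<shared-folder>/
# rate-limited - completes with no issue
wget -r --limit-rate=8000k http://<hfs-host>/<shared-folder>/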


Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
To diagnose a bit further, especially with wget... are you using Windows GNU wget, or wget on a Linux box?...

Either way, run Wireshark on the test again and watch the packets. They will tell you where and why it disconnected...

Wireshark would help diagnose that further...

http://filehippo.com/download_wireshark/32/
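
If the GUI struggles, you could also capture from the command line with a filter so the trace stays small; a rough sketch (the interface name and HFS address are placeholders you would replace with your own):

# capture only traffic to/from the HFS machine on port 80 and save it for later analysis
tshark -i "Local Area Connection" -f "host <hfs-ip> and tcp port 80" -w hfs_test.pcapng

tshark ships with Wireshark; -i picks the interface, -f is the capture filter, -w writes the packets to a file.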
« Last Edit: September 15, 2017, 07:47:11 PM by bmartino1 »
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
RHEL 6.5, 6.9, 7.3 versions of "wget".
I will give Wireshark a try to see if I can capture the failure.
Thanks


Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
I remember experimenting with that command, wget: http://manpages.ubuntu.com/manpages/wily/man1/wget.1.html

https://tinyurl.com/ya45fhq2

and to make it run with HFS, I had to add a null speed option, something I posted on the forum a long time ago.

It was an issue with wget, not HFS, and how it talked to the web site.

I think I had to force HTML with the option -F. There are also some wget options that can help diagnose, such as a log file...


----------------
Now I remember, it was the no-clobber option, as HFS keeps sending the file...

-nc
       --no-clobber
           If a file is downloaded more than once in the same directory,
           Wget's behavior depends on a few options, including -nc.  In
           certain cases, the local file will be clobbered, or overwritten,
           upon repeated download.  In other cases it will be preserved.

           When running Wget without -N, -nc, -r, or -p, downloading the same
           file in the same directory will result in the original copy of file
           being preserved and the second copy being named file.1.  If that
           file is downloaded yet again, the third copy will be named file.2,
           and so on.  (This is also the behavior with -nd, even if -r or -p
           are in effect.)  When -nc is specified, this behavior is
           suppressed, and Wget will refuse to download newer copies of file.
           Therefore, "no-clobber" is actually a misnomer in this
           mode---it's not clobbering that's prevented (as the numeric
           suffixes were already preventing clobbering), but rather the
           multiple version saving that's prevented.

           When running Wget with -r or -p, but without -N, -nd, or -nc, re-
           downloading a file will result in the new copy simply overwriting
           the old.  Adding -nc will prevent this behavior, instead causing
           the original version to be preserved and any newer copies on the
           server to be ignored.

           When running Wget with -N, with or without -r or -p, the decision
           as to whether or not to download a newer copy of a file depends on
           the local and remote timestamp and size of the file.  -nc may not
           be specified at the same time as -N.

           Note that when -nc is specified, files with the suffixes .html or
           .htm will be loaded from the local disk and parsed as if they had
           been retrieved from the Web.
« Last Edit: September 17, 2017, 03:07:14 AM by bmartino1 »
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Wireshark results:
Wireshark hangs with "not responding".
HFS hangs with no message; connections remain open.
The Linux server hangs in the middle of a file; issuing a CRLF results in a new blank line as the job has not completed.
The state appears to be indefinite, or at least as long as I was willing to wait.
If I switch off HFS, the server terminates the running job, but Wireshark remains stuck and had to be terminated with the Task Manager. There was some erratic and intermittent behavior where it appeared to be trying to display more packets, but I could not put up with it any longer.
wget options example:
-r -nH -l1 --no-verbose --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/

The issue is still open and unexplained.



Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
This appears to be where it all starts in the log file:
Read error (Connection reset by peer) in headers.

After which I get a long list of "connection refused" messages.
"-F" made no difference.

Still looking for answers.
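
For reference, the debug-logging form of the command would be roughly this (the log file name is arbitrary):

# -o sends all output to a log file, -d adds protocol-level debug detail to it
wget -d -o wget_debug.log -r -nH -l1 --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/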


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Getting closer but still no solution.
What I find is that the server sends a request, but HFS never answers. If I use F4 to stop and start HFS, the transfer continues. However, it is not long before I get another failure. I suspect that the default connection timeout is much longer than I ever wanted to wait, so I never realized what was happening. Now that I have had a chance to experiment, I can make it worse, but I cannot seem to make it much better.
I also suspect that, because the unlimited rate is very high, the Windows OS contributes to HFS not getting every single request.
So the question is:
How do I set the wget command parameters so that requests to HFS are resent in a way that does not get confused or locked up by connections that are never terminated and restarted, in a manner both wget and HFS understand?

I have followed a few examples I found using Google, but as I said, I can make it worse but I can't seem to make it better.
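
For reference, the kind of combination I have been experimenting with looks roughly like this (the specific values are just examples, not a known-good set):

# retry each file several times, wait between attempts, and give up on a stalled request after 60 s
wget --tries=20 --waitretry=5 --timeout=60 -r -nH -l1 --no-verbose --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/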



Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
Hmm, many things are going on here; usually this is where mars or others hop in.

Need a few more details for testing...

What I'd recommend next is this wget option:
--retry-connrefused
           Consider "connection refused" a transient error and try again.
           Normally Wget gives up on a URL when it is unable to connect to the
           site because failure to connect is taken as a sign that the server
           is not running at all and that retries would not help.  This option
           is for mirroring unreliable sites whose servers tend to disappear
           for short periods of time.
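
For example, added to a recursive pull it would look roughly like this (the URL is a placeholder):

# treat "connection refused" as transient and keep retrying
wget --retry-connrefused -r -l1 --no-parent http://<hfs-host>/<folder>/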
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
I need the full wget line that you're trying to download with.
Specifically the options and file name/type.

Are you using an edited template in HFS, or the default config?

Finally, on Linux, is your download path writable? Just a plain download, not running over a stream (as this would require more/other stuff)...

The error you started to post vs. the error you have now (including the Wireshark issue) leads me to believe this is a Windows PC problem.

I'm surprised that Wireshark is giving you issues...
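
If you want to retry the capture without the GUI choking, dumpcap (also installed with Wireshark) with a ring buffer might cope better with the data rate; a rough sketch (the interface name is a placeholder):

# keep at most 10 capture files of ~100 MB each so the capture never grows unbounded
dumpcap -i "Local Area Connection" -f "host 192.168.0.14 and tcp port 80" -b filesize:102400 -b files:10 -w hfs_capture.pcapng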
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Already tried "--retry-connrefused", did not work.
Already posted complete command "-r -nH -l1 --no-verbose --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/".
Wireshark was a waste of time because it too had trouble keeping up with the high data rate.
My best troubleshooting tool was to set verbose and watch the activity on the Linux side.
The closest I have been to a working setup was when I figured out that using "F4" to stop and then restart HFS allowed the transfer to resume and eventually finish (many "F4"s).
If I insert the rate limit option in the command line "--limit-rate=8000k", I have no issues at all except the entire transfer takes ~45 minutes.
« Last Edit: September 23, 2017, 11:44:31 AM by everettt »


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Adding:
"The error you started to post vs the error you have now"
The issue has not changed: using an unlimited rate results in a hang.
I have tried many things, including changes to "wget" and changes to the HFS configuration.
I have been able to make it worse, but I have not yet been able to make it any better.

Here is my current theory:
Linux sends a "wget" request.
Transfers begin, but at some point the PC/HFS is so busy it is not able to respond to a request.
The Linux "wget" timeout after a request is 900 seconds. This is the "hang".
Recently I found that if I reset HFS, Linux notices the dropped connection, reissues the request, and transfers begin again.

I realize that this is something that can happen. I also realize that this is something that can be managed like any other data transport method: "start - fail - retry - loop until done".
In this case I just think that the failure is statistically common instead of statistically rare. In a statistically rare case the defaults work because a failure should not occur. This is why the rate-limit case works flawlessly. In a statistically common case the defaults no longer work (waiting 900 seconds is completely unrealistic). If I knew about all of the controls and how they work, for both "wget" and HFS, I might be able to find a combination that works. This is what I am looking for: some assistance in finding a combination of controls that will retry immediately, on the assumption that HFS missed the request and will never respond, so the request needs to terminate and retry. I would be willing to share a remote session with anyone willing to provide this level of assistance.
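
If that theory is right, then lowering the read timeout well below the 900-second default should at least turn the indefinite hang into a quick fail-and-retry; a sketch of what I have in mind (30 seconds is an arbitrary value, and --tries=0 means retry indefinitely):

# abandon a stalled request after 30 s instead of 900 s, then retry it
wget --read-timeout=30 --tries=0 --waitretry=5 -r -nH -l1 --no-verbose --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/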

Some history: I ran into this issue more than a year ago using an older version of HFS. I made many similar attempts to get it working, and after every attempt failed I was OK with the rate-limit option because the total transfer size was small enough. This issue has been experienced in many locations using different PCs (Windows 7 Pro) running different versions of HFS and using different Linux servers running different versions of RHEL (6.5, 6.9, 7.2, 7.3).


Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
I will be looking into it; I'm currently away from a PC. From your wget line, it looks like you were targeting the whole folder, not just a specific file.
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Yes, targeting a directory with about 33GB of data. This is why I am interested in solving the unlimited speed issues. Ultimately there will be as many as 20 servers running in parallel that will all want to get files from HFS.


Offline bmartino1

  • Tireless poster
  • ****
    • Posts: 910
  • I'm only trying to help i mean no offense.
    • View Profile
    • My HFS Google Drive Shared Link
OK, I think mars / rejetto would be better at explaining why targeting a directory in HFS causes this issue. I would have you test with *.* at the end so it targets all the files, not the folder. Still going to test when I can. Apache / a Windows folder share (the special tech term used for mapping network drives over the internet) would be better...


HFS is a file server, not a web server. HFS implements its file protocols through Pascal HTTP code. I don't know if there is an easy fix.
Files I have snagged and share can be found on my google drive:

https://drive.google.com/drive/folders/1qb4INX2pzsjmMT06YEIQk9Nv5jMu33tC?usp=sharing


Offline everettt

  • Occasional poster
  • *
    • Posts: 12
    • View Profile
Using *.* at the end.
-r -nH -l1 --no-verbose --no-parent --directory-prefix=/abc --reject "index.html*" --accept rpm http://192.168.0.14/HP/deliverables/AddtlRpms/*.*

Connecting to 192.168.0.14:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-09-26 12:32:43 ERROR 404: Not Found.

Warning: wildcards not supported in HTTP.

I'll see if I can find the HTTP versus other mode options to see if there are any alternatives that might work.
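
Since HTTP has no wildcard globbing, it looks like the filtering has to stay on the recursive options, which the --accept rpm in my original command already does; an equivalent form using a pattern instead of a suffix would be roughly:

# -A/--accept with a pattern filters the recursive retrieval, since HTTP cannot expand *.*
wget -r -nH -l1 --no-parent -A "*.rpm" http://192.168.0.14/HP/deliverables/AddtlRpms/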