rejetto forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - psyxakias

Pages: 1
1
Beta / Re: Testing build #160
« on: December 04, 2007, 06:14:58 PM »
Did it for me too, first downloaded build 150, then 160.
Same here. The updater said the latest was #160 but offered me #150; I accepted and it downloaded #150. Then I ran the updater again: it still claimed #160 was the latest and still offered #150, but this time it actually downloaded #160.

Thanks :)

2
Bug reports / Re: 2 Possible BUGs (URL encoding + Recursive file listing)
« on: November 23, 2007, 10:38:40 PM »
Thanks for the quick response, rejetto.

I don't know if this should actually be a feature request. The reason I wanted unencoded passwords in the recursive listing is that I have been trying to make a list of the mp3s hosted on an HFS web server (with authentication), put them into a .m3u, and play them in Winamp; but Winamp doesn't play the encoded ones, which is why I reported it as a bug. Is there an easier way to do that, or should I check some HFS plugin (I think I had seen something for mp3s) for such usage?
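In case it's useful, this is the workaround I have in mind: fetch the recursive listing, percent-decode each line, and write the result into a .m3u. A Python sketch; the URLs and the %6D password below are made-up examples, not real ones.

```python
from urllib.parse import unquote

# Made-up example lines, the way the recursive listing emits them when
# passwords come out percent-encoded (%6D is just the letter "m").
listing = [
    "http://testuser:%6D%6D%6D%6D@1.2.3.4:2000/Dir1/file1.mp3",
    "http://testuser:%6D%6D%6D%6D@1.2.3.4:2000/Dir1/file2.mp3",
]

with open("playlist.m3u", "w") as m3u:
    m3u.write("#EXTM3U\n")
    for url in listing:
        # Decode the %XX escapes so the player sees a plain password.
        m3u.write(unquote(url) + "\n")
```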

3
Bug reports / 2 Possible BUGs (URL encoding + Recursive file listing)
« on: November 22, 2007, 06:00:41 PM »
Hello,

I have noticed the following issues, both of which occur with recursive file listing:

1) If I enable "Include passwords in pages", the links are indeed shown normally (http://user:pass@rest-of-URL) when clicking the URLs inside a folder. BUT when I click "File List" from inside a folder to recursively list all URLs, it generates a nonsense URL which of course doesn't work: http://path/http://user:pass@rest-of-URL (yeah, that's not a typo on my part: two http:// in a single URL)

Reproduction:
- Enable "Include password in pages" (Menu > URL encoding > [CHECKED] Include password in pages)
- Go to http://1.2.3.4:2000/Dir1/Dir2/Dir3/ (after authenticating with testuser/testpass)
- Click "File List", which redirects me to: http://1.2.3.4:2000/Dir1/Dir2/Dir3/?tpl=list&recursive
- The generated URLs look like this (yes, with two http:// !):
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:<encodedpass>@1.2.3.4:2000/Dir1/Dir2/Dir3/file1.jpg
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:<encodedpass>@1.2.3.4:2000/Dir1/Dir2/Dir3/file2.jpg
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:<encodedpass>@1.2.3.4:2000/Dir1/Dir2/Dir3/file3.jpg
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:<encodedpass>@1.2.3.4:2000/Dir1/Dir2/Dir3/file4.jpg
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:<encodedpass>@1.2.3.4:2000/Dir1/Dir2/Dir3/file5.jpg
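For what it's worth, the shape of the bug suggests the recursive "File List" template blindly prefixes the folder path onto every generated link, even when "Include password in pages" has already made the link absolute. HFS itself is written in Delphi; this is just my guess at the logic, sketched in Python:

```python
from urllib.parse import urlsplit

def join_listing_url(base: str, link: str) -> str:
    # If the link already carries a scheme (it was made absolute in order
    # to embed user:pass@host), pass it through untouched; prefixing it
    # with the folder path is what produces the double-http:// URLs.
    if urlsplit(link).scheme:
        return link
    return base.rstrip("/") + "/" + link.lstrip("/")

base = "http://1.2.3.4:2000/Dir1/Dir2/Dir3/"
absolute = "http://testuser:pass@1.2.3.4:2000/Dir1/Dir2/Dir3/file1.jpg"

print(join_listing_url(base, absolute))     # absolute link stays as-is
print(join_listing_url(base, "file2.jpg"))  # relative link gets the folder path
```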

2) If I disable "Unreadable passwords in URLs", the links indeed show readable passwords when I check the URLs inside a folder. BUT when I click "File List" to recursively list all URLs, the passwords are unreadable, with encoded (%XX) characters.

Reproduction:
- Enable "Include password in pages" (Menu > URL encoding > [CHECKED] Include password in pages)
- Disable "Unreadable passwords in URLs" (Menu > URL encoding > [UNCHECKED] Unreadable passwords in URLs)
- Go to http://1.2.3.4:2000/Dir1/Dir2/Dir3/ (after authenticating with testuser/testpass)
- Click "File List", which redirects me to: http://1.2.3.4:2000/Dir1/Dir2/Dir3/?tpl=list&recursive
- The generated URLs have encoded passwords:
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:%6D%6D%6D%6D@1.2.3.4:2000/Dir1/Dir2/Dir3/file1.jpg (%6D is an example)
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:%6D%6D%6D%6D@1.2.3.4:2000/Dir1/Dir2/Dir3/file2.jpg (%6D is an example)
http://1.2.3.4:2000/Dir1/Dir2/Dir3/http://testuser:%6D%6D%6D%6D@1.2.3.4:2000/Dir1/Dir2/Dir3/file3.jpg (%6D is an example)

Thank you.

PS: I am using HFS 2.3 beta #144, and I have even tried restarting HFS after enabling/disabling these options; no change.

4
Beta / Re: Testing build #133
« on: November 01, 2007, 12:57:27 AM »
Heya rejetto,

Here are my results:

[Without experimental]
1 connection: 46 KB/sec
2 connections: 90 KB/sec
5 connections: 220 KB/sec
10 connections: 420 KB/sec

[With experimental]
1 connection: 45-400 KB/sec (quite unstable, average: 153 KB/sec)
2 connections: 88-520 KB/sec (quite unstable, average: 287 KB/sec)
5 connections: 200-850 KB/sec (quite unstable, average: 516 KB/sec)
10 connections: 350-950 KB/sec (quite unstable, average: 680 KB/sec)

------------------

[-b 1024]
1 connection (without experimental): 45-46 KB/sec
1 connection (with experimental): 50-400 KB/sec at first, then goes down to 4-5 KB/sec

[-b 2048]
1 connection (without experimental): 45-46 KB/sec
1 connection (with experimental): 45-920 KB/sec at first, then goes down to 20-25 KB/sec

[-b 65536]
1 connection (without experimental): 45-47 KB/sec at first, then 170-260 KB/sec (unstable #1)
1 connection (with experimental): 50-1000 KB/sec (unstable #2)

[-b 131072]
1 connection (without experimental): 45-47 KB/sec at first, then 10-300 KB/sec (unstable #1)
1 connection (with experimental): 60-300 KB/sec (unstable #2)

[-b 1048576]
1 connection (without experimental): 45-46 KB/sec at first, then 170-380 KB/sec (unstable #1)
1 connection (with experimental): 40-150 KB/sec (unstable #2)

------------------

[-B 1024]
1 connection (without experimental): 8-9 KB/sec
1 connection (with experimental): 8-9 KB/sec

[-B 2048]
1 connection (without experimental): 15-17 KB/sec
1 connection (with experimental): 15-17 KB/sec

[-B 65536]
1 connection (without experimental): 50-330 KB/sec (unstable #2)
1 connection (with experimental): 50-350 KB/sec (unstable #2)

[-B 131072]
1 connection (without experimental): 60-700 KB/sec (unstable #2)
1 connection (with experimental): 60-690 KB/sec (unstable #2)
2 connections (without experimental): 100-1050 KB/sec (unstable #2)
5 connections (without experimental): 92-1810 KB/sec (unstable #2 -- it max'ed my ADSL's downstream at times)

Unstable #1: The speed is quite unstable, going up and down all the time
Unstable #2: The download starts at a slow speed, gains an additional 20-25 KB/sec every second, then suddenly drops; then the whole cycle repeats from a slow speed again

Overall, whenever I use "experimental high speed handling" (with no command-line parameters) OR the -B parameter, Unstable #2 occurs; and when I use the -B parameter, the experimental option doesn't make any difference whether it's on or off. About the increase-then-drop issue: although I have no technical knowledge of how exactly it works, it feels like something fills a buffer, and once that buffer is full the speed drops.
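For reference, I don't actually know what -b/-B map to internally; but assuming -B corresponds to the OS socket send buffer (my assumption, not anything from the docs), it would be the same knob that ordinary servers tune via setsockopt. A minimal Python sketch of that idea:

```python
import socket

# Assumption: HFS's -B behaves like SO_SNDBUF, the kernel send buffer
# that bounds how much unacknowledged data can sit in flight per socket.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 131072)

# The OS may round the value (Linux doubles it, for instance), so read
# back the effective size instead of trusting the requested number.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)
sock.close()
```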

I have to admit I'm a bit confused by -b and -B, i.e. which settings are better and which are not. Especially with "Unstable #2", it may take up to 1-2 minutes to complete an increase/drop cycle, so taking a 60-second average won't help me choose.

If you need any further tests, or want to give me specific ones to run (like specific -b/-B values or something), feel free to ask.

Thanks.

PS: I've done all these tests with #137
PS2: I'm thinking of trying some combinations of -b and -B. I just realized how it works: I should set both -b and -B and not use experimental, so it won't auto-change their values.

5
Beta / Re: Testing build #133
« on: October 20, 2007, 11:33:16 AM »
Rejetto,

The "Experimental high speed handling" doesn't seem to work to me. Since I upgraded to newer build than #131, I feel the performance downgraded again.

Do you remember which ICS/socket buffer settings build #131 was using? Then I could test values and compare against them. Even if you don't, I will do some tests soon.

Special Thanks :))

6
(HFS 2.2 #120)
1 connection: 40.9 kB/sec

(HFS 2.2a #131)
1 connection: 280.9 kB/sec
3 connections: 771.6 kB/sec

DAMN you rock, so far the difference is HUGE!!!! I will test it more a little later... Just out of curiosity, what was it (if you want to share it with us), and are there any related options to tune the optimization even further?

SPECIAL THANKS rejetto, I'm even more impressed now.  :) :) :)

7
i may contact you there to submit possible test builds.
i would use the email you supplied with your forum account, but you can give me another one if you wish.
Feel free to e-mail me anytime at the address I signed up with; truly sorry for the delayed response too  ;D

i think i found the problem.
please test last build.
I will surely check it later today, thanks for your interest!  :)

8
i didn't have the time to read your report yet, but in the while i want to let you know that ATM i have no idea what exactly is this buffer that would affect performance.
since communication is layered, there are many buffers just for sending.
If you ever get the chance to build some debugging version that lets me test by setting each one of them to different values, I would really love to try it out (even if it's 100 different buffers ;D).

Alternatively, would it help to check the other daemon's source, since its "IO buffer" setting makes a big difference in my case (and for many other people in my area)? Or would that not help at all because of different coding, or is it not really appropriate to dig through someone else's source code (even though it's open-source as well) to improve HFS?  ???

Thank you.

9
Quote from: psyxakias
PC/CONNECTION INFORMATION
HOST connection: 100 Mbps up/down (shared into a LAN)
DOWNLOAD connection: 16 Mbps down / 512 Kbps up
What is the internet connection attached to your host computer's LAN? More importantly, what is the upload for that connection?
I'd also just like to say that you've provided lots of detailed info.
The host is connected to a leased-circuit link with at least 1 Gbps upstream that is shared between several PCs. The performance inside and outside the host's network is pretty good, but the performance between the download network and the host network is rather slow unless I optimize the host's daemons. Obviously there must be some bottleneck in the ISPs between them.

To be clear, I have no doubt that if the connection improves, HFS's performance will compete with shttpd's, but at the moment I'd be interested (maybe other people too) in having the option to optimize HFS so as to achieve maximum speeds on every connection.

Thanks both of you for your interest :)

10
Hello rejetto,

I would like to thank you for your response and your interest. I have done some tests hoping they may help. Feel free to ask me for any further information.

Regards,
psyxakias.



*** I will call "host" the connection/PC that HFS is installed on, and "download" the connection/PC that performs the download tests ***

1 & 2) The host machine is connected to a 100 Mbps LAN (shared connection) and the download machine to a 20 Mbps broadband line (currently ADSL, synchronized at 16 Mbps for more stability)

3) I receive 38.4 kB/sec with HFS over a single connection, and up to 350-400 kB/sec with 10 connections
4) I receive 236 kB/sec with shttpd (with modified buffer!) over 1 connection, and up to 1.41 MB/sec with 10 connections on that other daemon
5) I monitored CPU usage carefully in all tests: the host machine never went above 4-5% CPU (average 1-3%) and the download machine never above 16% (average 8-14%)

PC/CONNECTION INFORMATION
HOST connection: 100 Mbps up/down (shared into a LAN)
DOWNLOAD connection: 16 Mbps down / 512 Kbps up
HOST CPU usage (during transfer): 1-2%
DOWNLOAD CPU usage (during transfer): 5-8%
Latency between HOST & DOWNLOAD: minimum/200ms, maximum/205ms, average/202ms
Duration of each test: 30 seconds

Notes
1. All programs were closed on both machines, keeping them completely idle
2. The host's connection is shared with other PCs (so it wasn't idle), while the download connection was not used by other PCs
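A back-of-the-envelope note on the numbers above (my own arithmetic, not anything measured inside HFS): a single TCP stream can only keep one window of data in flight per round trip, so its throughput is capped at window/RTT. At the ~202 ms latency I measured, the 38.4 kB/sec I get with one connection implies a window of roughly 8 KB, which looks like a small fixed buffer somewhere:

```python
# Back-of-the-envelope: a single TCP stream carries at most one window
# of data per round trip, so throughput <= window_bytes / RTT.
rtt = 0.202                       # seconds, the measured average latency
observed = 38.4 * 1024            # bytes/sec with one HFS connection

implied_window = observed * rtt   # bytes in flight per round trip
print(round(implied_window))      # ~7943 bytes, suspiciously close to 8 KB

# Conversely, filling the 16 Mbps downstream over this RTT would need:
needed_window = (16_000_000 / 8) * rtt
print(round(needed_window))       # ~404000 bytes, i.e. a ~400 KB window
```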



[ TESTS with 128480 RWIN on DOWNLOAD machine ]

HOST to DOWNLOAD via HFS (format= connections: speed)
1 connection: 38.4 kB/sec
2 connections: 85.0 kB/sec
3 connections: 120.1 kB/sec
4 connections: 163.2 kB/sec
5 connections: 202.7 kB/sec
6 connections: 246.1 kB/sec
7 connections: 278.5 kB/sec
8 connections: 312.6 kB/sec
9 connections: 368.8 kB/sec
10 connections: 407.4 kB/sec
NOTES: In all tests, the speed was very stable (-/+ 5 kB/sec, no spikes)

HOST to DOWNLOAD via shttpd using 1 connection (format= buffer: speed)
16384 buffer: 42.0 kB/sec
32768 buffer: 139.4 kB/sec
65536 buffer: 303.0 kB/sec
131072 buffer: 171.3 kB/sec
262144 buffer: 172.9 kB/sec
Notes: Even if I increase the buffer any higher, the average stays at about 172 kB/sec with a single connection

HOST to DOWNLOAD using shttpd at 131072 buffer (format= connections: speed)
2 connections: 287.7 kB/sec
3 connections: 535.1 kB/sec
4 connections: 609.1 kB/sec
5 connections: 745.9 kB/sec
6 connections: 914.2 kB/sec
7 connections: 860.9 kB/sec
8 connections: 1.27 MB/sec
9 connections: 1.29 MB/sec
10 connections: 1.20 MB/sec -- Notes: somehow it slowed down here; it might have been related to my connection
Notes: there were some spikes, but all samples were taken only once the speed had stabilized, to avoid inaccuracy

HOST to DOWNLOAD using shttpd at 262144 buffer and multiple connections (format= connections: speed)
10 connections: 1.32 MB/sec



[ TESTS with 511104 RWIN on DOWNLOAD machine ]

HOST to DOWNLOAD using multiple daemons/settings
1 connection with shttpd (262144 buffer): 236.9 kB/sec -- Notes: stable speed (performance improved from 172 kB/sec before the RWIN adjustment)
1 connection with HFS: 42.5 kB/sec -- Notes: stable speed
-
10 connections with HFS: 353.5 kB/sec -- Notes: almost stable speed
10 connections with shttpd (131072 buffer): 1.20 MB/sec -- Notes: no spikes, stable speed
10 connections with shttpd (524288 buffer): 1.41 MB/sec -- Notes: stable speed, almost maxing out my connection
-
3 files x 10 connections with shttpd (524288 buffer): 1.67 MB/sec -- Notes: stable speed, maxing out my connection
3 files x 10 connections with HFS: 595.6 kB/sec -- Notes: unstable speed



My conclusions
There is no doubt that there is some connection bottleneck between the download and host connections (not HFS's fault at all): I've seen people physically closer to the host PC achieve almost maximum speeds, while everyone in my area gets exactly the same slow speeds as I do, unless we use shttpd with a modified buffer, which helps me and others get similar results.

Since there isn't much I can do to improve the connection (beyond complaining to the ISPs about problems on specific routes, which I already did and which did not really help), I was wondering if there could be a similar setting in HFS that would be "auto" by default (to behave as it does now) and "manual" to set my own buffer sizes, like I do with the other daemon. If you make a test version and want to see whether there's a difference, feel free to contact me; I'll be glad to test it.

Thanks again for your time looking into this.  :)

11
Hello,

I have been using HFS for personal use (after a friend's suggestion) for a while and I have been very satisfied. I really like the user-friendly interface, the features, and especially the low resource requirements and easy installation. Instead of spending a lot of time installing, configuring, and optimizing another daemon (like Apache or IIS), HFS is ideal for personal use when you want to share a few files with friends.

However, there is something that has been bothering me (and the friend who suggested HFS to me) for a while, and I'm surprised that HFS hasn't covered it yet. I have not found a way (if there is one, please call me dumb and show me) to configure the receive/send buffers inside the daemon in order to optimize file transfers, like most daemons allow. Several daemons (especially file-transfer related ones), such as FTP and web servers, have a setting (in bytes) to manually adjust the receive and send buffers of TCP connections, which REALLY helps people with high-latency links achieve maximum performance with just 1-2 TCP connections.

Using a 20 Mbps broadband connection, I installed HFS and another daemon in two different regions, far away from the downloader's connection. The download speeds were very low with just 1 TCP connection, but they maxed out with 10-20 TCP connections. Using the other daemon (which I found yesterday, named shttpd; a tiny one with almost no settings at all), the performance was exactly the same at default settings, but once I adjusted shttpd's I/O buffer from 16,384 to 435,600 bytes, the performance increase was unbelievable. I was able to achieve maximum speed with just 1 TCP connection, and I am absolutely sure the difference was made by that setting, as I ran long tests for hours, adjusting it again and again.

Finally, I'm aware that TCP connection performance is directly related to TCP's receive window (RWIN), and that you can easily analyze and optimize your RWIN using a registry editor and/or utilities like SpeedGuide's (http://www.speedguide.net:8080), but I would still love to see a buffer setting in HFS, as it would really help me and possibly others. Even after I optimize my network card's RWIN to 435600, HFS still does not give me maximum performance over a single TCP connection, as daemons with an internal setting do.
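As a rough sanity check on that 435,600-byte figure (my own back-of-the-envelope, nothing official): a window of W bytes over a round trip of T seconds sustains at most W/T bytes per second, and at the ~202 ms latency I measured, 435,600 bytes is just enough to keep a 16 Mbps line full:

```python
# Max single-stream rate = window / round-trip time.
window = 435_600             # bytes: the buffer/RWIN value that worked for me
rtt = 0.202                  # seconds, my measured average latency

max_rate_kb = window / rtt / 1024
print(round(max_rate_kb))    # ~2106 kB/sec, above the ~1953 kB/sec
                             # that a 16 Mbps downstream can deliver
```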

Please inform me if you're planning to implement such setting into a future beta version and I'd love to test and benchmark it for you.

Thank you for your awesome work!

Regards,
psyxakias.

PS: I forgot to mention that I have already checked the documentation and the to-do list and didn't find anything related to my issue. I'd also like to clarify that the small donation I made earlier has nothing to do with this feature request; I just felt like showing my appreciation for this software, even if my request never gets implemented.
