rejetto forum

What are the real limitations of HFS on simultaneous file downloads?


Offline vladimirov70

  • Occasional poster
  • *
    • Posts: 22
    • View Profile
    • Concept of Public Security
Hi! What are the real limitations of HFS on simultaneous file downloads? When 15 users simultaneously download one file over the Internet, my server loses its connection. I changed the router, then changed the provider. The situation has improved, but the problem is not completely solved. How much load can HFS handle? Thanks.
The end of times or the beginning of a new one? https://kob-alt.ru/kob-perevodi/


Offline danny

  • Tireless poster
  • ****
    • Posts: 273
    • View Profile
Quote from: vladimirov70
Hi! What are the real limitations of HFS on simultaneous file downloads? When 15 users simultaneously download one file over the Internet, my server loses its connection. I changed the router, then changed the provider. The situation has improved, but the problem is not completely solved. How much load can HFS handle? Thanks.
40+ simultaneous requests, which could come from 1 to 40+ users.
Try the simultaneous-downloads limit (set it at 2 if your internet is faster than 150 megabit), or NetLimiter, or set the ethernet card's properties to 100 megabit to reduce flooding. (HFS's built-in speed limit is worse; HFS's built-in connections limit is worse.)
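The ethernet-card trick above can be done from PowerShell rather than the adapter's properties dialog. A sketch, assuming an adapter named "Ethernet"; the exact "Speed & Duplex" display strings vary by NIC driver, so list the valid values first:

```shell
# Sketch: force the ethernet adapter down to 100 Mbps to cap flooding.
# First list the values your driver accepts for "Speed & Duplex":
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Speed & Duplex"
# Then pick the 100-megabit one (string must match your driver's wording):
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Speed & Duplex" `
    -DisplayValue "100 Mbps Full Duplex"
```

Run elevated; the link renegotiates immediately, so the connection blips once.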

See my WatchCat script to regain connectivity automatically.

See my Stripes-Oneshot template for lower RQ cost.

And this.

If your router supports iptables then:
iptables -I INPUT -d 192.168.1.200 -m connlimit --connlimit-above 40 -j REJECT
where 192.168.1.200 would be changed to your server's LAN IP,
and 40 would be changed much lower if the incoming rate is flooding at more than 150 megabits.
A weird exception: REJECT is used toward the WAN even though we should normally use DROP for strangers. That's because, for an http server, we do want the seamless retry that REJECT makes possible (not the black hole of DROP). If the INPUT chain won't do it, try the FORWARD chain.
An alternative for iptables-crippled ('diet' iptables) routers is to use hashlimit instead (dd-wrt, etc.).
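A hashlimit rule of roughly equivalent intent might look like this. This is a sketch: the rate and burst values are assumptions to tune, and it limits the rate of new connections per source IP rather than counting concurrent connections the way connlimit does:

```shell
# Sketch: rate-limit new inbound TCP connections to the server, per source IP,
# as a substitute when the connlimit match is not compiled into the router's
# iptables build. Tune --hashlimit-above and --hashlimit-burst to taste.
iptables -I INPUT -d 192.168.1.200 -p tcp --syn \
  -m hashlimit --hashlimit-name hfs --hashlimit-mode srcip \
  --hashlimit-above 10/second --hashlimit-burst 20 -j REJECT
```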

HFS limits menu (if fast internet)
simultaneous downloads limit = good
inbuilt speed limit = worse* (use netlimiter or ethernet port speed setting)
inbuilt connections limit = worse* (use iptables connlimit or hashlimit)

*On fast internet with a single-thread Apache/HFS/Nginx for Windows, by the time a flood has reached the http server it is already too late; the flood has to be throttled before that point. At 100 megabit or lower, a single-thread server can handle many simultaneous downloads. On gigabit, or when flooding, only 1 or 2.
« Last Edit: March 23, 2021, 06:00:12 AM by danny »


Offline vladimirov70

  • Occasional poster
  • *
    • Posts: 22
    • View Profile
    • Concept of Public Security
Thank you, friend! This is valuable information for me. I will apply it.


Offline danny

  • Tireless poster
  • ****
    • Posts: 273
    • View Profile
Tmeter free version http://www.tmeter.ru/en/ can do up to 4 filters, such as a speed limit for HFS (to reduce flooding).
I see bandwidth shaping aka speed control, but I don't see bruteforce protection.

------------------
also
NetBalancer https://netbalancer.com/download versions older than 9.3 allowed 3 free filters (including a speed limit); the April 2016 v9.2.7.839 version looks like it is meant for Windows 7.
Likewise, I see a speed limit but don't see bruteforce protection.

------------------
I wonder if there are other free ways to do bandwidth-shaping with Windows? 
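One built-in option is Windows' own QoS policies via PowerShell. A sketch, assuming HFS serves on port 8080 (change to your actual port) and a ~100 megabit cap:

```shell
# Sketch: throttle outbound traffic sourced from HFS's port (assumed 8080)
# to roughly 100 megabit using Windows' built-in QoS engine.
# Requires an elevated PowerShell; Windows 8 / Server 2012 or later.
New-NetQosPolicy -Name "HFS-throttle" -IPProtocolMatchCondition TCP `
    -IPSrcPortStartMatchCondition 8080 -IPSrcPortEndMatchCondition 8080 `
    -ThrottleRateActionBitsPerSecond 100MB
```

Remove it later with Remove-NetQosPolicy -Name "HFS-throttle". Like NetLimiter, this shapes bandwidth only; it is not bruteforce protection.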


P.S.  There is another:
a Linux virtual machine running HFS on Wine (so that iptables can run in front of HFS).