My biggest problem is still that an overload of connections crashes the program (it freezes permanently with the 'not responding' message). By overload I mean 200+ connections. Even if I limit the server to 1 connection per IP, anyone with a download manager could easily fire 100+ simultaneous requests (which get rejected) and still crash it. What I would ask for is a little more resilience from the program; otherwise the only solution is a secondary program like FireDaemon to reboot HFS every 'x' hours.
Sadly, I don't expect HFS to be very robust under such loads. It was designed for personal use, not for heavy traffic, and I have mainly addressed usability problems rather than performance issues.
About your problem: my guess is that the connections you call "rejected" are actually being served a page reporting the overload in place of the requested file. Is that so?
If yes, we may work around the problem by simply discarding those connections instead of answering them.
The quickest method at hand is to put a {.disconnect.} command inside the [overload] section of the template.
It is not the most efficient fix, but it may already be enough. Please give me feedback on whether it helps.
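For reference, the suggested edit would look roughly like this (a minimal sketch against the default HFS template; your customized template's [overload] section may contain other markup, in which case {.disconnect.} just goes at the top of it):

```
[overload]
{.disconnect.}
```

With this in place, HFS drops the socket as soon as an over-limit connection hits the overload page, instead of building and sending an HTML response to it.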