Heh, cool! I'd love to see people use HFS on a large scale. It's literally the only decent web server (on ANY platform) with real-time status information, which gives it a HUGE edge over shit like IIS or Apache. If you (or someone) can get PHP working in it - and I think I read somewhere that it'd already been done - then you've got yourself a full-featured, under-one-hood web server... and something seriously huge... on your hands.
(Bravo for that, by the way. Excellent freaking work. I'd've donated by now if it were actually making me any money, but I'm unemployed and dirt poor...)
As for logging, wow. I could've sworn I saw the list rotating off the top of the page when it reached a certain limit... maybe the log control itself was capping it (on screen only). I'd say to go for a similar number of lines as a command prompt window, which appears to be 300. 5000 is just way overkill, and would take far too much CPU power to maintain.
I would imagine you have more experience in this than I do, but if you're not... entirely... sure, here's a basic idea of how to implement it without memory leaks, through the system control or otherwise:
pointer (int) = current position in stack
stack (1) = line of log of a fixed size
stack (...) = lines of log of a fixed size
stack (300) = line of log of a fixed size
stack (301) = EOF
Write the newest log entry at the position in the log array indicated by pointer, then mark the next slot as EOF, rolling over as needed (i.e. slot pointer+1, or back to slot 1 if pointer was 301). On a timer, cycle through the log buffer (stack) to the screen until it reaches EOF; no human needs to read each log entry in realtime, and writing to the screen on every request takes a lot more processing than just "thinking" it and then redrawing the screen on a timer. This way, no additional memory ever needs to be allocated to the log, which logically eliminates any possible memory leaks relating to the log.
(And it can also be adjustable, by setting an option for log buffer size)
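If it helps, here's a minimal sketch of that fixed-size ring buffer in Python. The names (LogRing, write, snapshot) are just mine for illustration, and HFS itself is written in Delphi, so this is only the idea, not its actual code:

```python
class LogRing:
    """Fixed-size circular log buffer: all memory is allocated once, up front."""
    EOF = None  # sentinel marking the slot just past the newest entry

    def __init__(self, capacity=300):
        # one extra slot always holds the EOF marker (slots 1..301 in the post)
        self.buf = [self.EOF] * (capacity + 1)
        self.pos = 0  # the "pointer": where the next entry gets written

    def write(self, line):
        self.buf[self.pos] = line
        self.pos = (self.pos + 1) % len(self.buf)  # rollover past the last slot
        self.buf[self.pos] = self.EOF              # EOF the slot after the newest entry

    def snapshot(self):
        """Collect entries oldest-to-newest; this is what the timed redraw would paint."""
        out = []
        i = (self.pos + 1) % len(self.buf)  # oldest surviving entry sits just past EOF
        while i != self.pos:
            if self.buf[i] is not self.EOF:
                out.append(self.buf[i])
            i = (i + 1) % len(self.buf)
        return out
```

The on-screen control would then just call snapshot() on a timer instead of repainting on every request, which is exactly the "think now, draw later" split described above.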
As for persistent connections, I disabled that because people don't actually browse the site; they just connect to grab one file that was linked on some website they visited, and never need any additional files. I can see how this would come in handy if someone were browsing a website like this forum, which actively requests content and would otherwise have to open lots of the same connections, but my site is rather different: nobody ever browses the actual site, and most, if not all, of the connections are from websites with one of my files embedded. Also, I think Apache is already holding persistent connections, so doing double work is just... euh. The benefit is extraordinarily minimal. But I love that my WRT54GL with Tomato can handle it without too much of a problem; its QoS is excellent.
PO'd = Pissed Off
Oh, and yeah, I had been using HFS on my home server previously, but had no problems with it. It wasn't until I "hooked it up" to Hostfile that the floodgates opened and I ended up straining the heck out of HFS. BTW, if you want to use Remote Desktop or VNC to check out how HFS is performing on the server, you're more than welcome to PM me for the details!