@~GeeS~
Please show me a browser (without plugins) that can (recursively) traverse the page structure as it is created by HFS, and download the linked files in a way that reflects (a part of) the (virtual) file system indexed by HFS. I use Mozilla and IE and I have not yet found that feature. I am sure that there are site leechers that can achieve this, but what would be the point?
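Just to illustrate what such a traversal would involve outside the browser, here is a rough Python sketch. It is purely hypothetical and assumes HFS renders each folder as a plain HTML page of `<a href>` links where sub-folder links end in `/`; real HFS templates may differ.

```python
# Hypothetical sketch: recursively mirror an HFS-style HTML folder index.
# Assumes sub-folder links end in "/" and file links do not -- this is an
# assumption about the template, not a documented HFS guarantee.
from html.parser import HTMLParser
from urllib.parse import urljoin
import os
import urllib.request

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, skipping query/anchor/parent links."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and not value.startswith(("?", "#", "..")):
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

def mirror(url, dest):
    """Download everything under `url` into `dest`, keeping the folder tree."""
    os.makedirs(dest, exist_ok=True)
    page = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    for href in extract_links(page):
        target = urljoin(url, href)
        if not target.startswith(url):      # stay inside the shared tree
            continue
        if href.endswith("/"):              # sub-folder: recurse into it
            mirror(target, os.path.join(dest, href.rstrip("/")))
        else:                               # file: save it locally
            with open(os.path.join(dest, os.path.basename(href)), "wb") as f:
                f.write(urllib.request.urlopen(target).read())
```

Which is exactly the kind of thing a leecher tool does, and exactly what a plain browser cannot.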
In my opinion, the main purpose of HFS is to make files sharable in the easiest possible manner, and indeed, it works nearly perfectly for this. If I want to download something, this is possible from behind any computer, as long as it has a browser (even lynx would do). I won't be lured into the discussion of what a "decent browser" is ;-) , but chances are, the browser available is IE. Of course, you could first download Firefox/Netscape/Opera/Teleport etc., but in that case you may just as well run an FTP/SSHD server and download an SFTP/SCP client!
However, I do agree with you that this feature would add (significant?) server load, and it would be great if the HFS administrator were able to turn the feature on and off. For me, the number of downloads is fairly low. I use HFS to share parts of my disk with a few not-so-tech-savvy persons, and for me this feature would be great.
@Druidor
This is very close to my JavaScript suggestion: in order for your idea to work, a script would have to pass all the files to your browser. However, it would be up to the user to individually save each file and put it in the correct directory. For multiple levels of directories and perhaps hundreds or thousands of files, this would become tedious. An archive would require only a single download and could contain the full directory structure.
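To make the archive point concrete: packing the shared folder server-side takes only a few lines, and the zip format preserves the tree, so the client restores everything with one extraction. This is just a sketch of the idea, not HFS code; the folder and output paths are made up for illustration.

```python
# Hypothetical sketch: pack a shared folder into one zip archive so the
# client needs a single download; the folder structure survives inside it.
import os
import zipfile

def build_archive(folder, out_path):
    """Zip `folder` recursively, storing paths relative to its root."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                full = os.path.join(root, name)
                # relative arcname keeps the directory layout inside the zip
                zf.write(full, os.path.relpath(full, folder))
```

The server-load concern from above applies here too: compressing a large tree on every request is exactly the cost an on/off switch would let the administrator control.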
Anyway, I will stop defending this idea here. Anyone can have his or her opinion about it, and it is up to rejetto to decide whether he thinks it is worth implementing (or anybody else, for that matter).