rejetto forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - mastabog

1
Bug reports / browse css and js lost if root not browsable
« on: April 28, 2013, 03:18:38 AM »
I added a subdirectory under root and made it browsable. There are no per-user restrictions on either root or this dir. If the root dir / is not itself made browsable, then none of the styling (CSS) loads and none of the JavaScript functions work. HFS issues a "403 Forbidden":

Code:
GET /?mode=section&id=lib.js HTTP/1.1
Code:
GET /?mode=section&id=style.css HTTP/1.1
Code:
HTTP/1.1 403 Forbidden
Content-Type: text/html
Accept-Ranges: bytes
Server: HFS 2.3 beta
Content-Encoding: gzip

This is on beta #283.
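
To reproduce without a browser, here is a quick sketch (plain Python; the address is made up, point it at wherever your HFS is listening):

Code:
import urllib.request, urllib.error

# hypothetical address; use the host/port your HFS listens on
URL = "http://192.168.0.10/?mode=section&id=style.css"
try:
    urllib.request.urlopen(URL)
    print("200 OK - root is browsable")
except urllib.error.HTTPError as e:
    print(e.code, e.reason)  # prints "403 Forbidden" while root is not browsable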

2
Not sure this is the right place to post this, but I did bind root to a real folder and configured the root / to be browsable. However, going there with the browser shows an empty page with just the link "go to root", as if the directory were not browsable.

I have not managed to make root browsable when it's bound to a real folder. If I unbind it (and add some folders to it) then it is browsable.

3
HFS ~ HTTP File Server / testing 2.1
« on: August 02, 2006, 02:58:56 PM »
A small bug report:

When "Add to HFS" is used from within the operating system's context menu (right click) and multiple files are selected then all files are added correctly into HFS but only the last file's URL is copied into clipboard. The expected behaviour is for all urls to be copied to clipboard. Right now one has to open HFS, manually select all files in question and then either press ctrl-c or select "Copy URL" from within the context menu of HFS. Not a killer but annoying nonetheless.

Thanks

4
HFS ~ HTTP File Server / Fingerprints support
« on: July 05, 2006, 09:40:58 AM »
Quote from: "rejetto"
we can't have 2 commands, one that saves to disk and one that keeps the MD5 in memory: concerning the GUI it would be too intrusive.

Well, why would you want to implement two commands? :) You would either save to file or to memory, but not both. I would say save the MD5 hash to a file: it will still be there at subsequent launches of HFS and won't need to be computed again, unless the user instructs otherwise.

Quote from: "rejetto"
anyway, i don't think it is a good idea to have this feature that calculates MD5 at addition.

Please allow me to differ: I think it's a great idea to have the hashes computed at addition time, *provided* that there is a global option in the menu to limit the maximum file size for which the MD5 hash is automatically computed on addition. It could default to a very low value (e.g. 4 MB), and the user could adjust it to his liking or enable/disable it for all files regardless of size.
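
Roughly what I have in mind, as a sketch (Python just to show the logic; HFS itself is Delphi, and all the names here are made up):

Code:
import hashlib
import os

AUTO_HASH_LIMIT_MB = 4  # the low default suggested above; user-adjustable

def md5_on_add(path):
    """Hash a file at add time only if it is under the configured limit."""
    if os.path.getsize(path) > AUTO_HASH_LIMIT_MB * 1024 * 1024:
        return None  # too big: leave it to be hashed on demand later
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()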

Your call, as always, but I think it would make a great addition.

5
HFS ~ HTTP File Server / Fingerprints support
« on: July 04, 2006, 03:25:28 PM »
Offtopic

Ack! :( The board logged me out again. It was me in the post above. Have you set up the board so it expires cookies after a number of days? My cookie for this board seems to expire after some interval ...

6
HFS ~ HTTP File Server / Fingerprints support
« on: July 03, 2006, 08:50:48 PM »
Quote from: "maverick"
Quote from: "mastabog"
maybe you have a super machine and/or super HDD :) but for a 700 MB file on my P4 HT 2.4 GHz the MD5 hashing takes about 10 seconds. For slower machines it might take more.

A super machine, far from it. Actually those tests were done on a P3 900 MHz. I'll do more testing on a faster system when I get home. (btw that post of mine above was edited a few times with new results as I finished additional tests).

Regardless, it's up to rejetto to decide on what changes he would like to make, if anything.


I read your edited post and yeah, HFS could automatically compute the MD5 hashes for small files (e.g. less than 32 MB). Usually people check bigger files against MD5 hashes, but your idea makes sense nonetheless.

This could be another global option in HFS - automatically compute MD5 hashes for files smaller than <user editable value here> MB. That would be really neat! :)

7
HFS ~ HTTP File Server / Fingerprints support
« on: July 03, 2006, 03:37:57 PM »
Quote from: "maverick"
I just don't understand it.  You guys are telling us that it can take quite a bit of time to get the md5 fingerprint of a large file.

I just tested a 140 MB rar archive and I got the md5 fingerprint in much less than a second using md5sum.  I also tested with another md5 utility with the same results.  I can see no reason why HFS can't do it on-the-fly.

Am I missing something?


Well, as I said in my message, HFS could (and should) do it by itself, without reading external files generated by 3rd-party tools.

However, maybe you have a super machine and/or super HDD :) but for a 700 MB file on my P4 HT 2.4 GHz the MD5 hashing takes about 10 seconds. For slower machines it might take even longer.
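
If you want to check the numbers on your own machine, here is a quick sketch (plain Python, measuring only the raw read-and-hash cost):

Code:
import hashlib
import time

def timed_md5(path):
    """Return the hash and how long it took, reading in chunks to keep memory flat."""
    start = time.time()
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest(), time.time() - start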

Regardless of that, it's not a good idea to compute the MD5 hash on-the-fly for every file you add to HFS, as it may generate too much HDD and CPU activity. In my opinion, the best solution would be a global option (disabled by default) that computes the MD5 hashes of all files when they are added to HFS, plus an entry in each file's context menu that, when clicked, instructs HFS to compute the MD5 hash and copy the URL with it.

Cheers

8
HFS ~ HTTP File Server / Fingerprints support
« on: July 03, 2006, 11:17:14 AM »
Quote from: "rejetto"
Quote from: "mastabog"
Well, I can only say it again :). The MD5 hash should be computed only when the user clicks the "copy URL with md5 hash"

so, clicking on it would mean to
1. if loaded md5 is older than file or doesn't exist, create md5
2. copy url

this can take minutes. i should display a dialog warning the user for the long waiting.


Exactly, and I think I got you right :)

That way you don't need to rely on external tools or external files (an MD5 hash is only a 32-character hex string and can be stored in HFS' memory). And certainly, if the user is hashing a big file (e.g. 100 MB) then a small warning box should be displayed. For the time being you could use a simple message box saying something like "Hashing file, this can take a while ...". A progress bar would be nice of course, but not critical at all (after all, md5sum.exe doesn't say anything until it finishes :P)

You could also dump the MD5 hash into a file and reuse it later. You could also add an option to force re-hashing in case the file has changed while an older md5 file is still sitting there.
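
A sketch of that dump-and-reuse idea (the sidecar layout follows the usual md5sum convention; the force flag is the re-hash option mentioned above):

Code:
import hashlib
import os

def cached_md5(path, force=False):
    """Reuse <file>.md5 when it is newer than the file; otherwise re-hash."""
    sidecar = path + ".md5"
    if (not force and os.path.exists(sidecar)
            and os.path.getmtime(sidecar) >= os.path.getmtime(path)):
        with open(sidecar) as f:
            return f.read().split()[0]  # md5sum layout: "<hash>  <name>"
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    with open(sidecar, "w") as f:
        f.write(h.hexdigest() + "  " + os.path.basename(path) + "\n")
    return h.hexdigest()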

Thanks again

9
HFS ~ HTTP File Server / Fingerprints support
« on: July 03, 2006, 03:15:19 AM »
Quote from: "rejetto"
it takes too much time, it can't be done on-the-fly.
most files swapped with HFS are big (10-100-1000 MB), and any fingerprint requires reading the whole file first.
all i can do is to add a command to create md5 files.


Well, I can only say it again :). The MD5 hash should be computed only when the user clicks "copy URL with md5 hash", not whenever a file is added to HFS or its URL is copied. Hashing a file takes the same amount of time whether it is done by HFS or by an external tool - it makes no difference.

So, once more, it is on-demand only, not whenever a file is added to HFS. When I said "on-the-fly" I meant you need only one mouse click: the hash is computed only when the user selects "copy URL with md5 hash" from the context menu (you could even rename it to "compute md5 hash and copy URL" to make it clear). You could even remember the MD5 hash as long as the file's modification date and size have not changed ...

Using an external tool to hash the file seems like too much manual work for this feature to be useful or attractive; it kind of defeats its purpose.

10
HFS ~ HTTP File Server / testing 2.1
« on: July 02, 2006, 01:42:28 PM »
Quote from: "rejetto"
www.rejetto.com/temp/hfs2.1beta14.zip
what's new
+ Menu -> Virtual File System -> "Support Link Fingerprints"
+ File Menu -> "Copy URL with fingerprint"


I've posted some feedback on the new fingerprints support here: http://www.rejetto.com/forum/viewtopic.php?p=1017055#1017055

Thanks again for looking into it.

11
HFS ~ HTTP File Server / Fingerprints support
« on: July 02, 2006, 01:37:06 PM »
That was me in the message above.

I've tested the new beta with support for link fingerprints. Thanks again for implementing it.

However, I was talking about full support for link fingerprints, where HFS would compute the MD5/SHA1 hashes on the fly when instructed to copy the link with an MD5/SHA1 fingerprint. Maybe you are implementing this in a future beta version, I'm not sure.

Using a 3rd-party tool like md5sum to create md5 files and place them next to the shared files is OK, but it is a cumbersome job nonetheless. Most users would probably avoid it, as it involves a lot of manual work for each file.

It would be a whole lot better if "Copy URL with fingerprint" were always visible and, when clicked, HFS would read the file in question, compute the MD5 hash, and append it to the URL to form the full link with fingerprint. This way everything can be done in one simple click, without running 3rd-party tools or creating additional files. I am sorry if I was not clear about this in my initial request. Let me know if I was clear now.

MD5 is a public algorithm, so you will easily find functions or libraries for Delphi (if HFS is coded in Delphi).

Thanks

12
HFS ~ HTTP File Server / testing 2.1
« on: June 17, 2006, 10:20:04 AM »
Link fingerprints was my suggestion (I wrote a long and detailed post above in this thread, here)

File verification through MD5 or SHA1 checks is not linked to mirror downloading but to broken downloads. When downloading big files, or on a slow connection, disconnections can lead to broken downloads. md5/sha1/sfv/etc. files are a necessary means to verify that the download was OK.

A lot of websites provide md5 files in the same directory as the files to download (check any Linux distro, Apache, etc.). Link fingerprints automate this process without the need for additional files: the md5 hash is simply added as a link anchor to the original file's link. If you want to understand file verification better, please refer to http://microformats.org/wiki/hash-examples. Link fingerprints is a new idea and it is getting popular. You can read more about it in my post above.
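
To illustrate the traditional convention, this is what a file.zip.md5 published next to file.zip would contain (dummy hash, in md5sum's usual two-space layout):

Code:
0123456789abcdef0123456789abcdef  file.zip

The downloader has to fetch both files and run md5sum -c file.zip.md5 to check; link fingerprints fold that same hash into the download link itself, so the extra file and the manual step disappear.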

13
HFS ~ HTTP File Server / testing 2.1
« on: June 16, 2006, 11:44:04 AM »
Another suggestion well worth looking into, in my opinion: link fingerprints, to automate file verification.

They are starting to be supported by download managers (e.g. GetRight), browser extensions (e.g. mdhashtool for Firefox) and others. It would be nice if this were an option in HFS, so that URLs are copied with the hash anchors and are also available in directory file listings.

I was thinking of adding a command in the right-click menu, say "Copy URL with MD5 hash", or a checkbox option so that whenever the user double-clicks, the MD5 hash is computed and added to the link (both in the clipboard and in HFS's address bar). You might even design a "Copy URL options" submenu and add the option there.

Here is what I was thinking of:
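The syntaxes below are my best recollection of the formats in circulation (dummy hashes and a made-up host; the first is the GetRight-style form, the second the one from the link-fingerprints proposal):

Code:
http://myserver:8080/file.zip#!md5!0123456789abcdef0123456789abcdef

or

Code:
http://myserver:8080/file.zip#hash(md5:0123456789abcdef0123456789abcdef)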

Being a link anchor (#), it doesn't create problems for browsers or download managers that do not support link fingerprints. However, I don't think it's a good idea to have this enabled at all times, as large files will cause heavy disk activity and CPU usage. You could also have a global option for this, like the ones in the global "Menu > IP address" menu ... some users sharing smaller files might want it globally enabled (not my case, though).

Many of the friends I share large files with could benefit from this: they have slow connections, and right now I have to use external tools to create separate md5 files, have my friends download those as well, and have them verify the files locally with yet more 3rd-party tools. With link fingerprints and the Firefox extension or GetRight, for instance, they could do it all in one click, and when the download finishes they would be prompted if the hash verification failed ... no more third-party tools and no separate files to be hosted and downloaded by me or my users.

I think this would make a great addition, and HFS would probably be one of the first web servers (if not the first ever) to support link fingerprints natively ;)

Anyway, thanks for reading :)

14
HFS ~ HTTP File Server / testing 2.1
« on: June 07, 2006, 01:23:28 PM »
A very small bug-like/unexpected-behaviour report.

In the HFS window, when multiple entries are selected (say, with Ctrl+click or Shift+click), single-clicking one of the selected entries does nothing: they all remain selected. Normal Windows behaviour would be to deselect all the selected entries and select only the clicked one.

In HFS, in such a situation you have to either click an unselected entry or click somewhere outside the entry area in order to select just one of the multiple selected entries. It's nothing major, but mildly annoying :)

15
HFS ~ HTTP File Server / testing 2.1
« on: June 06, 2006, 05:18:24 PM »
I have another small suggestion that is very easy to implement.

When using a separate tray icon for each connection, it would be nice if the per-connection tray icons responded to double-clicks. Right now, double-clicking a per-connection tray icon does nothing. It would be nice if it brought up the HFS main window (perhaps highlighting the clicked connection in HFS's connection info area).

Not an important change but a welcome one nonetheless :)
