rejetto forum

Show Posts


Messages - Falcon4

Pages: 1
Bug reports / custom content-disposition
« on: December 22, 2011, 10:41:57 AM »
Found a bit of a recent show-stopper bug in HFS (in my setup)...

I'd been running HFS with an "" script as follows:
Code: [Select]
{.add header|Content-Disposition: {.^disposition.}; filename="{.urlvar|fullname.}";.}

The preprocessing script (on my Apache/PHP front-end) attaches a query variable "fullname" that contains the original name of the file (that couldn't otherwise be part of the usual URL) - it does that via a database lookup as it updates the hit counter. So when you click "download" which has "?attach=1", the script may provide the following header:

Content-disposition: attachment; filename="My Long Filename.docx";

Unfortunately, HFS adds its own Content-disposition header with the existing filename. The two headers had co-existed peacefully just fine, but recently Firefox started "full-stop" blocking downloads/use of files that return two Content-disposition headers. Then, instead of Firefox dropping that silly behavior, Chrome actually picked up the SAME "blocking" behavior as well!

Now both Firefox and Chrome are broken on my site (thanks, guize). :/

I found the option "No content-disposition" under the Debug menu, and that worked for a while. But for no explainable reason, it kept switching itself back off. Now, no matter what I do, it's a "broken switch"... I can flip it however I like, but it just ignores me and sends duplicate headers...

This is with the checkbox ticked:
Code: [Select]
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\Falcon>cd Documents\tools

C:\Users\Falcon\Documents\tools>tinyget -srv:"" -port:13370 -uri:"/not_occupying_not_living/1366_F-16FightingFalconvol4.jpg?fullname=1366_F-16+Fighting+Falcon+vol4.jpg&attach=1" -h
HTTP/1.1 200 OK
Content-Type: image/jpeg
Content-Length: 210897
Accept-Ranges: bytes
Server: HFS 2.3 beta
Set-Cookie: HFS_SID=0.316301819169894; path=/
Content-Disposition: attachment; filename="1366_F-16 Fighting Falcon vol4.jpg";
Last-Modified: Tue, 29 Nov 2011 19:05:43 GMT
Content-Disposition: filename="1366_F-16FightingFalconvol4.jpg";

The one HFS sends is at the bottom.
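For what it's worth, the browsers' complaint is easy to check for mechanically. Here's a rough Python sketch (mine, purely illustrative, not anything HFS ships) that counts Content-Disposition headers in a raw response like the one above:

```python
# Minimal sketch: count Content-Disposition headers in a raw HTTP
# response; two of them is the condition Firefox/Chrome now reject.
# The sample response is abridged from the tinyget output above.
def content_disposition_count(raw_response: str) -> int:
    """Count Content-Disposition headers (names are case-insensitive)."""
    # Split off the header block, drop the status line.
    headers = raw_response.split("\r\n\r\n", 1)[0].split("\r\n")[1:]
    return sum(1 for h in headers
               if h.lower().startswith("content-disposition:"))

response = ("HTTP/1.1 200 OK\r\n"
            "Content-Type: image/jpeg\r\n"
            'Content-Disposition: attachment; filename="1366_F-16 Fighting Falcon vol4.jpg";\r\n'
            'Content-Disposition: filename="1366_F-16FightingFalconvol4.jpg";\r\n'
            "\r\n")
print(content_disposition_count(response))  # → 2
```

Anything returning 2 here is what trips the browsers up, regardless of which of the two headers came from the script and which from HFS.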

I guess this should be a relatively easy bug to fix - it's just not "sticking". If that can be fixed, HFS will still work great :D

Sorry for the delayed reply... after I got it working I kinda just let it fall off the radar.

Thanks, first and foremost, for your help with this! I certainly don't mean to act like "give give give, fix fix fix, this is broken, omg omg"... I just forget to emphasize the things that are working - that is, everything but what I've mentioned :) The wiki is helpful in letting me locate the macros I need to use, and explaining how to use them (even if they're not each written in fully proper syntax, I can figure it out most of the time).

As a result of that, plus debug logging - which I put to some use in realizing that oh-crap, my PHP training had me using "{.if param|param|param.}", instead of "{.if|param|param|param.}" - I now have a working "sorting-and-redirection" event!

Behold, my first "hello world" script! :D


Yeah, I realize that first example isn't functionally proper, but when I did a full file-redirect to the new URL, the clients (BitTorrent "web seeds") were botching the URL; I guess uTorrent has a broken redirect-handler. When they were redirected from, to (I spent a while debugging this with tinyget to read the returned headers), they would actually request a non-existent file from HFS: it seems uTorrent uses the redirect as the new "seed root" and appends the file name no matter what. Since I couldn't find a way to filter against User-Agent (BtWebClient/xxyy being the trouble one), and since it's not a "critical" file to be served (merely a convenience), I just broke the redirect as such.

The second filter is an odd one... for some reason, I seem to have users that are hard-linking to the redirected file path, outside the main Apache file-handler (which is now configured to use the port 281 server). So instead of just silently redirecting these trouble files, I figure I'd direct a little more traffic through the site instead... so if it catches a URL using the old data path, it'd strip off the URL and send them to the "viewfile" page that provides a proper link through the Apache redirector. Strange that people would do that, since hits aren't logged outside the PHP redirection script, and if the hits aren't logged, the file is deleted for inactivity. Stupid people. :D

try  {.replace|:280|:281|%url%.}
Ah-HAH! I knew these wiki pages were lying to me... %url% isn't listed in the HFS symbols page: - I was looking for that, too! Sure enough, %url% works fine... I tried "{.notify|got %url%.}" on "[request]" and it now pops up the path just like I was looking for... bah, documentation. :P

it's a way, but not so obvious: if you are actually serving a page, or an image, you would get a mess merging page and debug data.
Yeah, but it's the way it's always done with PHP and other web-languages... even HTML itself is a mess of data and commands. I guess it's fair enough, though... any output could very well break the file server, and the whole point of HFS is to be a file-server, but it should have some way to indicate to the admin that "hey dumdum, something in your script is trying to tell you something", maybe in the log view window ;)

maybe {.disconnection reason.} can do for you.
Actually, {.notify|blahblah.} has been coming most in handy, as it lets me see the value that the HFS-macros are seeing within the program, so that's probably what I'll end up using in place of echo and friends. :)

all pages' output is supposed to stay in the template.
your case would go in section [special:begin]
Aye, but I haven't even begun dabbling in templates yet... since my use of HFS lies solely in serving direct links to files, I haven't had a purpose to mess with templates, nor do I know where they would come into play outside of an HFS-generated page (not a file)... but I guess they could start to be useful, the more I look into how it works.
maybe we should output to the browser console, like firebug extension. inside the browser but not messing
That would be a great option! It could work if you can find a way to output it... never used Firebug, but I'd think it would use HTTP headers?

Well, the "take this and move it there" I want to do is:
incoming: server:280/any/path/to/file.txt
redirects to: server:281/any/path/to/file.txt
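Just to pin down the rewrite I mean, here's a Python sketch (illustrative only, not HFS code; `move_port` is a made-up helper) of swapping :280 for :281 in the authority part of the URL:

```python
from urllib.parse import urlsplit, urlunsplit

def move_port(url: str, old: int = 280, new: int = 281) -> str:
    """Rebuild a URL with the port swapped. Only touches the
    authority part, so a ':280' in the path is left alone
    (safer than a blind string replace)."""
    parts = urlsplit(url)
    if parts.port != old:
        return url
    netloc = f"{parts.hostname}:{new}"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

print(move_port("http://server:280/any/path/to/file.txt"))
# → http://server:281/any/path/to/file.txt
```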

Figure it's pretty simple, but without being able to read "%item-name%" from [request], how can this be done? What good is [request] without being able to read the... um... request, the %item-name%? :-\

As for the event output, I saw the "debug" menu but it wasn't very useful to me if I can't test it against... you know, actual requests and data... it lets me write a temporary script and read its output but that doesn't do me any good if I'm testing in a non-functional context. That is, how can I test if a URL is being parsed and processed properly, and the browser is receiving the right headers, if I have neither a browser nor a request to work with?

"to output WHERE?" - well, that's easy! Dump it into the response! That's what makes sense to me... if there's something in [request] being output somewhere, it should be output to the client... say, for example, I want to add some kind of page header to every request in an HTML folder... I could add that to the [request] event. Or if I'm debugging, like here, I could use the "non-functions" to dump some debug info out to the browser, like the return value of some macros I'm experimenting with and don't know how they behave. Since I can't test live requests with the "debug" window, that'd be the only way I could do it... unless of course I use {.add to log.} (I'm guessing that's the actual syntax?). But then I'd have to go back to the HFS window to view the response instead of just looking at the browser window I've already got open ;)

Ah, I see... I'll try and help out there however I can with the navigation, maybe I can unify all the various pages with a tree-level navigation at the top of each article. Still haven't been able to get the simple "take this and move it there" system working, though... try as I may, I simply cannot find a way to implement "debugging" in the event scripts, such that I can get HFS to put anything on the screen other than what was requested. That is, if I write:

i like %item-name%
{.notify|do you like %item-name%?.}

... all I get is a balloon popup saying, literally, "do you like %item-name%?". If I'm lucky, that is. I don't get any "i like" output on the browser, even if the file exists, it's like it's ignoring the non-scripting-commands completely. And even if it is by design that event-statements don't output any extra data, shouldn't there be some kind of "echo" function/macro? Also tried the other "{.if ... hey... .}" thing posted in the original "events" thread, and I just couldn't get that to work with my "file.txt" either... :-/
{.if|{.is substring|%item-name%|file.}|{:
  {.notify|got %item-name%.}
:}.}

I feel kinda dumb here... I mean, I look around and it seems like I'm the only guy that just doesn't "get it". Or am I just the only one that's actually tried to get a start from scratch?  :-\ Maybe I could understand this easier if I had some examples of working event scripts to go on... that's one thing the PHP docs are really strong on: "here's an example to show what you can do with this". The wiki docs are almost completely devoid of actual syntax-valid examples (most aren't even properly written - e.g. "after the list | A" - OK, so how would I write that as an actual macro? I'm guessing "{.after the list|parameters.}"?)... I can't be the only one totally lost on this stuff, come on, I'm generally a pretty smart cookie :D

Woohoo! I set up a test HFS on my main PC, with a simple test structure of virtual/real files/folders... then I made "". I added the "[connect] {.disconnect.}" tag to the file, and BAM, it worked perfectly. Then I changed "[connect]" to "[request]", and "{.disconnect.}" to "{.redirect|}", and BAM, I got redirected to Google.

I think I'm getting the hang of this! :)

Yeah, the reason I didn't find those different wiki pages, is because they're not referenced together... the Wiki structure is kinda deficient in that area - for example, on the PHP documentation site, you can see the "topic tree" in a side navigation bar, so I can just go "up" and learn more about "scripting" itself, if I was reading about "scripting commands"... or I can go to the next topic and read about "scripting events"... but on the Wiki, I have "tunnel vision", there are no links to other subjects within the same area. Really makes it kinda hard to learn without editing the URL and hoping I land on a useful page :/

Thanks for the info! Now I can finally get the load off my DSL connection and moved over to the secondary... :D

edit: speaking of wiki, currently having difficulty locating the "what variables are available in the scripting" page, e.g. to locate the command for "give me the requested filename". Yeah, the wiki is really, really, really a PITA to navigate without navigation links...

edit edit: Yeah, I'm lost. There are no links anywhere to any of the scripting commands or anything... see? - there's nothing about scripting, and no back-references from the scripting pages to "see more in this category"... so I'm lost again. Can you please, please, please post some list of Wiki articles about scripting? :(
edit edit edit: OK, found it: - but I had to run a "search" for "symbols" after I read that the term "symbols" is what I'm looking for instead of "variables". But if there was just a link to "Macros", "Symbols", "Events", etc., I would've just clicked on one of 'em :P

Sooo... I've been using HFS for some years now... and I still haven't really dug into how to work with scripting.

I have a simple function in mind that I would love HFS to take care of: providing an alternative return ("save as", or "content-disposition") file name, different from the URL filename.

HFS has been providing the "engine" behind my hosting site, and it does the job pretty well. I love that I can monitor the current requests and transfers with infinitely more granularity and control than the Apache+PHP+MySQL backend that the site runs on. It actually inspired me to bring the site back online for new registrations and uploads again. Cool.

But the site is now missing one function that was provided by its old (and very buggy) PHP-based file-server. That old system would chunk out the file's data inside the PHP script, generating and sending all the file-related headers (MIME, content-length, content-range I/O, modified, etc... I had to generate it all in the script). But it did one thing right: it gave me control over choosing whether the browser will save (content-disposition: attachment) or open (content-disposition: inline) the file... and what the "Save As" filename was (the filename part of the content-disposition header). That way I can actually serve the original filename back to requesting clients... for example, someone uploads "My Great Video (2-11-2011).mkv", the script strips and stores the filename as "mygreatvideo2112011.mkv" and provides that as the file URL, but when someone goes to download the file, it'll be downloaded under the original filename once again.

So what I wanted to do was pretty simple: pass the "original filename" and attach/inline mode into HFS using a query string, and HFS will serve the file accordingly.
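To make the idea concrete, here's a rough Python sketch of that mapping (the stripping rule and the helper names are my own guesses for illustration, not the actual PHP code):

```python
import re

def strip_name(original: str) -> str:
    """Reduce an uploaded filename to a URL-safe form, roughly as the
    PHP front-end described above does (assumed rule: lowercase, keep
    only letters/digits in the base name, keep the extension dot)."""
    base, dot, ext = original.rpartition(".")
    base = re.sub(r"[^a-z0-9]", "", base.lower())
    return f"{base}.{ext.lower()}" if dot else base

def disposition(original: str, attach: bool) -> str:
    """Build the Content-Disposition header that restores the name."""
    kind = "attachment" if attach else "inline"
    return f'Content-Disposition: {kind}; filename="{original}"'

print(strip_name("My Great Video (2-11-2011).mkv"))
# → mygreatvideo2112011.mkv
print(disposition("My Great Video (2-11-2011).mkv", attach=True))
```

The stripped name is what goes in the URL; the original name rides along (here via the query string) and comes back out in the header at download time.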

Since I could find absolutely zero information whatsoever online (probably thanks to Google's search relevance meltdown), that page is all I have to go on. It gives me all the commands I need, but it doesn't even touch on where to actually utilize any of those commands. HFS just serves files, so how do I create an "interpreted script" file? Heck, for the security of my site, I'm a little concerned that that would even be possible... so how on earth do I even use those script commands? I don't have to write my own template, do I? :/

Bug reports / Re: memory usage with update check
« on: October 21, 2008, 05:16:09 AM »
Hey, I think I caught it... ever since I left the "Update info was loaded from local file" dialog untouched, I'd been watching memory usage, and it's started going through the roof. As requests come in, I watch Task Manager's "mem usage" for hfs.exe, and it seems to increase by 16kb for each completed request!

Take a look at it on VNC if you're still around, I emailed and PM'd you the info  ;)

Bug reports / Re: memory usage with update check
« on: October 21, 2008, 01:22:33 AM »
Heh, I haven't even gone to school or anything, so I didn't even know that was a classic ;)

I don't log to file either, I actually hate logging. I just figured logging to screen would actually take more processing power than logging to file (especially in the case of remote desktop)... I just figure that deleting a portion (after writing it) may not release all the allocated memory back to the OS. Then again, I don't really know how it's coded internally, so... hey.

There's nothing spectacular going on at the moment, but I'll PM you the VNC info and you can enjoy the fireworks show on the server as requests flash in and out of the queue in perfect symphony!

Bug reports / Re: memory usage with update check
« on: October 21, 2008, 12:50:56 AM »
Heh, cool! I'd love to see people use HFS on a large scale. It's literally the only decent web server (on ANY platform) with real-time status information - which gives it a HUGE edge over shit like IIS or Apache. If you (or someone) can get PHP working in it - and I think I read somewhere that it'd already been done - then you've got yourself a full featured, under-one-hood, web server... and something seriously huge... on your hands ;)
(Bravo for that, by the way. Excellent freaking work. I'd've donated by now if it were actually making me any money, but I'm unemployed and dirt poor... :( )

As for logging, wow. I could've sworn I saw the list topping off and rotating off the top of the page when it reached a certain limit... maybe the log control itself was capping it off (on screen). I'd say to go for a similar number of lines as in a command prompt window, which appears to be 300 lines. 5000 is just way overkill, and would take far too much CPU power to maintain.

I would imagine you have more experience in this than I do, but if you're not... entirely... sure, I've got a basic idea of how to implement it without memory leaks through the system control or otherwise:
pointer (int) = current position in stack
stack (1) = line of log of a fixed size
stack (...) = lines of log of a fixed size
stack (300) = line of log of a fixed size
stack (301) = EOF
Write the newest log entry at the position in the log array indicated by pointer, and put the EOF marker at (pointer+1) with rollover - i.e. slot 2, or back to slot 1 if the write was at 301. On a timer, cycle through the log buffer (stack) to screen until it reaches EOF (no human needs to read each log entry in realtime; writing to screen on each request takes a lot more processing than just "thinking" it, then writing the screen on a timer). This way, no additional memory ever needs to be allocated to the log, which logically eliminates any possible memory leaks relating to the log ;)
(And it can also be adjustable, by setting an option for log buffer size)
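The scheme above is basically a ring buffer. A quick Python sketch of the same idea (illustrative only, obviously not the Delphi HFS is written in):

```python
class RingLog:
    """Fixed-capacity log: writes overwrite the oldest entry, so
    memory use never grows past the buffer size."""
    def __init__(self, capacity: int = 300):
        self.buf = [None] * capacity
        self.pos = 0          # next slot to write (the "pointer")

    def write(self, line: str) -> None:
        self.buf[self.pos] = line
        self.pos = (self.pos + 1) % len(self.buf)  # rollover at the end

    def render(self):
        """Oldest-to-newest view, e.g. for a screen refresh on a timer."""
        ordered = self.buf[self.pos:] + self.buf[:self.pos]
        return [line for line in ordered if line is not None]

log = RingLog(capacity=3)
for n in range(5):
    log.write(f"request {n}")
print(log.render())  # → ['request 2', 'request 3', 'request 4']
```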

As for persistent connections, I disabled that because people don't actually browse the site - they just connect to grab one file that was on some website they visited, and never need any additional files. I can see how this would come in handy if someone was browsing a website, like this forum, that actively requests content and would otherwise have to make lots of the same connections, but my site is rather different - nobody ever browses the actual site; most, if not all of the connections, are from websites with one of the files embedded from my site. Also, I think Apache is already holding persistent connections, so doing double work is just... euh. The benefit is extraordinarily minimal. But I love that my WRT54GL with Tomato can handle it without too much of a problem; its QoS is excellent. ;)

PO'd = Pissed Off :)

Oh, and yeah, I had been using HFS on my home server previously, but had no problems with it. It wasn't until I "hooked it up" to Hostfile, that the floodgates opened and I ended up straining the heck out of HFS. BTW, if you want to use Remote Desktop or VNC to check out how HFS is performing on the server, you're more than welcome to PM me for the details! :)

Bug reports / Re: memory usage with update check
« on: October 20, 2008, 11:34:12 PM »
I moved the site over to my home server less than a month ago (beginning of October), and noticed problems immediately. Regarding the logging to screen: it does a very slow and poor job of dropping off old entries while adding new ones, often eating up 80-100% CPU (2.4GHz P4) and losing connections after a couple hours of serving files unattended. I shut off all logging except "uploads", "other events", and "browsing", and that fixed that problem. Of course, I also shut off persistent connections; those were a huge problem. I noticed the wild memory usage (with the update popup) shortly after starting to use it, but it didn't bug me much... it only happened once every few days, and until today I blamed it on being bottlenecked while 5 people were downloading large files (like FLVs) at once, turning down everyone's connections but maybe not releasing some memory. But today was different: after getting PO'd at FLVs tying up the connection for hours at a time, I decided to delete all FLVs, and that seems to have solved that problem (Apache handles the 410 errors via the MySQL database; the files in HFS aren't linked directly anywhere on the internet, so I can change the URLs on the fly with zero downtime). But it still crashed this morning, which led me to realize it's probably related to that "use new version?" message.

FWIW, I noticed that memory usage has actually climbed from 460mb to 472mb with no new programs, but that margin in this amount of time isn't really in line with such a huge amount of memory being used that it causes a crash. I also haven't seen it pop up an update message yet (I changed the latest build to "215"). Might have to wait until tomorrow, and if it does crash or hang, I'll know what to do in order to recover it. ;)

edit: This may be of some use to you:
Code: [Select]
connections-columns=IP address;120|File;207|Status;180|Speed;60|Time left;55|Progress;403|
# default: IP address;120|File;180|Status;180|Speed;60|Time left;55|Progress;70|

edit edit: Oh, and by "memory usage", I don't mean HFS' own in-RAM figure; I'm referring to "PF Usage" in Task Manager, the quintessential figure of how much memory is in use on the system. :)

Bug reports / Re: memory usage with update check
« on: October 20, 2008, 10:39:49 PM »
200... 300... 400MB... I usually catch the problem by noticing that the web server (Apache) responds to requests, but connections to HFS (port 81) time out. I then check the server and head straight for Task Manager (before even trying to activate HFS from the taskbar), and see memory usage around 850-900mb when there's only 512mb RAM in the server, and memory usage is usually around 460mb (like it is now).

I'll try dumping that file in the folder and check the server again later tonight, trying my best to mimic a real update that I didn't see until hours later. Hopefully it does the same thing. :)

edit: It may also be relevant that on the server, I use Firefox (to write this post now), uTorrent, Apache, MySQL, "Macallan Mail Solution" mail server, and access the server completely by Remote Desktop (it has no monitor)

Bug reports / Re: memory usage with update check
« on: October 20, 2008, 06:07:14 PM »
Ugh, it looked like when you moved it, I couldn't reply as a guest anymore, so I registered. It didn't tell me my password needed to be over 8 characters (my normal one is 7 and perfectly secure), which is a little annoying as well. Then I came back and noticed that one of the two buttons was actually "reply". *facepalm*

Anyway, it's happened a few times now, and each time, now that I've seen it for maybe a third time, it had the "You are invited to use the new version" message on the screen. So I'm inclined to think it has something to do with the update mechanism, or maybe the message... or something to do with the fact that it pops up, and memory usage goes up ;)
