rejetto forum

version 2.4

rejetto · 398 · 65941


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
Leo, why are you suggesting different limits for uploads and downloads when there is no limit at all on uploads and downloads? :)
guys, this mechanism only affects file listing.
If you want to serve 20 thumbnails at once, or 200, it doesn't matter, there is no limit.
HFS doesn't try to detect whether you are under attack; it just limits a single address on one very specific operation, because that operation is heavy.
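The per-address idea can be sketched like this in JavaScript (purely illustrative; HFS is a Delphi application and these names are hypothetical, not its actual internals):

```javascript
// Limit how many file-listing operations a single address may run at
// once; all other traffic (uploads, downloads) is untouched.
function makeListingLimiter(maxConcurrent) {
  const active = new Map();                  // address → in-flight listing count
  return {
    tryStart(ip) {
      const n = active.get(ip) || 0;
      if (n >= maxConcurrent) return false;  // this address must wait
      active.set(ip, n + 1);
      return true;
    },
    finish(ip) {
      active.set(ip, Math.max(0, (active.get(ip) || 1) - 1));
    },
  };
}
```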


Offline danny

  • Tireless poster
  • ****
    • Posts: 166
    • View Profile
Leo, why are you suggesting different limits for uploads and downloads when there is no limit at all on uploads and downloads? :)  guys, this mechanism only affects file listing.
I think it works well.  The busy error message is proactive/automated.  I like that! 
The self-fix error message and list limiter are of especially high quality.

You know how the Throwback 404 works: it displays the 404 page, waits a second, then redirects to ../ (looping up the tree until a valid page is found).
Yes, this is the sort of thing that I like. 
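That climb-up-the-tree redirect can be sketched as follows (a hypothetical helper, not the actual Throwback template code):

```javascript
// Given a URL path, return its parent directory; a 404 page can wait
// a second and redirect here, looping up until a valid page is found.
function parentOf(path) {
  const trimmed = path.replace(/\/+$/, '');  // drop any trailing slash
  if (!trimmed) return '/';                  // already at the root
  return trimmed.slice(0, trimmed.lastIndexOf('/') + 1) || '/';
}
// In a 404 template one might do:
//   setTimeout(() => location.href = parentOf(location.pathname), 1000);
```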

And the thumbnails thing isn't all about limits; it's exactly the opposite (coding the page so that limits aren't required).
For example, Throwback 14 for HFS 2.3 required connection limits in the range 5 to 9 to prevent a flood of requests.
However, Throwback 14 for HFS 2.4 has timers to reduce requests per second, so that limiter is not required.
The limiting factor is not particular to HFS; it is exactly the same for every single-threaded server, including Nginx for Windows.
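The timer approach can be sketched like this (illustrative only, with hypothetical names; not the actual template code):

```javascript
// Load thumbnail URLs one at a time, with a fixed delay between
// requests, so requests-per-second stays bounded without any
// server-side connection limit.
function loadSequentially(urls, delayMs, loadOne) {
  let i = 0;
  (function next() {
    if (i >= urls.length) return;
    loadOne(urls[i++]);            // fire one request
    setTimeout(next, delayMs);     // wait before firing the next
  })();
}
// e.g. loadSequentially(thumbUrls, 100, url => { img.src = url; })
```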

P.S.  Leo suggested a good enhancement to the limiter menu: requests per second.


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
i think we should have a timeout on the file listing operation, like: the list is built, but if it's not finished after a minute, it stops and gives you what has been found.
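The idea can be sketched like this (hypothetical names, not HFS's actual code; the clock is injectable just to make the sketch testable):

```javascript
// Build a listing but stop at a deadline, returning whatever has
// been collected so far instead of failing outright.
function listWithDeadline(entries, deadlineMs, now = Date.now) {
  const start = now();
  const out = [];
  for (const e of entries) {
    if (now() - start > deadlineMs) break;  // time's up: partial list
    out.push(e);
  }
  return out;
}
```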


Offline danny

  • Tireless poster
  • ****
    • Posts: 166
    • View Profile
https://github.com/rejetto/hfs2/releases/tag/v2.4-rc06

- some security fixes
- limit file listing operations to 1 minute
Can you make an exception for recursive search? 

Sometimes I would do a search through a library, and the result was still wanted even if it took more than a minute.

I checked, and search broke off at a minute with only a page full of results.  Ouch.
I also checked with RC5: my search needs almost 7 minutes, because I have a quarter-million files.   Please repair RC6.

P.S. The time limit didn't prevent archive hangs; so, feature request: limit archives to 4 GB.
Note:  List limits are unnecessary for Takeback and Throwback, because those don't block reflow.
Also consider:  Remove the minute limit and replace it with a kB limit, because connection speeds and times vary.


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
i will remove the limit on the search and consider making it just an option.
the number of results can even be zero if you search billions of files; the point was how long you keep the server busy.
of course if it's you with your computer, you don't want limits.
sadly, if you launch a very long request but decide to stop it, just closing the tab won't do, because the server receives no communication about it (tested with Chrome).
This can leave you unable to do anything else.

I'm not sure what you mean by "archive hangs", and I don't know how to reproduce your problem with archive size: i just tested getting an 11 GB archive (32 files), and the server was still working after it. If it's not saturating its single-core CPU, it will also still respond and let me browse the rest of the files while downloading.

I just made another test: a 9.6 GB archive with 7k+ files in it, all ok. You should find out what really is causing this 4 GB limit on your side, because sometimes things are not what they seem. I don't know... maybe you are saving to a FAT32 drive? Or with a very old browser?

I'm having a serious problem with archives, but it doesn't match the description you sent me privately.
Quote
A minute was long enough to try 2 TB, but only 4 GB is feasible, so the server quits responding to connections, even though the UI is alive and set to on.   Manually clicking off and then on restores connectivity.

what i get instead is that too many files in the archive (and not their size) hang HFS forever, with no UI and no web, and i can only kill it.
On my computer it was ok up to 50k files, but since this number probably depends on the specific machine, I thought it was better to limit on the time it takes to collect them.
That's why a "kB" limit wouldn't help.
« Last Edit: June 30, 2020, 09:32:20 AM by rejetto »


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
i should always let you preview a new release when i introduce strange changes :)
this is a preview with just the limit on the search removed
https://drive.google.com/file/d/1YaqQO5mQpDZPXUvcqzgEGdM60J-jEcUI/view?usp=sharing
i'll be back on it asap, but now i have to work
« Last Edit: July 01, 2020, 11:31:03 AM by rejetto »


Offline danny

  • Tireless poster
  • ****
    • Posts: 166
    • View Profile
i should always let you preview new release when i introduce strange changes :) this is a preview with just the limit on the search removed https://drive.google.com/file/d/1hqz3kO-LdnQIa5ARVwKInB_N3--L00TO/view?usp=sharing i'll be back on it asap but now have to work
THANKS!  The security of RC7 is maximized.  But the cost was that I can't get a file list.  Maybe it needs a version-change cleanup?  brb... (clearing cache and rebooting)... Well, still no files...
P.S.  With RC6, if the minute(s) limit had been in the UI limits menu and hfs.ini, then it could have been good.
P.P.S. RC5 is still best-yet. 
...I'm having a serious problem with archives but doesn't match the description you sent me privately.  what i get instead is that too many files are in the archive (and not their size) hang HFS forever, no UI and no WEB and i can only kill it...
This isn't bad, because the UI is alive.  You could, at an interval, check whether it is responsive and, if not, cycle it off/on.
Convenience:  No need for a minutes limit.  Just cycle off/on whenever it's not responsive.
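The watchdog danny describes can be sketched like this (hypothetical helper names; HFS exposes no such scripting API, so this is only the shape of the idea):

```javascript
// Probe the server once; if the probe fails or answers falsy,
// call restart() to cycle the server off and on.
async function watchOnce(probe, restart) {
  try {
    if (await probe()) return true;   // responsive, nothing to do
  } catch { /* probe failed or timed out */ }
  restart();                          // unresponsive: cycle off/on
  return false;
}
// In practice: setInterval(() => watchOnce(pingServer, cycleServer), 30000)
```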


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
sorry to have wasted your time, i missed a basic check
https://drive.google.com/file/d/1YaqQO5mQpDZPXUvcqzgEGdM60J-jEcUI/view?usp=sharing

P.P.S. RC5 is still best-yet. This isn't bad, because the UI is alive. 

nope, it's not alive, that's what i meant with "no UI and no WEB and i can only kill it".
That IS bad :) And when the app is frozen, it cannot recover by itself.
« Last Edit: July 01, 2020, 11:33:25 AM by rejetto »


Offline danny

  • Tireless poster
  • ****
    • Posts: 166
    • View Profile
Our archive testing is going differently.   When I ask it for too much and then cancel, the UI is alive, but I can't connect.
For me, an archive of any number of files is fine, as long as the sum is less than 4 GB (I believe a terabytes-sized archive is not feasible over a home internet connection).  And, for users, a .tar file isn't desirable; so the current trouble is not very relevant until we go for .zip.

I think that, given {.top-speed.} information, it may be possible to limit both kB and time for archives.
Or
you could use the single-thread standard of diverting: every x kB or every x milliseconds, divert to update the UI (key scan, mouse scan, screen update), a sort of scheduler.  That sort of interleaving (task swap/scheduler) is standard for single-threaded applications, to keep the UI alive while the app is also doing other work.  I see that HFS has one for downloads, because it can download several files simultaneously (actually with task-swapped, interleaved action).   So it exists, but probably should be modified to include the UI?
aka:  instead of timing out after too long and stopping, time out far more often and then continue (handing off to a UI keepalive).
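That interleaving pattern can be sketched as follows (a minimal sketch using the JavaScript event loop as the "scheduler"; HFS itself is Delphi, so the names here are illustrative):

```javascript
// Do a slice of work, then yield to the event loop so the UI (or
// other requests) can run, then continue where we left off.
async function processInSlices(items, work, sliceMs = 15) {
  let sliceStart = Date.now();
  for (const item of items) {
    work(item);                                  // one unit of real work
    if (Date.now() - sliceStart >= sliceMs) {
      await new Promise(r => setTimeout(r, 0));  // yield: keepalive runs here
      sliceStart = Date.now();
    }
  }
}
```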
...[recursive archive]...That IS bad...
It could be a little simpler, because the user didn't want .tar files.  So, I suggest triage (non-recursive + size limit) for .tar files, and then go for .zip later.

I'm really impressed by the new HFS 2.4 RC7, which I tested with Throwback 14.  RC7 goes much faster than expected.  That doesn't quite knacker the recursive .tar archive problem of the inbuilt template.  But the really fast speed has me thinking: this is candy.
2.4 is ready to go!!!


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
guys, in order to give other tpls an easy way to have the login feature (something i want for 2.4), i'm trying to make the default tpl work without jquery. I'll let you know.


Offline dj

  • Tireless poster
  • ****
    • Posts: 217
  • 👣 🐾
    • View Profile
    • PWAs
perhaps this is a good start
Code: [Select]
const $ = document.querySelector.bind(document);
Also remember
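A few more vanilla stand-ins for the jQuery calls a template typically needs could look like this (a sketch with hypothetical helper names, written against any DOM-like root so it isn't tied to the browser):

```javascript
// Tiny query helpers: $ returns one element, $$ returns a real array
// of all matches (unlike the live NodeList jQuery users may expect).
const makeQuery = root => ({
  $:  sel => root.querySelector(sel),
  $$: sel => Array.from(root.querySelectorAll(sel)),
});
// In a browser template:
//   const { $, $$ } = makeQuery(document);
//   $$('#files a').forEach(a => a.addEventListener('click', onFileClick));
```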
« Last Edit: July 06, 2020, 09:46:24 AM by dj »


Offline NaitLee

  • Occasional poster
  • *
    • Posts: 82
  • Computer brained boy
    • View Profile
Is the "archive selected files" missing in 2.4? I cannot achieve that...
Thanks for noticing me :D , I'm just someone normal like others here :D
But don't forget to check out my template ;P


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13275
    • View Profile
the button is the same, "archive", but first select some files


Offline danny

  • Tireless poster
  • ****
    • Posts: 166
    • View Profile
Feature removal request:
Speed limit
Speed limit from single address
Max connections
Max connections from single address

I'm asking that these be deleted from HFS 2.4's menu, because they should be validation data, not limiting factors.