rejetto forum

version 2.4

rejetto · 449 · 89049


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13308
that feature is enabled by default; you should see the original IP if the reverse proxy is on the same machine (localhost)


Offline danny

  • Tireless poster
  • ****
    • Posts: 192
i don't know how to reproduce your problem, i get no white screen...
The various JavaScript workarounds to catch the server response can't take into account all browser differences.
When the phone gets stuck on a white screen, one can long-touch the URL, edit it, backspace over ?mode=login, and touch the OK/Go button to try again.
That is a rather lengthy ordeal.
Since the response is too small to read, it is likely that the user assumes the server is broken/offline. 
I would rather see a web page instead of these:


Offline MarkV

  • Tireless poster
  • ****
    • Posts: 763
that feature is enabled by default; you should see the original IP if the reverse proxy is on the same machine (localhost)
That works for IPv4, but not at all for IPv6. The headers contain the correct IPv6 address, though. Maybe HFS needs to be adjusted a bit?

Btw., nginx by default uses HTTP/1.0 for the reverse proxy. Add the following line inside the server { } block:

proxy_http_version 1.1;
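To put that line in context, here is a minimal nginx reverse-proxy sketch (the host name, upstream port, and header choices are my assumptions, not from the post); the forwarded headers are what would carry the original IPv4/IPv6 client address through to HFS:

```nginx
server {
    listen 80;
    listen [::]:80;                # also accept IPv6 clients
    server_name example.com;       # hypothetical host name

    location / {
        proxy_pass http://127.0.0.1:8080;   # HFS listening locally (assumed port)
        proxy_http_version 1.1;             # nginx defaults to HTTP/1.0 upstream

        # Pass the original client address (works for IPv4 and IPv6 literals):
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }
}
```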
http://worldipv6launch.org - The world is different now.


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13308
When the phone gets stuck on a white screen, one can long-touch the url, edit the url, backspace over ?mode-login, touch the ok/go button to try again. 

those are not pages, those are responses intended for javascript (ajax), and that's how 2.4 works.
It's not as simple as it was in 2.3, but this was already common 10 years ago.
If you are showing those pages then your tpl is doing something wrong that should be fixed. You can study what the default tpl does, or ask for help in this topic https://rejetto.com/forum/index.php?topic=13326.0
« Last Edit: June 21, 2020, 08:40:28 AM by rejetto »


Offline danny

  • Tireless poster
  • ****
    • Posts: 192
those are not pages, those are responses intended for javascript (ajax), and that's how 2.4 works. It's not as simple as it was in 2.3, but this was already common 10 years ago. If you are showing those pages then your tpl is doing something wrong that should be fixed. You can study what the default tpl does, or ask for help in this topic https://rejetto.com/forum/index.php?topic=13326.0
Is it possible to get a "fallback" action of wait 1.5s then redirect ../ so that the white screen doesn't stay overlong? 
Because not all browsers do the same thing (it's a little worse on mobile).


Offline NaitLee

  • Tireless poster
  • ****
    • Posts: 120
  • Computer brained boy
Is it possible to get a "fallback" action of wait 1.5s then redirect ../ so that the white screen doesn't stay overlong?

After the HFS macros are executed, their results are mostly sent to the client as HTML.
For a redirect you could simply add a meta refresh to the changepwd ajax section, but this is not a recommended fix for your problem.
See if this would help.
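As a sketch of that suggestion (the "changepwd" section name comes from this post; verify it against your own tpl before use), the fallback would be a meta refresh appended to the ajax response, so a browser stranded on the white screen returns to the folder listing after 1.5 seconds:

```html
<!-- Hypothetical fallback: if the browser ends up rendering the ajax response
     as a page, redirect to the parent folder after 1.5 s instead of leaving
     the user stuck on a white screen. -->
<meta http-equiv="refresh" content="1.5; url=../">
```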
Busy in school until late June, 2021.
Check out my template ;)


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13308
Is it possible to get a "fallback" action of wait 1.5s then redirect ../ so that the white screen doesn't stay overlong? 

the user should never see those messages, or the white screen, even for 0.1s, if things are handled properly.
Anything the default tpl does, other tpls can do too. One can just modify the default tpl, or write one from scratch if he is skilled enough.
We should definitely have an easy way to integrate the standard login form in other tpls.


Offline danny

  • Tireless poster
  • ****
    • Posts: 192
We should definitely have an easy way to integrate the standard login form in other tpls.
That would be fantastic! 
For sure, updates would work much better (because the login would update to match). 


Offline danny

  • Tireless poster
  • ****
    • Posts: 192
In preparation for the 2.4 mainline release, I've been doing some testing.

--------------------------------------------------------------------------------
Testing with the default template, I did nothing other than log in and then click Archive.
Within half an hour, the server crashed. The archive function was not confined to a feasible scope.
At the top of the screen, a crash is just one click away.
Patch: previously published in https://rejetto.com/forum/index.php?topic=13060.msg1065272#msg1065272
Result: high impact / please revise before 2.4 mainline

-------------------------------------------------------------------------------
Forced template change is still in effect. The cost is that some server owners may have to ban/disable updates to prevent an automatic/unattended template change (there are some specialty-use applications that could break). It is possible to sporadically alter the server while the owner is absent from the console.
At minimum, we need to make sure that the change can't happen until after the pop-up message is closed/acknowledged/consented to.
Result: medium impact / please revise before 2.4 mainline

==================================================
I didn't test *everything*; but it looks like 2.4 is ready for mainstream, except that you need to clamp the archive to a feasible scope, and postpone the automatic change until after a message box is manually closed/acknowledged... because the server owner is not always present at the console (a difference between testing and real use).



Offline danny

  • Tireless poster
  • ****
    • Posts: 192
Just in case it was needed, I've made a favicon alternative by manually compressing it to 178 bytes and then converting it to base64.
Code: [Select]
<link rel="shortcut icon" href="data:image/gif;base64,R0lGODlhEAAQALMAABYAABMmTDZXa2ZkVRxeqzl7zkZvmkJ71m6DLWKPsah+Ko5zV5WUZ8alc/zsKtPU1CH5BAEAAAEALAAAAAAQABAAAARfMMhJq00vv2GlYVrWDBxlGOFzjAhiGkd2EOSALNNJEEVvkK0FLrArHhKGVrBU3B2VAiGzSUgIrjVBAMDter9cRxjgKIsV5jF53RV7zWKzAoB2zNlxMH7sDpfvehJgEQAAOw==">
However, for [login], [signin], [overload], [unauth] we could 'turn off' the browser's request for the favicon.ico file.
Because it is best practice to avoid external dependencies on those pages.
Code: [Select]
<link rel="icon" href="data:,"> or {.if not|%user%|{:<link rel="icon" href="data:,">:}.}
If login is required for / then there is no access to files before login. In that case, the cached contents of the favicon.ico file are the login page text, not an image; that gets cached, and then 'sometimes' the favicon can't show after login.

A related error can happen for a direct download: the message/response/login page ends up inside your downloaded file instead of the expected content.

At login, my browser was actually fetching 2 HTML pages: one displayed on-screen, and an identical copy inside the favicon.ico file. I guess, if I had only turned off the favicon request for the login page, then it would display normally after logging in.
Edit: Tested, and that guess was true: if any of the data links (above) are used for the login page, then the hfs ico file works normally after login.

I had a similar, but hilarious, glitch on the [overload] page when it tried to fetch the favicon file with an additional request, which counts against the metric. Oops! :)

The base64 link looks like this:


Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13308
It is possible to sporadically alter the server while the owner is absent from the console.
At minimum, we need to make sure that the change can't happen until after the pop-up message is closed/acknowledged/consented to.

it is already so; the "update automatically" option is off by default.
Unless you are talking about the message that version 2.4 displays after it is installed.



Offline rejetto

  • Administrator
  • Tireless poster
  • *****
    • Posts: 13308
Testing with the default template, I did nothing other than log in and then click Archive.
Within half an hour, the server crashed. The archive function was not confined to a feasible scope.
At the top of the screen, a crash is just one click away.
Patch: previously published in https://rejetto.com/forum/index.php?topic=13060.msg1065272#msg1065272
Result: high impact / please revise before 2.4 mainline

i surely have to take care of the 'archive' feature.
I studied your proposal, and it amounts to this:
      {.if|{.%connections% < 50.}|{:{.if|{.%total-kbytes% <= 4000000 .}|{:{.if|{.get|can archive.}|
      <button id='archiveBtn' class='pure-button' onclick='ask("{.!Download these files as a single archive?.}", function() { submit({ selection: getSelectedItemsName() }, "{.get|url|mode=archive.}") })'>

Please guys, it can be lengthy to dig into a huge file just to search for the changes you are vaguely referring to. Report them directly instead.
You are making the sub-folders not work, for a start. That doesn't seem a great solution, and it's not what people expect. You want the archive especially in cases where you want to download whole trees.
Also, the kbytes should not matter in any way. The operation gets heavier with the number of files instead.
I think the archives should be subject to the anti-DoS mechanism.


Offline danny

  • Tireless poster
  • ****
    • Posts: 192
you are not satisfied with just reading %speed-out% ?...
I haven't figured out how to read peak/max/top speed data for use in the template.
But it exists.
HFS already has the top-speed data, in the top-right corner of the screen (in the graph).

...Also, the kbytes should not matter in any way. The operation gets heavier with the number of files instead.
Both, actually. There is a functional size limit. What I listed isn't the maximum, but it is very close.
I guess 4194304 is closest to breakage, or a smaller figure works (if the connection rate is feasible).

As far as validating the size that might be useful on a given connection speed: you'd have to know the top speed to do that; and then you can calculate how many hours or days are required to download the archive.
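That time estimate is simple arithmetic. A hypothetical helper (function name and figures are mine, not from HFS), in JavaScript like the template's client code:

```javascript
// Estimate how long an archive download would take, given the measured
// top speed. Inputs are hypothetical: size in KB, peak speed in KB/s.
function estimateHours(sizeKB, peakKBps) {
  const seconds = sizeKB / peakKBps;
  return seconds / 3600;
}

// Example: the ~4 GB practical limit mentioned above, at 1 MB/s,
// is already more than an hour of transfer.
console.log(estimateHours(4194304, 1024).toFixed(1)); // "1.1"
```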

...it can be lengthy to dig into a huge file...
So true: it is gigantic.

...You are making the sub-folders not work...
I couldn't figure out how to validate the archive function without doing a recursive search to predict the outcome of a recursive archive.
But it was easier to predict the outcome of a non-recursive archive.

If recursive is wanted, I think it is possible to do a search for validation purposes after the archive button is clicked; maybe the archive function could lead to its own page and section? If it were in its own section, then maybe it would be efficient to do .zip instead of .tar?

P.S. Worth another mention: for a good part of the validation, if the user had a timeframe estimate, I think they would click 'cancel' most of the time.


Offline LeoNeeson

  • Tireless poster
  • ****
    • Posts: 729
  • Status: On hiatus (sporadically here)
    • twitter.com/LeoNeeson
the anti-dos mechanism.
@Rejetto: After doing some research on how a typical 'Denial-of-Service' (DoS) attack is done, which basically consists of overloading a server, I want to contribute my overall opinion about the Anti-DoS feature.

IMHO, the current implementation is overkill (I mean, it's nice that you have implemented some anti-DoS, but for me it's way too over-protective). My ideas are:

A) Have different limits for upload, than for download.
B) Give downloads a more relaxed limit than the current one.
C) Count how many requests were done every 5 seconds (read below)
D) Limit uploads to one per second, or one per 5 seconds.
E) Limit repeated downloads of the same file to 1 per 10 seconds.
F) Limit/slow down the serving of pages (not the internal elements)
G) Have a 'maintenance mode' for extreme cases, limiting everyone but the admin

Some points are self-explanatory.

About point "C" (counting how many requests were done every 5 seconds, for the same SessionID): if you give the server admin an option, the admin could configure his server according to how many pages typically need to be requested during normal usage. For example, take Danny's case: if he has a photo gallery and his page needs to serve 20 thumbnails at once (20 is an invented number for this example), then it would be normal to have 20 requests in a 5-second time-frame.
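Point "C" could be sketched as a sliding-window counter keyed by session ID. This is purely an illustration, not HFS code (HFS itself is not JavaScript; the class and parameter names here are invented):

```javascript
// Sliding-window request counter per session (illustration of point "C").
// limit and windowMs would be the admin-configurable knobs described above.
class RequestCounter {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.hits = new Map(); // sessionId -> timestamps of recent requests
  }

  // Returns true if this request is allowed, false if the session
  // already used up its quota inside the current window.
  allow(sessionId, now = Date.now()) {
    const recent = (this.hits.get(sessionId) || [])
      .filter(t => now - t < this.windowMs); // drop expired entries
    if (recent.length >= this.limit) {
      this.hits.set(sessionId, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(sessionId, recent);
    return true;
  }
}

// Example: 20 requests per 5-second window (enough for a 20-thumbnail page).
const counter = new RequestCounter(20, 5000);
let allowed = 0;
for (let i = 0; i < 25; i++) {
  if (counter.allow('session-1', 1000 + i)) allowed++;
}
console.log(allowed); // 20: the 5 extra requests in the same window are rejected
```

Once the window expires, the same session is served again, which matches the "delay rather than ban" spirit of the proposal.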

About point "F": if a page needs to serve only 10 elements (1 html + 1 css + 2 js + 6 images), then it's normal to have 10 requests in 1 or 2 seconds. So an even smarter (automatic) way would be HFS counting how many elements the page has (when parsing the template), and applying the limits on the fly (if the requests exceed the elements found for the requested page). This would be a behavior limit: since HFS knows how many elements the page needs, it could distinguish a legitimate user from an attacker. This, along with limiting how many pages can be served per second, would let us have a more relaxed download rate for elements (like images, css, js) but a stricter limit on requests for new pages. For example, when exploring folders, we could serve only 1 page every 2 seconds, avoiding (or delaying) the case where the user opens several tabs at the same time.

About point "G": finally, if HFS detects it is being attacked (in a very extreme/hard way), then the server could automatically go into 'maintenance mode' for 1 hour (applying a stricter request rate for everyone during that hour), and in that time-frame it will only allow the admin to log in (so he can take care of his server and review the configuration).

This is a good read I recommend about DoS. Also, please read THIS (it explains why 'Rate limitation is not the way to go').

Well, I leave you these ideas.
Do what you think is best... :)
Cheers,
Leo.-
HFS in Spanish (HFS en Español) / How to compile HFS (Tutorial)
» Currently taking a break, until HFS v2.4 gets its stable version.