| Commit message | Author | Age | Files | Lines |
|
This is as per the HTTP standard [1]. Thankfully, the REQUIRED
WWW-Authenticate header is already generated by the code.
[1]: https://datatracker.ietf.org/doc/html/rfc7235#section-3.1
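The RFC 7235 rule can be sketched as follows (illustrative Python, not the module's actual Lua code; the function name and Bearer scheme are assumptions):

```python
def unauthorized_response(realm):
    """Per RFC 7235, a 401 response MUST carry a WWW-Authenticate challenge."""
    return {
        "status": 401,
        "headers": {
            # The challenge tells the client which scheme and realm to use.
            "WWW-Authenticate": 'Bearer realm="{}"'.format(realm),
        },
    }
```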
|
This allows monitoring, especially as there's not much in the way of
hard numbers on how much space gets used.
|
Error in util.human.units.format because of B(nil) when the global quota
is unset.
|
Before, maximum storage usage (assuming all users upload as much as they
could) would depend on the quota, retention period and number of users.
Since the number of users can vary, this makes it hard to know how much
storage will be needed.
Adding a limit on the total overall storage use solves this, making it
simple to set it to some number based on what storage is actually
available.
The summary job runs less often than the prune job since it touches the
entire archive, and it is started before the prune job since it's needed
before the first upload.
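The arithmetic behind this can be sketched as follows (illustrative Python with hypothetical names and numbers, assuming a daily per-user quota; not the module's code):

```python
def worst_case_storage(daily_quota, retention_days, num_users):
    # Without a global cap, the worst case grows with the number of users,
    # which the admin cannot predict in advance.
    return daily_quota * retention_days * num_users

def effective_cap(daily_quota, retention_days, num_users, global_limit):
    # A global limit bounds usage regardless of how many users sign up.
    return min(worst_case_storage(daily_quota, retention_days, num_users),
               global_limit)
```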
|
X-Frame-Options was replaced by the Content-Security-Policy
'frame-ancestors' directive, but Internet Explorer does not support that
part of CSP.
Since it's just one line, it doesn't hurt to keep it until some future
spring cleaning event :)
|
Creates buckets up to the configured size limit or 1TB, whichever is
smaller, e.g. {1K, 4K, 16K, ... 4M, 16M}
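Judging by the example set, the buckets grow by powers of four starting at 1 KiB; a hypothetical sketch of that generation (not the module's actual code):

```python
def make_buckets(size_limit, hard_cap=2**40):
    """Generate bucket thresholds 1K, 4K, 16K, ... up to the smaller of
    the configured size limit and 1 TiB (hard_cap)."""
    buckets = []
    size = 1024
    while size <= min(size_limit, hard_cap):
        buckets.append(size)
        size *= 4
    return buckets
```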
|
Turns out you can seek past the end of the file without getting an
error.
This also rejects an empty range instead of sending the whole file.
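The check can be sketched like this (illustrative Python; the real module is Lua and its names differ):

```python
def validate_range(start, end, file_size):
    """Validate a byte range; 'end' is exclusive here, None means 'to EOF'."""
    if start >= file_size:
        return False  # seeking past EOF raises no error, so check explicitly
    if end is not None and end <= start:
        return False  # empty range: reject instead of sending the whole file
    return True
```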
|
Only a starting point is supported, due to the way response:send_file()
sends everything it gets from the provided file handle and has no way
to specify how much to read.
This matches what Conversations appears to be doing.
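Parsing only the open-ended form can be sketched as (hypothetical helper; the module's actual parsing may differ):

```python
import re

def parse_open_range(header_value):
    """Accept only 'bytes=N-' (a start offset with no end), returning N.
    Closed ranges can't be honored when the file handle is streamed whole,
    so anything else returns None (unsupported)."""
    m = re.fullmatch(r"bytes=(\d+)-", header_value)
    return int(m.group(1)) if m else None
```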
|
Lets an external upload service know this so it can handle expiry
itself. It could possibly have been calculated from the token expiry or
issuance time, but explicit > implicit.
|
In case an external upload service wants to have the original creation
time, or calculate the token expiry itself.
|
util.error.coerce() doesn't work well with iolib
|
It's annoying that Lua interpolates the filename into the error message.
|
Maybe the original idea was that you would measure storage separately?
|
Background: I found a few files in my store that did not match the size
recorded in the slot, so I needed a way to check which those were.
As it was a bit too much to type into the shell, I added it here instead.
|
This just gave an unhelpful 500 error.
It would be nice to have some wrapper code that could untangle the
filename embedded in the io library's errors.
|
This is neat, O(1) reporting, why don't we do this everywhere?
Gives you an idea of how much stuff is in the caches, which may help
inform decisions on whether the size is appropriate.
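The idea is simply a counter maintained on insert and eviction, so reporting the size never walks the cache; a toy sketch (not the actual util.cache API):

```python
class CountedCache:
    """Toy cache that maintains an item counter, making size reporting O(1)."""
    def __init__(self):
        self._items = {}
        self.count = 0  # kept in sync on every mutation, never recomputed

    def set(self, key, value):
        if key not in self._items:
            self.count += 1
        self._items[key] = value

    def evict(self, key):
        if self._items.pop(key, None) is not None:
            self.count -= 1
```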
|
If none of the expired files could be deleted, then it's a waste of an
API call to try to remove any of the metadata at all.
|
If any of the expired files could not be deleted then we should not
forget about that; we should complain loudly and try again.
The code got this backwards and would have removed only the entries
referring to still-existing files.
Test procedure:
1. Upload a file
2. chown root:root http_file_share/
3. In uploads.list, decrease 'when' enough to ensure expiry
4. Reload mod_http_file_share
5. Should see an error in the logs about failure to delete the file
6. Should see that the metadata in uploads.list is still there
7. chown http_file_share/ back to the previous owner
8. Reload mod_http_file_share
9. Should see logs about successful removal of expired file
10. Should see that the metadata in uploads.list is gone
11. Should see that the file was deleted
|
attempt to index a nil value (local 'filetype') caused by the :gsub call
|
E.g. curl will ask for this when sending large uploads. Removes a delay
while it waits for an error or go-ahead.
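The behaviour can be sketched as (hypothetical helper and status codes; not the module's actual code):

```python
def interim_response(headers, slot_ok):
    """Answer an 'Expect: 100-continue' header before the body arrives."""
    if headers.get("Expect", "").lower() != "100-continue":
        return None  # nothing to do for ordinary requests
    # Answering immediately removes the delay while curl waits for a
    # go-ahead; an early error also saves it from sending the whole body.
    return "100 Continue" if slot_ok else "403 Forbidden"
```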
|
'filetype' is optional, so having it last seems sensible.
'slot' is pretty important, so moving it earlier seems sensible.
|
This should ensure that cache entries are kept until the oldest file
that counted toward the last 24h becomes older than 24h.
|
Daily instead of total quotas should be more efficient to calculate.
Still O(n), but with a smaller n, and less affected by the total
retention period.
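The difference can be sketched as follows (assuming (timestamp, size) pairs; illustrative only):

```python
DAY = 86400  # seconds in 24 hours

def daily_usage(uploads, now):
    """Sum only uploads from the last 24 hours. Still O(n), but n is one
    day's uploads rather than everything in the retention period."""
    return sum(size for when, size in uploads if now - when < DAY)
```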
|
The last comma or semicolon isn't required, but it makes the diffs nicer
once you add another item after it.
|
No expired ... what? Could be inferred from the module logging it, but
better to be explicit.
|
Otherwise uploads taking longer than 5 minutes would be rejected on
completion, and that's probably annoying.
Thanks jonas’
|
(thanks jonas’)
Otherwise people complain about browser 'Save as' dialog.
|
For faster access by avoiding archive ID.
No benchmarks were harmed in the making of this commit.
... no benchmarks were performed at all.
|
A step towards adding caching, which will unpack into the same
variables.
|
scansion)
|
Similar to the mod_mam cleanup job
|
Distinct from '.dat' used by datamanager / internal storage for Lua
object storage, so that they can't easily be loaded by accident that way.
|
It belongs with the header more than with the token itself.
|