| Commit message | Author | Age | Files | Lines |
Before, maximum storage usage (assuming all users upload as much as they
could) would depend on the quota, retention period and number of users.
Since the number of users can vary, this makes it hard to know how much
storage will be needed.
Adding a limit on the total overall storage use solves this, making it
simple to set the limit based on how much storage is actually available.
The summary job runs less often than the prune job, since it touches the
entire archive, and it is started before the prune job, since its result
is needed before the first upload.
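A hedged config sketch of how such limits fit together, assuming this is
mod_http_file_share; the option names below are recalled from that
module's documentation, not taken from this commit message, so verify
them against your version:

    Component "upload.example.com" "http_file_share"
        http_file_share_size_limit = 10 * 1024 * 1024    -- max size per upload
        http_file_share_daily_quota = 100 * 1024 * 1024  -- per user and day
        http_file_share_expires_after = 7 * 86400        -- retention period, seconds
        http_file_share_global_quota = 10 * 1024^3       -- cap on total storage used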
Other tests don't require a running prosody and I forgot to start it
when testing.
This uses the (experimental) observe.jabber.network API to
perform external connectivity checks. The idea is to complement
the checks prosodyctl can already do with a (nearly) complete
s2s/c2s handshake from a remote party to test the entire stack.
And to follow existing naming practices better than 'legacy_ssl' did.
Following the style of other options like (c2s|s2s)_require_encryption,
s2s_secure_auth etc.
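For reference, the existing options whose naming style is being followed
(the new option itself is not named in this message):

    c2s_require_encryption = true
    s2s_require_encryption = true
    s2s_secure_auth = true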
Mirroring the c2s 'direct_tls'. Naming things is hard.
direct_tls_s2s_ports = { 5269+1 }
This could be done with multiplexing, or a future additional port
definition.
Goal is to call this if the connection is using Direct TLS, either via
multiplexing or a future Direct TLS S2S port.
Global modules aren't quite considered loaded onto hosts, which causes
confusion in some cases. They are also reported in the log as being
served on http://*:5280/foo, which is likewise a bit confusing and
can't be clicked.
Global modules also have to have their paths configured in the global
section of the config, which can be unexpected.
This global+shared method should give the best of both worlds.
(thanks mjk)
Examples in XEP-0060 suggest that items should be listed in
chronological order, but we get them from the archive in reverse
order.
However, when requesting specific items by id, the results already keep
the requested order, so we don't want to flip them again.
At some point it would likely be best to use the archive API directly
instead of this util.cache-compatible wrapper.
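A minimal sketch of the flip described above (plain Lua, not the actual
pubsub code):

    -- Archive results arrive newest-first; reverse them to get chronological
    -- order, unless specific item ids were requested, in which case the
    -- results already follow the requested order.
    local function order_items(items, requested_ids)
        if requested_ids then return items; end
        local flipped = {};
        for i = #items, 1, -1 do
            flipped[#flipped + 1] = items[i];
        end
        return flipped;
    end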
Hopefully this will eventually be upgraded to RSM, which is why the
argument is called 'resultspec' and is a table.
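Purely illustrative of the intended shape; the field name is an
assumption, not the final API:

    -- a simple limit for now, with room to grow into full RSM later
    local resultspec = { max = 20 };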
As suggested by RFC 7590
Should no longer be used by anything since the conversion of mod_offline
to the archive API in 0.10.0, which was 4 years ago. The line clearing
the property is left for a bit longer in case someone has very old
offline messages or archived data.
To be removed in the future, but not right now. Give the log warning a
chance to prod anyone who might have network_backend="select" in their
config first.
There are also things built on Verse, which uses server_select.lua;
those will need to be updated somehow.
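For anyone who needs to pin a backend explicitly, the relevant setting
looks like this ("select" still works for now, but logs a warning):

    network_backend = "epoll"  -- or "event"; "select" is deprecated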
Previously it would have fallen back to server_select if util.poll was
for some reason not available, which should never happen these days.
And even if it did, it's best to flush that out by throwing loud errors
so users notice; they can then work around it by using select until we
delete that backend.
Fixes that selecting libevent when it is unavailable would fall back to
select instead of epoll, even if the latter is available.
This way, we only have to update it in one place when choosing a new
default.
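A sketch of the idea, not the actual net.server code; the names are
illustrative:

    local default_backend = "epoll"; -- the one place to change the default

    local function choose_backend(requested)
        if requested == "event" and not pcall(require, "luaevent") then
            -- requested backend unavailable: fall back to the shared
            -- default above instead of hard-coding "select" here
            return default_backend;
        end
        return requested or default_backend;
    end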
In a case like this the timer would not be readded:
    addtimer(1, function(t, id)
        stop(id)
        return 1
    end);
MattJ on 09:34:24
> Zash: I think as a first step, offline messages should not be sent to
> clients that request MAM
https://chat.modernxmpp.org/log/modernxmpp/2021-08-31#2021-08-31-8518a542bd283686
Otherwise a message archived by a remote server would be incorrectly
silently discarded. This should be safe from spoofing thanks to
strip_stanza_id earlier in the event chain.
Along with the previous commit, allows building the XML thing yourself,
should you wish to send it yourself or use it in a different context than
an iq reply.
API change: The 'reply' is removed from the event.
This way you get the _prepared_ services and don't have to do that mapping
yourself.
Please don't be accidentally quadratic.
Likely affected rescheduling, but we have no reports of this.
After readding a timer, it would have been issued a new id; rescheduling
would still use the previous id, thus not working.
POSIX is quite explicit regarding the precedence of AND-OR lists [0]:
> The operators "&&" and "||" shall have equal precedence and shall be
> evaluated with left associativity. For example, both of the following
> commands write solely `bar` to standard output:
>     false && echo foo || echo bar
>     true || echo foo && echo bar
Given that, the `prosody.version` target behaves as
    ((((((test -f prosody.release && cp ...) ||
         test -f ...) &&
        sed ...) ||
       test -f ...) &&
      hexdump ...) ||
     echo unknown > $@)
In the case of release tarballs, `prosody.release` does exist, so the
first AND pair is executed. Given that it is successful, the first
`test -f` in the OR pair is skipped, and instead the `sed` in the next
AND pair is executed. `sed` succeeds, as `.hg_archival.txt` exists, so
the second `test -f` in the OR pair is skipped and the `hexdump` in the
next AND pair is executed. Now, given that `.hg` doesn't exist, it
fails, so the last `echo` runs, overwriting `prosody.version` with
`unknown`.
This can be worked around by placing `()` around the AND pairs. I
decided to use conditionals instead, as I think they better communicate
the intention of the block.
[0]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_03
Previously, if the surrounding code was not configuring the TLS context
used by default in net.http, it would not validate certificates at all.
This is not a security issue with prosody itself, because prosody
updates the context with `verify = "peer"` as well as paths to CA
certificates in util.startup.init_http_client.
Nevertheless... let's not leave this pitfall out there in the open.
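For context, a sketch of the kind of client context settings this is
about; the exact table and the path are illustrative, not the real
defaults:

    local client_ssl_defaults = {
        mode = "client";
        verify = "peer"; -- actually validate the server certificate
        cafile = "/etc/ssl/certs/ca-certificates.crt"; -- example CA bundle path
    }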
to offer
Since version 0.4 of XEP-0313, the <fin/> element is sent with the IQ
result and no longer has a queryid attribute.
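A sketch of what that looks like when built with util.stanza; the
function and the query_iq placeholder are illustrative, not the real
code:

    local st = require "util.stanza";

    -- query_iq stands in for the original MAM query iq
    local function build_fin_reply(query_iq)
        -- since XEP-0313 0.4, <fin/> is a child of the iq result itself
        -- and carries no queryid attribute
        return st.reply(query_iq)
            :tag("fin", { xmlns = "urn:xmpp:mam:2", complete = "true" });
    end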
Only relevant because a "dirty" connection (with incoming data in
LuaSocket's buffer) does not count as "readable" according to epoll, so
special care needs to be taken to keep on processing it.
Could allow e.g. a XEP-0198 implementation to efficiently send ack
requests at optimal times without using timers or nextTick.
Allows sneaking things into the write buffer just before it is sent to
the network stack, for example ack requests, compression flushes or
anything else that makes sense to send right after the buffered
stanzas.
This ensures any additional trailing data is included in the same
write, and possibly the same TCP packet. Other methods, such as timers
or nextTick, might not have the same effect, since that depends on
scheduling.
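A hypothetical listener sketch, assuming the hook follows the usual
on<event> callback naming (onpredrain); the callback name, the sessions
table and the ack_requested flag are all made up for illustration:

    local st = require "util.stanza";

    local sessions = {}; -- conn -> session map, maintained elsewhere
    local listener = {};

    function listener.onpredrain(conn)
        local session = sessions[conn];
        if session and session.ack_requested then
            -- written here, the <r/> ends up in the same buffer flush,
            -- and likely the same TCP segment, as the queued stanzas
            conn:write(tostring(st.stanza("r", { xmlns = "urn:xmpp:sm:3" })));
            session.ack_requested = nil;
        end
    end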
Signals that any pending outgoing stanzas that were in the write buffer
have at least been handed off to the kernel, and maybe even sent out
over the network.
See 7a703af90c9c for the corresponding mod_c2s commit.
Storage drivers may issue their own IDs. None of the included drivers
do this at the moment, but the third-party module mod_storage_xmlarchive
has its own special format.
Should prevent further opportunistic write attempts after the kernel
buffers are full and stop accepting writes.
When combined with `keep_buffers = false`, it should avoid repeatedly
recreating the buffer table and concatenating it back into a string
when there's a lot to write.
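The related knobs, as a config sketch; `network_settings` is the
documented way to pass such options to the network backend, but which
keys are honoured depends on the backend and version:

    network_settings = {
        opportunistic_writes = true;
        keep_buffers = false;
    }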
Not currently used for anything, but allowed and could be used in the
future and might be used by other servers.
Makes it so that global values set in the environment are kept beyond a
single line, and thus can be used until the session ends. They still
don't pollute the real global environment, which would be an error
anyway.
Thanks phryk for noticing.
This makes unlimited_jids also work for s2s connections, assuming the
remote server has been identified.
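A config sketch for mod_limits (the JIDs are just examples):

    unlimited_jids = { "admin@example.com", "trusted-peer.example.net" }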
Also enables reuse for s2s, which we will add next.