| Commit message | Author | Age | Files | Lines |
|
E.g.
VirtualHost "example.com"
    https_name = "xmpp.example.com"
|
Cuts down on a ton of debug logs
|
Right thing to do, rather than hardcoding '/'
|
Prevents a false positive match on files with fullchain.pem as a suffix
|
Originally added in 5b048ccd106f
Merged incorrectly in ca01c449357f
|
See #533
|
Makes it easier to reuse, e.g. for SSE, WebSockets or other custom
responses.
|
The metric subsystem of Prosody has had some shortcomings from
the perspective of the current state-of-the-art in metric
observability.
The OpenMetrics standard [0] is a formalization of the data
model (and serialization format) of the well-known and
widely-used Prometheus [1] software stack.
The previous stats subsystem of Prosody did not map well to that
format (see e.g. [2] and [3]); the key reason is that it was
trying to do too much math on its own ([2]) while lacking
first-class support for "families" of metrics ([3]) and
structured metric metadata (despite the `extra` argument to
metrics, there was no standard way of representing common things
like "tags" or "labels").
Even though OpenMetrics has grown from the Prometheus world of
monitoring, it maps well to other popular monitoring stacks
such as:
- InfluxDB (labels can be mapped to tags and fields as necessary)
- Carbon/Graphite (labels can be attached to the metric name with
  dot-separation)
- StatsD (as for Graphite, assuming that Graphite is used as the
  backend, which is the default)
The util.statsd module has been ported to use the OpenMetrics
model as a proof of concept. An implementation which exposes
the util.statistics backend data as Prometheus metrics is
ready for publishing in prosody-modules (most likely as
mod_openmetrics_prometheus to avoid breaking existing 0.11
deployments).
At the same time, the previous measure()-based API had one major
advantage: It is really simple and easy to use without requiring
lots of knowledge about OpenMetrics or similar concepts. For that
reason as well as compatibility with existing code, it is preserved
and may even be extended in the future.
However, code relying on the `stats-updated` event as well as
`get_stats` from `statsmanager` will break because the data
model has changed completely; in the case of `stats-updated`, the
code will simply not run (as the event was renamed in order
to avoid conflicts), while the `get_stats` function has been removed
completely (so attempting to use it will cause a traceback).
Note that the measure_*_event methods have been removed from
the module API. I was unable to find any uses or documentation
and thus deemed they should not be ported. Re-implementation is
possible when necessary.
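For illustration, a "family" of metrics with labels looks roughly like
this in the OpenMetrics text exposition format (the metric name and
label below are invented for the example; this is not Prosody's actual
output):

    # TYPE c2s_connections gauge
    # HELP c2s_connections Number of currently connected clients.
    c2s_connections{host="example.com"} 42
    c2s_connections{host="chat.example.com"} 7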
[0]: https://openmetrics.io/
[1]: https://prometheus.io/
[2]: #959
[3]: #960
|
E.g. `prosodyctl shell module reload disco example.com` becomes
equivalent to `prosodyctl shell 'module:reload("disco", "example.com")'`.
Won't work for every possible command, but it reduces shell-quoting
problems for the most common commands.
|
This can happen when opportunistic_writes is enabled and the session
gets destroyed while writing that tag.
Thanks Ge0rG
|
Should fix a traceback on attempted use after destruction, in cases
where opportunistic_writes was in use.
Thanks Ge0rG
|
(Thanks Ge0rG)
Could happen with the 'opportunistic_writes' setting, since then the
stream opening is written directly to the socket, which can in turn
trigger session destruction if the socket somehow got closed just after
the other side sent its stream header.
The error happens later, when it tries to index
`hosts[session.host].events` with `session.host == nil`.
|
If network_settings.opportunistic_writes is enabled then this would
previously have resulted in two socket writes, and possibly two packets
being sent. This caused some issues in older versions of Gajim, which
apparently expected the stream opening in the first packet, and thus it
could not connect.
With this change and opportunistic_writes enabled, the first packet
should contain both the xml declaration and the stream open tag.
Without opportunistic_writes, there should be no observable change.
Tested with Gajim 1.1.2 (on same machine). Unsure if loopback behaves
differently than the network here.
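With the change, the first write is expected to look roughly like the
following single chunk (attributes abbreviated and values invented; the
exact stream header Prosody sends depends on the session):

    <?xml version='1.0'?><stream:stream xmlns='jabber:client'
      xmlns:stream='http://etherx.jabber.org/streams' version='1.0'
      from='example.com' id='...'>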
|
When set, no periodic statistics collection is done by
core.statsmanager; instead, some module is expected to call collect()
when it suits. Obviously only one such module should be enabled.
Quoth jonas’
> correct way is to scrape the internal sources on each call to /metrics
> in the context of Prometheus
"manual" as opposed to "automatic", from the point of view of
statsmanager.
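A minimal config sketch of how this mode might be selected, assuming
the statistics and statistics_interval options (as documented for later
Prosody versions) are the relevant knobs:

    statistics = "internal"
    -- no periodic collection by statsmanager; an exporter module (e.g.
    -- the Prometheus one mentioned earlier) calls collect() itself,
    -- for instance on each scrape of its /metrics endpoint
    statistics_interval = "manual"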
|
This many returns deserve their own line.
`session["sasl_handler"]` style isn't used anywhere else.
|
We don't use the quoted table indexing style that often; it's not needed
here, and it's enough to check for falsiness rather than `nil`.
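A sketch of the style being described, borrowing the field name from
the neighbouring commit message (the real diff may differ):

    -- before: quoted indexing and an explicit nil check
    if session["sasl_handler"] == nil then return end
    -- after: plain field access and a falsiness check
    if not session.sasl_handler then return end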
|
Unclear how this happens.
|
Fixes #1515
|
Fixes #1507
|
Zash> Btw, this conditional and loop, shouldn't it be covered by the timing measurement?
Zash> Isn't that where all the util.statistics work is done?
MattJ> Yeah, it should
Zash> ("the", but there's two ... which one‽)
MattJ> Yeah... not sure :)
MattJ> Processing I guess
|
Gone with s2sout.lib in 756b8821007a
|
s2sout.lib was removed in 756b8821007a along with srv_hosts and
srv_choice
|
Lets an external upload service know this so it can do expiry itself.
Could possibly have been calculated based on the token expiry or
issuance time, but explicit > implicit.
|
In case an external upload service wants to have the original creation
time, or calculate the token expiry itself.
|
| | |
|
| |
| |
| |
| | |
util.error.coerce() doesn't work well with iolib
|
It's annoying that Lua interpolates the filename into the error message.
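For example, the error message returned by io.open on failure already
embeds the filename:

    local f, err = io.open("/no/such/file")
    -- f is nil; err is something like
    -- "/no/such/file: No such file or directory"
    print(err)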
|
Use case: enabling a module that provides a virtual occupant object for
bots.
Previously, if there was no occupant, either some other part of MUC
would reject the message or `occupant.nick` would have caused an error.
|
Not sure why these were here to begin with, since it does use the 'self'
argument and has done so since they were added.
|
Maybe the original idea was that you would measure storage separately?
|
Background: I found a few files in my store that did not match the size
recorded in the slot, so I needed a way to check which those were.
As it was a bit too much to type into the shell, I added it here instead.
|
This just gave an unhelpful 500 error.
It would be nice to have some wrapper code that could untangle the
filename embedded in the io library's error messages.
|
Should help inform decisions on whether the cache size should be increased.
|
This is neat: O(1) reporting; why don't we do this everywhere?
Gives you an idea of how much stuff is in the caches, which may help
inform decisions on whether the size is appropriate.
|
Fixes the issue that luarocks defaults to installing the rock for its
own runtime version of Lua.
This only works with luarocks 3.x; it does nothing with the 2.x
currently available from Debian.
|
It had a name; using attr() broke it.
|
This saves awkward fiddlery with varargs and also echoes the
signature of pcall/xpcall.