| Commit message | Author | Age | Files | Lines |
| | |
The metric subsystem of Prosody has had some shortcomings from
the perspective of the current state-of-the-art in metric
observability.
The OpenMetrics standard [0] is a formalization of the data
model (and serialization format) of the well-known and
widely-used Prometheus [1] software stack.
The previous stats subsystem of Prosody did not map well to that
format (see e.g. [2] and [3]); the key reason is that it was
trying to do too much math on its own ([2]) while lacking
first-class support for "families" of metrics ([3]) and
structured metric metadata (despite the `extra` argument to
metrics, there was no standard way of representing common things
like "tags" or "labels").
Even though OpenMetrics has grown from the Prometheus world of
monitoring, it maps well to other popular monitoring stacks
such as:
- InfluxDB (labels can be mapped to tags and fields as necessary)
- Carbon/Graphite (labels can be attached to the metric name with
dot-separation)
- StatsD (as for Graphite, assuming Graphite is used as the
backend, which is the default)
The util.statsd module has been ported to use the OpenMetrics
model as a proof of concept. An implementation which exposes
the util.statistics backend data as Prometheus metrics is
ready for publishing in prosody-modules (most likely as
mod_openmetrics_prometheus to avoid breaking existing 0.11
deployments).
At the same time, the previous measure()-based API had one major
advantage: It is really simple and easy to use without requiring
lots of knowledge about OpenMetrics or similar concepts. For that
reason as well as compatibility with existing code, it is preserved
and may even be extended in the future.
However, code relying on the `stats-updated` event as well as
`get_stats` from `statsmanager` will break because the data
model has changed completely; in case of `stats-updated`, the
code will simply not run (as the event was renamed in order
to avoid conflicts); the `get_stats` function has been removed
completely (so attempting to use it will cause a traceback).
Note that the measure_*_event methods have been removed from
the module API. I was unable to find any uses or documentation
and thus deemed they should not be ported. Re-implementation is
possible when necessary.
[0]: https://openmetrics.io/
[1]: https://prometheus.io/
[2]: #959
[3]: #960
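For illustration, roughly how the two levels of the API are meant to be
used after this change (a sketch based on this description only; the
exact names, e.g. `module:metric` and `with_labels`, may not match the
final API):

    -- simple, preserved measure() style:
    local connections = module:measure("connections", "amount");
    connections(42);

    -- OpenMetrics-style metric family with labels:
    local received = module:metric("counter", "received_stanzas", "",
        "Stanzas received", { "stanza_kind" });
    received:with_labels("message"):add(1);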
| | |
E.g. `prosodyctl shell module reload disco example.com` becomes
equivalent to `prosodyctl shell 'module:reload("disco", "example.com")'`.
Won't work for every possible command, but reduces the number of shell
quoting problems for most common commands.
| | |
Can happen when opportunistic_writes is enabled and the session gets
destroyed while writing that tag.
Thanks Ge0rG
| | |
Should fix a traceback on attempted use after destruction, in cases where
opportunistic_writes was in use.
Thanks Ge0rG
| | |
(Thanks Ge0rG)
Could happen with the 'opportunistic_writes' setting, since the stream
opening is then written directly to the socket, which can in turn
trigger session destruction if the socket somehow got closed just after
the peer sent its stream header.
The error happens later, when it tries to index `hosts[session.host].events`
while `session.host` is nil.
| | |
If network_settings.opportunistic_writes is enabled then this would
previously have resulted in two socket writes, and possibly two packets
being sent. This caused some issues in older versions of Gajim, which
apparently expected the stream opening in the first packet, and thus it
could not connect.
With this change and opportunistic_writes enabled, the first packet
should contain both the xml declaration and the stream open tag.
Without opportunistic_writes, there should be no observable change.
Tested with Gajim 1.1.2 (on same machine). Unsure if loopback behaves
differently than the network here.
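Roughly, the idea is to build the whole opening in one string and hand
it to the connection in a single write. Sketch only; `conn` and
`stream_header` are placeholders, not the variables used in this
changeset:

    -- one write means one packet when opportunistic_writes is enabled
    local opening = "<?xml version='1.0'?>" .. tostring(stream_header);
    conn:write(opening);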
| | |
When set, no periodic statistics collection is done by
core.statsmanager; instead, some module is expected to call collect()
when it suits. Obviously only one such module should be enabled.
Quoth jonas’
> correct way is to scrape the internal sources on each call to /metrics
> in the context of Prometheus
"manual" as opposed to "automatic", from the point of view of
statsmanager.
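A rough sketch of the intended usage; the exact option name and
collect() entry point here are assumptions based on this description:

    -- in the config file:
    statistics = "internal"
    statistics_interval = "manual"  -- no periodic collection

    -- in the exporting module, e.g. when /metrics is scraped:
    local statsmanager = require "core.statsmanager";
    statsmanager.collect();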
| | |
This many returns deserve their own line.
`session["sasl_handler"]` style isn't used anywhere else.
| | |
We don't use the quoted table indexing style that often; it's not needed
here and it's enough to check for falsiness rather than `nil`.
| | |
Unclear how this happens.
| | |
Fixes #1515
| | |
Fixes #1507
| | |
Zash> Btw, this conditional and loop, shouldn't it be covered by the timing measurement?
Zash> Isn't that where all the util.statistics work is done?
MattJ> Yeah, it should
Zash> ("the", but there's two ... which one‽)
MattJ> Yeah... not sure :)
MattJ> Processing I guess
| | |
Gone with s2sout.lib in 756b8821007a
| | |
s2sout.lib was removed in 756b8821007a along with srv_hosts and
srv_choice
| | |
Lets an external upload service know this so it can do expiry itself.
Could possibly have been calculated based on the token expiry or
issuance time, explicit > implicit.
| | |
In case an external upload service wants to have the original creation
time, or calculate the token expiry itself.
| | |
util.error.coerce() doesn't work well with iolib
| | |
It's annoying that Lua interpolates the filename into the error message.
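For context, the behaviour in question:

    local f, err = io.open("/no/such/file");
    -- err is "/no/such/file: No such file or directory"; the filename
    -- is baked into the string rather than available separately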
| | |
Use case: enabling a module that provides a virtual occupant object for bots.
Previously, if there was no occupant, either some other part of MUC would
reject the message or `occupant.nick` would cause an error.
| | |
Not sure why these were here to begin with, since it does use the 'self'
argument and has done so ever since they were added.
| | |
Maybe the original idea was that you would measure storage separately?
| | |
Background: Found a few files in my store that did not match the size
recorded in the slot, so I needed a way to check which those were.
As it was a bit too much to type into the shell, I added it here instead.
| | |
This just gave an unhelpful 500 error.
It would be nice to have some wrapper code that could untangle the
filename embedded in the io library's error messages.
| | |
Should help inform on whether the cache size should be increased.
| | |
This is neat, O(1) reporting, why don't we do this everywhere?
Gives you an idea of how much stuff is in the caches, which may help
inform decisions on whether the size is appropriate.
| | |
Fixes luarocks defaulting to installing the rock for its own runtime
version of Lua.
This only works with luarocks 3.x; it does nothing on the 2.x currently
available from Debian.
| | |
Had a name, using attr() broke it.
| | |
This saves awkward fiddlery with varargs and also echoes the
signature of pcall/xpcall.
| | |
Usage: promise.join(p1, p2, function (result1, result2)
[...]
end)
| | |
mod_offline also already advertises this feature, so it's added twice.
| | |
Since there is no way to distinguish an empty such array from a
zero-length array. Dropping it seems like the least annoying thing to
do.
| | |
Should the xml name/ns go on the array or the items schema? The latter,
apparently.
| | |
Turns falsy values into nil instead of nothing, which ensures this
function always has exactly one return value; otherwise table.insert()
complains about being called with only the table argument. That could
still happen on some unexpected input, but that's actually a good thing.
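The underlying Lua behaviour, for illustration:

    local function f(x) if x then return x end end -- may return nothing
    local t = {};
    table.insert(t, f(false)); -- error: wrong number of arguments to 'insert'
    -- returning `x or nil` instead guarantees exactly one return value,
    -- so table.insert() always receives two arguments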
| | |
Easier to see which timers are happening soon vs further in the future
if they are in some sensible order.
| | |
It was confusing that the connection would just close without much
explanation.
Wanted this while investigating https://github.com/conversejs/converse.js/issues/2438
| | |
So the problem is that xmlns is not inherited when building a stanza,
and then :get_child(n, ns) with an explicit namespace does not find
such child tags.
E.g.
    local t = st.stanza("foo", { xmlns = "urn:example:bar" })
        :text_tag("hello", "world");
    assert(t:get_child("hello", "urn:example:bar"), "This fails");
Meanwhile, during parsing (util.xmppstream or util.xml) child tags do
get the parent's xmlns when not overriding it.
Thus, in the above example, if the stanza is passed through
`t = util.xml.parse(tostring(t))` then the assert succeeds.
This change makes it so that the namespace argument to :get_child is
left out when it is the same as the current/parent namespace, which
behaves the same for both built and parsed stanzas.
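A sketch of the resulting calling pattern (illustrative only, not the
exact code changed here):

    -- only pass the namespace when it differs from the parent's
    local ns = "urn:example:bar";
    local child = t:get_child("hello", ns ~= t.attr.xmlns and ns or nil);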
| | |
Since this was the last severely duplicated code left.
|