| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
If this seems backwards, that's because it is, but the API isn't really
designed to easily pass along details from each resolution step to the
next.
|
|
|
|
|
| |
The previous commit adds a workaround, so this no longer mutates global
state, only per-connection 'extra' state, as originally intended.
|
|
|
|
|
|
|
| |
Caused "attempt to index a string value (local 'data')", but only if
keep_buffers is set to false, which is not the default.
Introduced in 917eca7be82b
|
|
|
|
|
| |
Read and write timeouts should usually match whether we want to read or
write.
|
|
|
|
| |
Should avoid rare but needless timer interactions
|
|
|
|
|
| |
Instead of removing and re-adding the timer, keep it and adjust it in
place. Should reduce garbage production a bit.
|
|
|
|
|
|
|
| |
The only real difference between the read and write timeouts is that the
former has a callback that allows the higher levels to keep the
connection alive, while hitting the latter is immediately fatal. We want
the latter behavior for TLS negotiation.
|
|
|
|
|
|
| |
Saves a function call. I forget if I measured this kind of thing, but
IIRC infix concatenation is faster than a function call up to some
number of items; let's stop at 2 here.
|
|
|
|
|
|
|
| |
writebuffer is now string | { string }
Saves the allocation of a buffer table until the second write, which
could be rare, especially with opportunistic writes.
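A minimal sketch of what such a lazily-allocated buffer could look like
(field and method names here are illustrative, not necessarily Prosody's
actual code):

    local interface = {}  -- stands in for the per-connection object

    -- writebuffer is nil, a plain string, or a table of strings; the
    -- table is only allocated on the second buffered write.
    function interface:write(data)
        local buffer = self.writebuffer;
        if buffer == nil then
            -- first write since the buffer was drained: keep the bare string
            self.writebuffer = data;
        elseif type(buffer) == "string" then
            -- second write: only now pay for a table allocation
            self.writebuffer = { buffer, data };
        else
            -- already a table: append
            buffer[#buffer + 1] = data;
        end
        -- (scheduling of the actual socket write is omitted here)
    end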
|
|
|
|
|
| |
Reusing an already existing buffer table would reduce garbage, but
keeping it while idle is a waste.
|
|
|
|
|
| |
So that if a write ends up writing directly to the socket, it gets the
actual return value
|
|
|
|
|
|
|
|
|
|
| |
A timeout value less than 0.001 gets turned into zero on the C side, so
epoll_wait() returns instantly and essentially busy-loops for up to 1ms,
e.g. when a timer event ends up scheduled somewhere in the (0, 0.001)s
range into the future.
Unsure if this has much effect in practice, but it may waste a small
amount of CPU time. How much would depend on how often this ends up
happening and how fast the CPU gets through main loop iterations.
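A sketch of the kind of clamp that avoids this; the 0.001s floor matches
epoll_wait()'s 1ms granularity, and the names are illustrative:

    -- Compute how long to sleep in epoll_wait(); never ask for less than
    -- 1ms, since anything smaller is truncated to 0 on the C side and
    -- turns the wait into a busy-loop until the timer is actually due.
    local function poll_timeout(next_deadline, now)
        local timeout = next_deadline - now;
        if timeout < 0.001 then
            timeout = 0.001;
        end
        return timeout;
    end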
|
|
|
|
| |
Nagle increases latency and is the bane of all networking!
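With LuaSocket this amounts to setting TCP_NODELAY on the socket, roughly
('conn' here is any connected TCP socket object):

    -- Disable Nagle's algorithm so small writes (e.g. stanzas) go out
    -- immediately instead of being coalesced into larger packets.
    conn:setoption("tcp-nodelay", true);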
|
|
|
|
|
|
|
| |
Activated by setting config.tcp_keepalive to a number, in seconds.
Defaults to 2h.
Depends on LuaSocket support for this option.
|
|
|
|
|
|
| |
In case one wishes to enable this for all connections, not just c2s and
s2s (but not Direct TLS ones, because of LuaSec). Unclear how useful
these are, since they only kick in after 2 hours of idle time.
|
| |
|
|
|
|
|
|
|
|
|
| |
Good to know if it fails, especially since the return value doesn't seem
to be checked anywhere.
Since LuaSec-wrapped sockets don't expose the setoption method, this
will likely show up when mod_c2s tries to enable keepalives on Direct
TLS connections.
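A hedged sketch of such a check (the log call follows Prosody's
util.logger-style format strings; the exact wording is made up):

    -- Enable TCP keepalives and complain if it doesn't work, e.g. because
    -- the socket is LuaSec-wrapped and has no setoption method at all.
    local ok, err;
    if conn.setoption then
        ok, err = conn:setoption("keepalive", true);
    else
        err = "setoption not available";
    end
    if not ok then
        log("warn", "Could not enable TCP keepalive: %s", err or "unknown error");
    end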
|
|
|
|
|
|
| |
Skips a roundtrip through the main loop in case client-first data is
already available; if not, :onreadable() will set the appropriate
timeout.
|
| |
|
|
|
|
|
| |
There's a theory that the socket isn't the same before/after wrap(),
but since epoll operates on FD numbers this shouldn't matter.
|
|
|
|
| |
The :init() method sets a different timeout than the TLS-related methods.
|
|
|
|
|
|
| |
Since TLS is a client-first protocol there is a chance that the
ClientHello message is available already. TCP Fast Open and/or the
TCP_DEFER_ACCEPT socket option would increase that chance.
|
|
|
|
|
|
|
|
|
|
|
| |
So there's :starttls(), :inittls() and :tlshandshake().
:starttls() prepares for the plain -> TLS upgrade and ensures that the
(unencrypted) write buffer is drained before proceeding.
:inittls() wraps the connection and does things like SNI, DANE etc.
:tlshandshake() steps the TLS negotiation forward until it completes.
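A rough sketch of how the three steps might fit together, built on
LuaSec's wrap()/dohandshake(); the method bodies, field names and the
completion hook are assumptions, not Prosody's actual implementation:

    local ssl = require "ssl";  -- LuaSec

    local interface = {}  -- stands in for the per-connection object

    function interface:starttls(tls_ctx)
        self._tls_ctx = tls_ctx;
        -- Plain -> TLS upgrade: wait until the unencrypted write buffer
        -- has been drained before wrapping the socket.
        if self.writebuffer and #self.writebuffer > 0 then
            self._starttls_pending = true;  -- picked up once the buffer empties
            return;
        end
        self:inittls();
    end

    function interface:inittls()
        -- Wrap the connection; SNI, DANE and similar context tweaks go here.
        self.conn = ssl.wrap(self.conn, self._tls_ctx);
        self:tlshandshake();
    end

    function interface:tlshandshake()
        local ok, err = self.conn:dohandshake();
        if ok then
            self:ontlsdone();  -- hypothetical completion hook
        elseif err == "wantread" or err == "wantwrite" then
            -- not done yet: wait for the socket and re-enter this method
        else
            self:destroy();    -- handshake failed
        end
    end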
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
net.http.files serving a big enough file on a fast enough connection
with opportunistic_writes enabled could trigger a stack overflow through
repeatedly serving more data that immediately gets sent, draining the
buffer and triggering more data to be sent. This also blocked the server
on a single task until completion or an error.
This change prevents nested opportunistic writes, which should prevent
the stack overflow, at the cost of reduced download speed, but this is
unlikely to be noticeable outside of Gbit networks. Speed at the cost of
blocking other processing is not worth it, especially with the risk of
stack overflow.
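One way to express that guard, as a sketch (the flag and method names are
made up):

    -- Prevent re-entering the opportunistic write path from within a
    -- write completion, which is what allowed the stack to grow without
    -- bound when the peer kept up with the sender.
    function interface:opportunistic_write()
        if self._writing then
            return;  -- already inside a write on this connection
        end
        self._writing = true;
        local ok, err = self:onwritable();  -- do the actual socket write(s)
        self._writing = nil;
        return ok, err;
    end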
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When opportunistic writes are enabled this reduces the number of
syscalls and TCP packets sent on the wire.
Experiments with TCP Fast Open made this even more obvious.
That table trick probably wasn't as efficient. Lua generates bytecode
for a table with zero array slots and space for two entries in the hash
part, plus code to set [2] and [4]. I didn't verify but I suspect it
would have had to resize the table when setting [1] and [3], although
probably only once. Concatenating the strings directly in Lua is easier
to read and involves no extra table or function call.
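Illustratively (the variable names are placeholders, not the actual
code), the change amounts to something like:

    -- Before: two writes, i.e. two syscalls (and often two packets) when
    -- opportunistic writes push each one out immediately.
    conn:write(response_head);
    conn:write(response_body);

    -- After: one concatenated write, no intermediate table, no table.concat.
    conn:write(response_head .. response_body);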
|
|
|
|
|
|
|
|
|
|
|
|
| |
This may speed up client-first protocols (e.g. XMPP, HTTP and TLS) when
the first client data has already arrived by the time we accept() it.
If LuaSocket supported TCP_DEFER_ACCEPT we could use that to further
increase the chance that there's already data to handle.
In case no data has arrived, no harm should be done; :onreadable() would
simply set the read timeout and we'll get back to it once there is
something to handle.
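A sketch of the accept path as described (the wrapping helper and method
names are assumptions):

    -- Accept a pending connection and immediately poke the read path:
    -- for client-first protocols the first bytes may already be waiting,
    -- and if nothing has arrived yet :onreadable() just arms the read
    -- timeout.
    local function onacceptable(server)
        local conn, err = server.conn:accept();
        if not conn then return nil, err; end
        local client = wrapsocket(conn, server);  -- hypothetical helper
        client:onconnect();
        client:onreadable();
        return client;
    end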
|
|
|
|
|
|
|
|
|
|
|
|
| |
The :init method is more suited for new outgoing connections, which is
why it uses the connect_timeout setting.
Depending on whether a newly accepted connection is to a Direct TLS port
or not, it should be handled differently, and already was. The :starttls
method sets up timeouts on its own, so the one set in :init was not needed.
Newly accepted plain TCP connections don't need a write timeout set; a
read timeout is enough.
|
|
|
|
|
|
| |
This should make sure that if there's data left to be written when
closing a connection, there's also a timeout so that it doesn't wait
forever.
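As a sketch (field and method names assumed), the close path then looks
something like:

    -- Closing with data still buffered: keep the connection around long
    -- enough to flush, but arm a write timeout so it cannot linger forever.
    function interface:close()
        if self.writebuffer and #self.writebuffer > 0 then
            self._closing = true;    -- destroy once the buffer drains
            self:setwritetimeout();  -- ...or once this timeout fires
        else
            self:destroy();
        end
    end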
|
|
|
|
|
| |
Supported by the other net.server implementations already, but not used
anywhere in Prosody.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If the underlying TCP connection times out before the write timeout
kicks in, we end up here with err="timeout", which the following code
treats as a minor issue.
Then, because epoll apparently returns the EPOLLOUT (writable) event
too, we go on and try to write to the socket (commonly stream headers).
This fails because the socket is closed, and that failure becomes the
error returned up the stack to the rest of Prosody.
This also trips the 'onconnect' signal, which affects various things,
such as the net.connect state machine. Probably undesirable effects.
With this change, we instead return "connection timeout", like
server_event does, and destroy the connection handle properly. Nothing
else happens after that, because the connection has been destroyed.
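A sketch of the resulting error handling (the names and surrounding
dispatch are assumptions):

    -- In the socket error path: a TCP-level timeout is fatal, so report
    -- it the same way server_event does and tear the connection down
    -- instead of falling through to the (doomed) write attempt.
    if err == "timeout" then
        self:on("disconnect", "connection timeout");
        self:destroy();
        return;
    end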
|
|
|
|
|
| |
Makes it easier to reuse, e.g. for SSE or websockets or other custom
responses.
|
|
|
|
|
| |
Not sure why these were here to begin with, since it does use the 'self'
argument and has done so since they were added.
|
|
|
|
|
|
|
|
|
|
| |
Fixes a mistake introduced in 5a71f14ab77c that made it so this ready()
never got called and thus it would be stuck waiting for it.
Looks like the kind of thing that could have been introduced by a merge
or rebase.
Thanks MattJ
|
|
|
|
|
| |
Turns out 'extra' is, at least for mod_s2s, the same table for *all*
connections.
|
|
|
|
| |
Turns out it doesn't work with zero.
|
|
|
|
|
|
| |
Disabled DANE by default, since it needs extra steps to be useful. The
built-in DNS stub resolver does not support DNSSEC, so having DANE
enabled by default only leads to an extra wasted DNS request.
|
| |
|
|
|
|
|
|
|
|
| |
It already sets request.secure, which depends on the connection, just
like the IP, so it makes sense to do both in the same place.
Dealing with proxies can be left to mod_http for now, but maybe it could
move into some util some day?
|
|
|
|
|
|
|
| |
Fixes that it would otherwise wait for the request to be handled right
after receiving the request head, when at that point it is only meant to
select a target for where to store the data; the wait for the request to
be handled should instead happen after the full request has been received.
|
|
|
|
| |
Storing the async thread on the connection was weird.
|
| |
|
|
|
|
|
| |
Lazy initialization only worked for async queries, but prosodyctl check
dns uses sync queries.
|
|\ |
|
| |
| |
| |
| | |
Thanks Ge0rG for testing
|
| |
| |
| |
| | |
Thanks tmolitor
|
|\| |
|
| |
| |
| |
| |
| |
| |
| |
| | |
This makes sure that a timer that returns 0 (or less) does not prevent
runtimers() from completing, as well as making sure a timer added with
zero timeout from within a timer does not run until the next tick.
Thanks tmolitor
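A minimal sketch of the idea (the queue API and names are assumptions,
not util.timer's actual code): snapshot the clock once, only run timers
that were due at that snapshot, and push zero or negative reschedules
past it:

    local function runtimers(queue, monotonic)
        local start = monotonic();
        while true do
            local deadline, callback = queue:peek();
            -- Stop at the first timer that was not yet due when this
            -- pass started; this guarantees the loop terminates.
            if deadline == nil or deadline > start then break; end
            queue:pop();
            local again = callback(start);
            if again then
                -- monotonic() has advanced past 'start' by now, so even
                -- a 0 (or negative) return lands after the snapshot and
                -- waits for the next tick instead of re-running here.
                queue:insert(monotonic() + math.max(again, 0), callback);
            end
        end
    end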
|
| |
| |
| |
| |
| | |
Shouldn't need a DNS resolver until later anyway. Might even be
sensible to only initialize it if a query is actually attempted.
|
| |
| |
| |
| | |
Prepare for lazy-loading it.
|