| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
|
| |
ff4e34c448a4 broke the way net.http.server streams downloads from disk
because it made writes from the ondrain callback no longer reset the
want-write flag, causing the download to halt.
Writes from the predrain handler still must not trigger anything but
additions to the buffer, since it is about to do all the socket writing
already.
|
|
|
|
|
|
| |
Opportunistic writes sure do complicate things. This is especially
intended to avoid opportunistic writes from within the onpredrain
callback.
|
|
|
|
|
| |
E.g. "connection refused" over one IP version instead of NoError for the
other IP version.
|
|
|
|
|
|
|
|
| |
Previously it would only say "unable to resolve server" for all DNS
problems. While "NoError in A lookup" might not make much sense to
users, it should help in debugging more than the previous generic error.
Friendlier errors will be future work.
|
|
|
|
|
|
|
| |
Should call timers less frequently when many sockets are waiting for
processing. May help under heavy load.
Requested by Ge0rG
|
|
|
|
|
|
|
|
|
|
| |
This is not a pretty way to signal this... but it is the current API.
interface:inittls() is a new code path which did not go past the point
in interface:starttls() where it set starttls to false, leading mod_tls
to offer starttls on direct TLS connections.
Thanks Martin for discovering this.
|
|
|
|
|
|
| |
The intent is to ensure 'ondisconnect' only gets called once, while
giving buffered outgoing data a last chance to be delivered via the
:close() path in case the connection was only shutdown in one direction.
|
|
|
|
|
|
|
|
| |
Before 22825cb5dcd8, connection attempts that failed (e.g. connection
refused) would be immediately destroyed. After that commit, a failed
attempt would instead schedule another write cycle and then report
'ondisconnect' again when it failed.
Thanks Martin for reporting
|
|\ |
|
| |
| |
| |
| | |
Should ensure shutdown even if sockets somehow take a very long time to get closed.
|
| |
| |
| |
| |
| | |
This should ensure that sockets get closed even if they are added after
the quit signal. Otherwise they may keep the server alive.
|
| |
| |
| |
| | |
Seems to have happened in 6427e2642976, probably because of Meld
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead try to write any remaining buffered data. If the write attempt
also fails with "closed" then there's nothing we can do and the socket
is gone.
This reverts what appears to be a mistakenly included part of c8aa66595072
Thanks jonas’ for noticing
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Previously it was unclear whether "client port" was the port that the
client connected to, or from. I hereby declare that the client port is
the source port and the server port is the destination port.
Incoming and outgoing connections can be distinguished by looking at
the server reference, which only incoming connections have.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
To be removed in the future, but not right now. Give the log warning a
chance to prod anyone who might have network_backend="select" in their
config first.
There are also things built on Verse that use server_select.lua, which
will need to be updated somehow.
|
| |
| |
| |
| |
| |
| |
| | |
Previously it would have fallen back to server_select if util.poll was
for some reason not available, which should never be the case these
days. And even if it were, it is best to surface that by throwing loud
errors so users notice. Then they can work around it by using select
until we delete that one.
|
| |
| |
| |
| |
| |
| |
| |
| | |
Fixes the fallback when libevent is selected but unavailable: it would
fall back to select instead of epoll, even if epoll is available.
This way, we only have to update one place when choosing a new
default.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
In a case like this the timer would not be readded:
addtimer(1, function (t, id)
	stop(id)
	return 1
end);
|
|\| |
|
| |
| |
| |
| |
| |
| |
| | |
Likely affected rescheduling, but there have been no reports of this.
After readding a timer, it would have been issued a new id, while
rescheduling would use the previous id, thus not working.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Previously, if surrounding code did not configure the default TLS
context used in net.http, it would not validate certificates at all.
This is not a security issue in Prosody itself, because Prosody updates
the context with `verify = "peer"` as well as paths to CA certificates
in util.startup.init_http_client.
Nevertheless... let's not leave this pitfall out there in the open.
|
| |
| |
| |
| |
| |
| | |
Only relevant because a "dirty" connection (with incoming data in
LuaSocket's buffer) does not count as "readable" according to epoll, so
special care needs to be taken to keep on processing it.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Allows sneaking things into the write buffer just before it is sent to
the network stack, for example ack requests, compression flushes or
other data that makes sense to send right after stanzas.
This ensures any additional trailing data is included in the same
write, and possibly the same TCP packet. Other methods such as timers
or nextTick might not have the same effect, since that depends on
scheduling.
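A rough sketch of the intended use, assuming the hook is exposed as an
'onpredrain' listener callback (as referenced elsewhere in this log);
the session bookkeeping here is hypothetical:
	local sessions = {}; -- hypothetical map of connection -> session, maintained elsewhere
	local listener = {};
	function listener.onpredrain(conn)
		-- runs just before the buffered data is handed to the network stack, so
		-- anything written here goes out in the same write (and likely the same
		-- TCP packet) as the stanzas already in the buffer
		local session = sessions[conn];
		if session and session.awaiting_ack then
			conn:write("<r xmlns='urn:xmpp:sm:3'/>"); -- e.g. an XEP-0198 ack request
		end
	end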
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Should prevent further opportunistic write attempts once the kernel
buffers are full and stop accepting writes.
When combined with `keep_buffers = false`, it should stop the code from
repeatedly recreating the buffer table and concatenating it back into a
string when there is a lot to write.
|
| |
| |
| |
| |
| | |
Also special thanks to timeless, for wordlessly reminding me to check
for typos.
|
| |
| |
| |
| |
| |
| | |
If this seems backwards, that's because it is, but the API isn't
really designed to easily pass details from each resolution step on to
the next.
|
| |
| |
| |
| |
| | |
Previous commit adds a workaround, so this doesn't mutate global state
anymore, only per-connection 'extra' state as originally intended.
|
| |
| |
| |
| |
| |
| |
| | |
Caused "attempt to index a string value (local 'data')", but only if
keep_buffers is set to false, which is not the default.
Introduced in 917eca7be82b
|
| |
| |
| |
| |
| | |
Read and write timeouts should usually match whether we want to read or
write.
|
| |
| |
| |
| | |
Should avoid rare but needless timer interactions
|
| |
| |
| |
| |
| | |
Instead of removing and readding the timer, keep it and adjust it.
Should reduce garbage production a bit.
|
| |
| |
| |
| |
| |
| |
| | |
The only real difference between the read and write timeouts is that
the former has a callback that allows the higher levels to keep the
connection alive, while hitting the latter is immediately fatal. We
want the latter behavior for TLS negotiation.
|
| |
| |
| |
| |
| |
| | |
Saves a function call. I forget if I measured this kind of thing, but
IIRC infix concatenation is faster than a function call up to some
number of items; let's stop at 2 here.
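For illustration, roughly the shape of the change (simplified, not the
actual buffer handling):
	local buffer = { "<message>", "</message>" };
	local data;
	if #buffer == 2 then
		data = buffer[1] .. buffer[2]; -- plain concatenation, no function call
	else
		data = table.concat(buffer); -- general case
	end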
|
| |
| |
| |
| |
| |
| |
| | |
writebuffer is now string | { string }
Saves the allocation of a buffer table until the second write, which
could be rare, especially with opportunistic writes.
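A simplified sketch of the scheme (not the actual net.server_epoll
code; names approximate):
	local interface = {}; -- stand-in for a connection object
	function interface:write(data)
		local buffer = self.writebuffer;
		if buffer == nil then
			-- first write since the buffer drained: keep the bare string,
			-- no table allocated yet
			self.writebuffer = data;
		elseif type(buffer) == "string" then
			-- second write: only now allocate a buffer table
			self.writebuffer = { buffer, data };
		else
			table.insert(buffer, data);
		end
	end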
|
| |
| |
| |
| |
| | |
Reusing an already existing buffer table would reduce garbage, but
keeping it while idle is a waste.
|
| |
| |
| |
| |
| | |
So that if a write ends up writing directly to the socket, it gets the
actual return value
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
A timeout value less than 0.001 gets turned into zero on the C side,
so epoll_wait() returns instantly and essentially busy-loops for up to
1 ms, e.g. when a timer event ends up scheduled less than 1 ms into
the future.
Unsure if this has much effect in practice, but it may waste a small
amount of CPU time. How much would depend on how often this ends up
happening and how fast the CPU gets through main loop iterations.
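For illustration, clamping the computed timeout avoids the truncation;
a sketch with hypothetical names:
	local min_wait = 0.001; -- smallest value epoll_wait() can represent (1 ms)
	local function poll_timeout(seconds_until_next_timer)
		if seconds_until_next_timer < min_wait then
			return min_wait; -- avoid passing a value that truncates to 0 and busy-loops
		end
		return seconds_until_next_timer;
	end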
|
| |
| |
| |
| | |
Nagle increases latency and is the bane of all networking!
|
| |
| |
| |
| |
| |
| |
| | |
Activated by setting config.tcp_keepalive to a number, in seconds.
Defaults to 2h.
Depends on LuaSocket support for this option.
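For example, assuming the backend's settings are exposed through
Prosody's network_settings option (the option name here is an
assumption):
	-- prosody.cfg.lua (sketch)
	network_backend = "epoll"
	network_settings = {
		tcp_keepalive = 7200; -- seconds, roughly the 2h default mentioned above
	}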
|
| |
| |
| |
| |
| |
| | |
In case one wishes to enable this for all connections, not just c2s
(not Direct TLS ones, because LuaSec) and s2s. Unclear what use these
are, since they kick in after 2 hours of idle time.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Good to know if it fails, especially since the return value doesn't
seem to be checked anywhere.
Since LuaSec-wrapped sockets don't expose the setoption method, this
will likely show up when mod_c2s tries to enable keepalives on direct
TLS connections.
|
| |
| |
| |
| |
| |
| | |
Skips a roundtrip through the main loop in case client-first data is
already available; if not, :onreadable() will set the appropriate
timeout.
|
| | |
|
| |
| |
| |
| |
| | |
There's the theory that the socket isn't the same before/after wrap(),
but since epoll operates on FD numbers this shouldn't matter.
|
| |
| |
| |
| | |
The :init() method sets a different timeout than the TLS related methods.
|
| |
| |
| |
| |
| |
| | |
Since TLS is a client-first protocol there is a chance that the
ClientHello message is available already. TLS Fast Open and/or the
TCP_DEFER_ACCEPT socket option would increase that chance.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
So there's :starttls(), :inittls() and :tlshandshake().
:starttls() prepares for a plain -> TLS upgrade and ensures that the
(unencrypted) write buffer is drained before proceeding.
:inittls() wraps the connection and handles things like SNI, DANE etc.
:tlshandshake() steps the TLS negotiation forward until it completes.
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
net.http.files serving a big enough file on a fast enough connection
with opportunistic_writes enabled could trigger a stack overflow through
repeatedly serving more data that immediately gets sent, draining the
buffer and triggering more data to be sent. This also blocked the server
on a single task until completion or an error.
This change prevents nested opportunistic writes, which should prevent
the stack overflow, at the cost of reduced download speed, but this is
unlikely to be noticeable outside of Gbit networks. Speed at the cost of
blocking other processing is not worth it, especially with the risk of
stack overflow.
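In outline this amounts to a re-entrancy guard; a minimal sketch under
that assumption (names hypothetical, not the actual net.server_epoll
code):
	local conn = { writebuffer = {} }; -- stand-in for a connection object
	function conn:flush_opportunistically()
		if self._flushing then
			return; -- nested call from a callback: leave the data in the buffer
		end
		self._flushing = true;
		-- the real code writes the buffer out to the socket here; callbacks
		-- fired as a result may append more data, but can no longer recurse
		-- into another flush
		local data = table.concat(self.writebuffer);
		self.writebuffer = {};
		self._flushing = nil;
		return data;
	end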
|