View Issue Details
|ID||Project||Category||View Status||Date Submitted||Last Update|
|0005822||GNUnet||cadet service||public||2019-08-01 21:43||2020-05-06 18:17|
|Target Version||0.13.0||Fixed in Version|
|Summary||0005822: No tunnel destroy after channel destroy was not received|
|Description||We have two nodes A and B. Node B opened a CADET port, and node A created a channel to that port. The channel was working fine. Now node A left the channel and no longer has a tunnel towards node B, while node B still has a tunnel and a channel to node A. This can happen for several reasons: node A sent a channel destroy that node B never received, or node A was unable to send a channel destroy because it was shut down in an uncontrolled manner (hard reboot of the system). When node A then tries to open the channel to node B again, this will not work: for some pairs of nodes the method alice_or_betty in gnunet-service-cadet_tunnels.c may stop node A from starting the key exchange, and node B has no reason to start the key exchange because it still has a tunnel to node A.|
|Steps To Reproduce||For two nodes A and B, check when the method alice_or_betty returns GNUNET_NO. The node whose PeerIdentity makes alice_or_betty return GNUNET_NO must play the role of node B and open the CADET port. Node A has to open a channel to node B. Once the channel is working, do a hard reset of the system node A is running on. After node A is running again, try to create a channel to node B.|
|Additional Information||My first thought to solve this problem was to introduce a message that node A sends to node B, asking it to destroy the existing tunnel and to start a key exchange. Then I saw that CADET connections send keepalive messages, but the receiving node does not process these keepalives. There could be a recurring task checking for keepalives; if none arrive within some timeout, we destroy the tunnel and every channel on that tunnel.|
|Tags||No tags attached.|
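The role of alice_or_betty in the description is to pick, deterministically, which of the two peers initiates the key exchange. A minimal sketch of such a tie-breaker follows; the types and names here are simplified stand-ins (the real function in gnunet-service-cadet_tunnels.c works on struct GNUNET_PeerIdentity and returns GNUNET_YES/GNUNET_NO), not the actual GNUnet code:

```c
#include <string.h>

#define ID_LEN 32  /* size of a (simplified) peer identity */

/* Hypothetical stand-in for GNUnet's struct GNUNET_PeerIdentity. */
struct peer_id
{
  unsigned char key[ID_LEN];
};

/* Deterministic tie-breaker in the spirit of alice_or_betty():
 * for every pair of distinct peers, exactly one side is "Alice"
 * and may initiate the key exchange; the other ("Betty") waits
 * for its counterpart to start the KX.  Returns 1 if `me` is
 * Alice with respect to `other`, 0 otherwise. */
static int
is_alice (const struct peer_id *me,
          const struct peer_id *other)
{
  return memcmp (me->key, other->key, ID_LEN) < 0;
}
```

The bug arises precisely because the Betty side never initiates: if Alice forgot the tunnel but Betty still has one, neither peer starts a fresh key exchange.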
Timeouts are OK for tunnels (low-ish timeout, maybe depending on path length!) and maybe for a channel's KX, but they must not be applied to sessions. Thus a timeout on the communication link may tear down a tunnel, and if all tunnels stay down for a while we may decide a channel's KX is invalid; but as long as there are any sessions, the channel itself must not be destroyed (sessions must not fail, and if a session exists, a channel must _exist_ -- that is an invariant we have). That said, we could/should (probably) decide to 'reset' the KX to the uninitialized/just-starting-up state after some timeout and force the setup to begin fresh.
With regards to KEEPALIVE, a good option might be to include the current CADET channel state in the keepalive. If one peer thinks the channel was destroyed while the other thinks it is still up, the KEEPALIVE lets us detect the mismatch and then figure out how to resolve the situation. Here I'd think that if either side has active sessions, the correct process is to reinitialize the channel, and if neither side has active sessions, to destroy it completely. So maybe the KEEPALIVE should include the number of sessions in the message as well.
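The two rules proposed above -- timeout handling that may only reset the KX while sessions survive, and keepalive-based mismatch resolution -- could be sketched as follows. All names here are hypothetical; this is a sketch of the proposal, not GNUnet's actual API:

```c
#include <stdint.h>

enum timeout_action
{
  TA_KEEP,            /* nothing to do */
  TA_RESET_KX,        /* all tunnels gone: restart KX from fresh */
  TA_DESTROY_CHANNEL  /* no sessions left: channel may go away */
};

/* Invariant from above: as long as any session exists, the channel
 * must exist; a timeout may at most reset the KX, never destroy
 * the channel. */
static enum timeout_action
on_timeout (unsigned int active_tunnels,
            unsigned int active_sessions)
{
  if (active_sessions > 0)
    return (0 == active_tunnels) ? TA_RESET_KX : TA_KEEP;
  return TA_DESTROY_CHANNEL;
}

/* Hypothetical extended KEEPALIVE payload: the sender's view of
 * the channel plus its number of active sessions. */
struct keepalive_info
{
  uint8_t channel_up;     /* 1 = sender thinks the channel is up */
  uint32_t num_sessions;  /* sender's count of active sessions */
};

enum resolution
{
  R_IN_SYNC,  /* both peers agree on the channel state */
  R_REINIT,   /* state mismatch, sessions exist: reinitialize */
  R_DESTROY   /* state mismatch, no sessions: destroy completely */
};

/* Resolution rule from the note: on a state mismatch, reinitialize
 * the channel if either side still has active sessions, otherwise
 * destroy it completely. */
static enum resolution
resolve_keepalive (const struct keepalive_info *mine,
                   const struct keepalive_info *theirs)
{
  if (mine->channel_up == theirs->channel_up)
    return R_IN_SYNC;
  if ((mine->num_sessions > 0) || (theirs->num_sessions > 0))
    return R_REINIT;
  return R_DESTROY;
}
```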
For further analysis: with regards to the keepalive, I saw that these messages are processed only indirectly, by gnunet-service-cadet_core.c:route_message(). Every time a message is routed, the respective route is updated (r->last_use).
The function gnunet-service-cadet_core.c:timeout_cb() checks this value and, if the route has expired, sends a GNUNET_MESSAGE_TYPE_CADET_CONNECTION_BROKEN message to destroy the respective connection. The timeout task is set when a connection is created.
The timeout of connections is only loosely connected to the keepalive: a connection times out simply because no keepalive was sent over it, and beyond refreshing the route, the keepalive is not processed at all. So connections are not the problem in the context of this bug; the problem is the missing handling of keepalive messages that stop arriving.
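The mechanics just described (route_message() refreshing r->last_use, timeout_cb() breaking the connection once it expires) follow a common watchdog pattern. A self-contained model of it, with simplified types and names rather than the actual GNUnet code, also illustrates the recurring-timeout idea from the Additional Information section:

```c
#include <time.h>

/* Simplified model of the route bookkeeping described above; the
 * real fields and functions live in gnunet-service-cadet_core.c. */
struct route
{
  time_t last_use;  /* refreshed whenever a message is routed */
};

/* Any routed message (keepalives included) keeps the route alive;
 * this models the r->last_use update in route_message(). */
static void
route_touch (struct route *r,
             time_t now)
{
  r->last_use = now;
}

/* Returns 1 if the route expired; the real timeout_cb() would then
 * send GNUNET_MESSAGE_TYPE_CADET_CONNECTION_BROKEN to tear the
 * connection down. */
static int
route_expired (const struct route *r,
               time_t now,
               time_t timeout)
{
  return (now - r->last_use) > timeout;
}
```

The report's point is that nothing analogous exists one layer up: keepalives refresh the route, but their absence never triggers tunnel or channel teardown at the receiving peer.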
|2019-08-01 21:43||t3sserakt||New Issue|
|2019-08-01 21:43||t3sserakt||Status||new => assigned|
|2019-08-01 21:43||t3sserakt||Assigned To||=> t3sserakt|
|2019-08-01 22:23||Christian Grothoff||Note Added: 0014766|
|2019-08-04 16:56||xrs||Note Added: 0014770|
|2019-08-05 09:23||t3sserakt||Note Added: 0014773|
|2019-08-28 08:05||xrs||Relationship added||related to 0005833|
|2019-10-27 19:26||schanzen||Target Version||0.11.7 => 0.11.8|
|2019-11-04 23:05||schanzen||Target Version||0.11.8 => 0.11.9|
|2020-05-06 18:17||schanzen||Target Version||=> 0.13.0|
||Issue cloned: 0006298|
||Issue cloned: 0006337|