From: Etienne Dechamps
Date: Wed, 4 Nov 2015 19:07:14 +0000 (+0000)
Subject: Use a splay tree for node UDP addresses in order to avoid collisions.
X-Git-Tag: release-1.1pre12~90
X-Git-Url: https://git.librecmc.org/?a=commitdiff_plain;h=eeebff55c07c09c5bc5e62a7b2a21f68ecd1c802;p=oweals%2Ftinc.git

Use a splay tree for node UDP addresses in order to avoid collisions.

This commit replaces the node UDP address hash table "cache" with a
full-blown splay tree, aligning it with node_tree (name-indexed) and
node_id_tree (ID-indexed).

I'm doing this for two reasons.

The first reason is to make sure we don't suddenly degrade to O(n)
lookup performance when two "hot" nodes end up in the same hash table
bucket (a collision).

The second, and more important, reason has to do with the fact that
the hash table that was being used overwrites elements that collide.
Indeed, it turns out there is one scenario in which the contents of
node_udp_cache have *correctness* implications, not just performance
implications. This has to do with the way handle_incoming_vpn_data()
is implemented.

Assume the following topology:

    A <-> B <-> C

Now let's consider the perspective of tincd running on B, and let's
assume the following is true:

- All nodes are using the 1.1 protocol with node IDs and relaying
  support.
- Nodes A and C have UDP addresses that hash to the same value.
- Node C "wins" in the node_udp_cache (i.e. it overwrites A in the
  cache).
- Node A has a "dynamic" UDP address (i.e. a UDP address that has
  been detected dynamically and cannot be deduced from edge
  addresses).

Then, before this commit, A would be unable to relay packets through
B. This is because handle_incoming_vpn_data() will fall back to
try_harder(), which won't be able to match any edge addresses,
doesn't check the dynamic UDP addresses, and won't be able to match
any keys, because this is a relayed packet which is encrypted with
C's key, not B's. As a result, tinc will fail to match the source of
the packet and will drop it with a "Received UDP packet from unknown
source" message.

I have seen this happen in the wild; it is actually quite likely to
occur when there are more than a handful of nodes, because
node_udp_cache only has 256 buckets, making collisions quite likely.
This problem is quite severe because it can completely prevent all
packet communication between nodes: indeed, if node A tries to
initiate some communication with C, it will use relaying at first,
until C responds and helps A establish direct communication with it
(e.g. hole punching). If relaying is broken, C will not help
establish direct communication, and as a result no packets can make
it through at all.

The bug can be reproduced fairly easily by recreating the topology
above while changing the (hardcoded) node_udp_cache size to 1 to
force a collision. One will quickly observe various issues when
trying to make A talk to C. Setting IndirectData on B will make the
issue even more severe and prevent all communication.

Arguably, another way to fix this problem would be to make
try_harder() compare the packet's source address to each node's
dynamic UDP address. However, I do not like this solution: if two
"hot" nodes are contending for the same hash bucket, try_harder()
would be called very often and packet routing performance would
degrade towards O(N) (where N is the total number of nodes in the
graph). Using a more appropriate data structure fixes the bug without
introducing this performance problem.
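
For illustration, the eviction behavior at the root of the bug can be
sketched in a few lines of standalone C. This is not tinc's actual
hash.c API; the toy_* names are hypothetical and a plain integer key
stands in for sockaddr_t, but the insert path mirrors what a
direct-mapped, overwrite-on-collision table does to the losing node:

    #include <stdint.h>
    #include <stdio.h>

    #define TOY_BUCKETS 0x100 /* same bucket count as node_udp_cache */

    typedef struct {
        uint32_t keys[TOY_BUCKETS];
        const char *values[TOY_BUCKETS];
    } toy_cache_t;

    /* Direct-mapped insert: whatever already occupies the bucket is
       silently evicted; this is the "C overwrites A" situation above. */
    static void toy_insert(toy_cache_t *c, uint32_t key, const char *value) {
        uint32_t b = key % TOY_BUCKETS;
        c->keys[b] = key;
        c->values[b] = value;
    }

    static const char *toy_search(const toy_cache_t *c, uint32_t key) {
        uint32_t b = key % TOY_BUCKETS;
        return c->keys[b] == key ? c->values[b] : NULL;
    }

    int main(void) {
        static toy_cache_t cache; /* zero-initialized */
        toy_insert(&cache, 0x123, "A"); /* A's (hashed) address */
        toy_insert(&cache, 0x223, "C"); /* 0x223 % 0x100 == 0x23 as well */
        printf("A is %s\n", toy_search(&cache, 0x123) ? "found" : "gone");
        return 0;
    }

Running this prints "A is gone": once C lands in A's bucket, B has no
record of A's dynamic UDP address, and the try_harder() fallback
described above is all that is left.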
---

diff --git a/src/node.c b/src/node.c
index fb4b7eb..a571ae0 100644
--- a/src/node.c
+++ b/src/node.c
@@ -34,7 +34,7 @@
 splay_tree_t *node_tree;
 static splay_tree_t *node_id_tree;
-static hash_t *node_udp_cache;
+static splay_tree_t *node_udp_tree;
 static hash_t *node_id_cache;
 
 node_t *myself;
 
@@ -47,16 +47,23 @@ static int node_id_compare(const node_t *a, const node_t *b) {
 	return memcmp(&a->id, &b->id, sizeof(node_id_t));
 }
 
+static int node_udp_compare(const node_t *a, const node_t *b) {
+	int result = sockaddrcmp(&a->address, &b->address);
+	if (result)
+		return result;
+	return (a->name && b->name) ? strcmp(a->name, b->name) : 0;
+}
+
 void init_nodes(void) {
 	node_tree = splay_alloc_tree((splay_compare_t) node_compare, (splay_action_t) free_node);
 	node_id_tree = splay_alloc_tree((splay_compare_t) node_id_compare, NULL);
-	node_udp_cache = hash_alloc(0x100, sizeof(sockaddr_t));
+	node_udp_tree = splay_alloc_tree((splay_compare_t) node_udp_compare, NULL);
 	node_id_cache = hash_alloc(0x100, sizeof(node_id_t));
 }
 
 void exit_nodes(void) {
 	hash_free(node_id_cache);
-	hash_free(node_udp_cache);
+	splay_delete_tree(node_udp_tree);
 	splay_delete_tree(node_id_tree);
 	splay_delete_tree(node_tree);
 }
@@ -116,7 +123,7 @@ void node_add(node_t *n) {
 }
 
 void node_del(node_t *n) {
-	hash_delete(node_udp_cache, &n->address);
+	splay_delete(node_udp_tree, n);
 	hash_delete(node_id_cache, &n->id);
 
 	for splay_each(subnet_t, s, n->subnet_tree)
@@ -150,7 +157,8 @@ node_t *lookup_node_id(const node_id_t *id) {
 }
 
 node_t *lookup_node_udp(const sockaddr_t *sa) {
-	return hash_search(node_udp_cache, sa);
+	node_t tmp = {.address = *sa};
+	return splay_search(node_udp_tree, &tmp);
 }
 
 void update_node_udp(node_t *n, const sockaddr_t *sa) {
@@ -159,7 +167,7 @@ void update_node_udp(node_t *n, const sockaddr_t *sa) {
 		return;
 	}
 
-	hash_delete(node_udp_cache, &n->address);
+	splay_delete(node_udp_tree, n);
 
 	if(sa) {
 		n->address = *sa;
@@ -170,7 +178,7 @@
 				break;
 			}
 		}
-		hash_insert(node_udp_cache, sa, n);
+		splay_insert(node_udp_tree, n);
 		free(n->hostname);
 		n->hostname = sockaddr2hostname(&n->address);
 		logger(DEBUG_PROTOCOL, LOG_DEBUG, "UDP address of %s set to %s", n->name, n->hostname);
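
A note on the new comparator, for readers of the diff above:
node_udp_compare() orders primarily by address (via sockaddrcmp())
and tie-breaks on name, and lookup_node_udp() searches with a
stack-built key whose name is NULL. The following standalone sketch
is hypothetical (toy_* names, an int standing in for sockaddr_t) but
demonstrates the two properties this relies on: distinct nodes
sharing an address remain separate tree entries, while a name-less
search key matches any node with that address:

    #include <stdio.h>
    #include <string.h>

    typedef struct {
        int address;      /* stand-in for sockaddr_t */
        const char *name; /* NULL for a search key built on the stack */
    } toy_node_t;

    static int toy_udp_compare(const toy_node_t *a, const toy_node_t *b) {
        int result = (a->address > b->address) - (a->address < b->address);
        if (result)
            return result;
        /* Tie-break on name so two nodes sharing an address can coexist;
           a NULL name compares equal to anything. */
        return (a->name && b->name) ? strcmp(a->name, b->name) : 0;
    }

    int main(void) {
        toy_node_t a = {42, "A"}, c = {42, "C"};
        toy_node_t key = {42, NULL}; /* like tmp in lookup_node_udp() */
        printf("A vs C: %d (nonzero: distinct entries)\n", toy_udp_compare(&a, &c));
        printf("key vs A: %d (zero: match)\n", toy_udp_compare(&key, &a));
        return 0;
    }

This also explains why node_del() and update_node_udp() now pass the
node itself to splay_delete(): with the name as a tie-breaker, the
address alone no longer uniquely identifies an entry in
node_udp_tree.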