wesher creates and manages an encrypted mesh overlay network across a group of nodes, using wireguard.
Its main use-case is adding low-maintenance security to public-cloud networks or connecting different cloud providers.
⚠ WARNING: since mesh membership is controlled by a mesh-wide pre-shared key, this effectively downgrades some of the security benefits from wireguard. See security considerations below for more details.
## Quickstart

Make sure the wireguard kernel module is available on all nodes. It is bundled with linux kernels newer than 5.6 and can otherwise be installed following the official installation instructions.
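A quick way to confirm this on a given node (run as root; this is just a convenience check, not something wesher itself requires):

```
# try to load the module; this fails if wireguard is unavailable
modprobe wireguard && echo "wireguard available"
```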
The following ports must be accessible between all nodes (see configuration options to change these):
- 51820 UDP
- 7946 UDP and TCP
Download the latest release for your architecture:
```
$ wget -O wesher https://github.com/costela/wesher/releases/latest/download/wesher-$(go env GOARCH)
$ chmod a+x wesher
```
On the first node, start the wesher daemon as root with no arguments. It will run in the foreground and - when running on a terminal - will currently output a generated cluster key as follows:
```
new cluster key generated: XXXXX
```
Note: to avoid accidentally leaking it in the logs, the created key will only be displayed if running on a terminal. When started via other means (e.g.: desktop session manager or init system), the key can be retrieved with:

```
# grep ClusterKey /var/lib/wesher/state.json
```
Lastly, on any further node:

```
# wesher --cluster-key XXXXX --join x.x.x.x
```

Where `XXXXX` is the base64-encoded 256-bit key printed by the step above, and `x.x.x.x` is the hostname or IP of any of the nodes already joined to the mesh cluster.
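Once a node has joined, the mesh can be inspected with the standard wireguard tooling. A minimal sketch of such a check (run as root), assuming the overlay interface keeps its default name; the interface name is configurable, see configuration options below:

```
# list peers and latest handshakes on the overlay interface
# ("wgoverlay" is an assumed interface name)
wg show wgoverlay
# peer hostnames should resolve via the managed /etc/hosts entries
ping some-peer-hostname
```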
Note: wireguard - and therefore wesher - need root access to work properly.

It is also possible to give the wesher binary enough capabilities to manage the wireguard interface via:

```
# setcap cap_net_admin=eip wesher
```

This will enable running as an unprivileged user, but some functionality (like automatically adding peer entries to /etc/hosts; see configuration options below) will not work.
### (optional) systemd integration
A systemd unit file is provided under the dist folder and can be copied to /etc/systemd/system/:

```
# wget -O /etc/systemd/system/wesher.service https://raw.githubusercontent.com/costela/wesher/master/dist/wesher.service
# systemctl daemon-reload
# systemctl enable wesher
```
The provided unit file assumes wesher is installed to a fixed path; check its ExecStart line and adjust it if your install location differs.

Note that, as mentioned above, the initial cluster key will not be displayed in the journal. It can either be initialized by running wesher manually once, or pre-seeded via /etc/default/wesher as the WESHER_CLUSTER_KEY environment variable (see configuration options below).
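For example, a minimal /etc/default/wesher could look like this (the key value is a placeholder):

```
# /etc/default/wesher (example pre-seeded key; placeholder value)
WESHER_CLUSTER_KEY=XXXXX
```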
## Installing from source
There are a couple of ways of installing wesher from source:

```
$ git clone https://github.com/costela/wesher.git
$ cd wesher
$ make
```
This method can build a bit-by-bit identical binary to the released ones, assuming the same go version is used to build the respective git tag.
Alternatively:

```
$ GO111MODULE=on go get github.com/costela/wesher
```

Note: this method will not provide a meaningful output for `--version`.
## Features

The wesher tool builds a cluster and manages the configuration of wireguard on each node to create peer-to-peer connections between all nodes, thus forming a full mesh VPN. This approach may not scale to hundreds of nodes (benchmarks accepted 😉), but is sufficiently performant to join several nodes across multiple cloud providers, or simply to secure inter-node communication in a single public cloud.
### Automatic key management
The wireguard private keys are created on startup for each node, and the respective public keys are then broadcast across the cluster.

The control-plane cluster communication is secured with a pre-shared AES-256 key. This key can be automatically created during startup of the first node in a cluster, or it can be provided (see configuration options). The cluster key must then be sent to other nodes via an out-of-band secure channel (e.g. ssh, cloud-init, etc.). Once set, the cluster key is saved locally and reused on the next startup.
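If you prefer to provide the key yourself rather than rely on autogeneration, any 32 random bytes encoded as base64 are in the expected format. A possible way to create one and hand it to the next node (the ssh destination and file path are illustrative):

```
# generate 32 random bytes, base64-encoded (the expected cluster key format)
KEY=$(head -c 32 /dev/urandom | base64)
# copy it to another node over ssh, an out-of-band secure channel
echo "$KEY" | ssh root@x.x.x.x 'cat > /root/wesher-cluster-key'
```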
### Automatic IP address management
The overlay IP address of each node is automatically selected out of a private network (10.0.0.0/8 by default; it MUST be different from the underlying network used for cluster communication) and is consistently hashed from the peer's hostname. The use of consistent hashing means a given node will always receive the same overlay IP address (see the limitations of this approach below).

Note: the node's hostname is also used by the underlying cluster management (via memberlist) to identify nodes and must therefore be unique in the cluster.
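Since the hostname is both the hashing input and the cluster identity, make sure every node has a distinct name before starting wesher; for example (the name is illustrative):

```
# on systemd-based distributions; other init systems differ
hostnamectl set-hostname node1
```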
### Automatic /etc/hosts management
To ease inter-node communication, wesher also adds entries to /etc/hosts for each peer in the mesh. This enables using the nodes' hostnames to ensure communication over the secured overlay network (assuming files is the first entry for hosts in /etc/nsswitch.conf).

See the configuration options below for how to disable this behavior.
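For illustration, a managed entry and the relevant lookup order look roughly like this (the address and hostname are made up):

```
# confirm that file-based entries take precedence for hostname lookups
grep '^hosts:' /etc/nsswitch.conf    # expected to start with: hosts: files ...
# a wesher-managed entry is a plain address/hostname pair, e.g.:
#   10.42.1.7 node2
grep node2 /etc/hosts
```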
### Seamless restarts

If a node in the cluster is restarted, it will attempt to re-join the last-known nodes using the same cluster key. This means a restart requires no manual intervention.
## Configuration options

All options can be passed either as command-line flags or environment variables; the flag for a given variable is its name without the `WESHER_` prefix, lowercased, with underscores replaced by dashes (e.g. `WESHER_OVERLAY_NET` becomes `--overlay-net`):
| Environment variable | Description | Default |
|---|---|---|
| `WESHER_CLUSTER_KEY` | shared key for cluster membership; must be 32 bytes, base64-encoded; will be generated if not provided | autogenerated/loaded |
| `WESHER_JOIN` | comma-separated list of hostnames or IP addresses of existing cluster members; if not provided, wesher will attempt to resume any known state or otherwise wait for further members | |
| `WESHER_INIT` | whether to explicitly (re)initialize the cluster; any known state from previous runs will be forgotten | |
| `WESHER_BIND_ADDR` | IP address to bind to for cluster membership (cannot be used with `--bind-iface`) | autodetected |
| `WESHER_BIND_IFACE` | interface to bind to for cluster membership (cannot be used with `--bind-addr`) | |
| `WESHER_CLUSTER_PORT` | port used for membership gossip traffic (both TCP and UDP); must be the same across the cluster | 7946 |
| `WESHER_WIREGUARD_PORT` | port used for wireguard traffic (UDP); must be the same across the cluster | 51820 |
| `WESHER_OVERLAY_NET` | network in which to allocate addresses for the overlay mesh network (CIDR format); smaller networks increase the chance of IP collisions | 10.0.0.0/8 |
| `WESHER_INTERFACE` | name of the wireguard interface to create and manage | |
| `WESHER_NO_ETC_HOSTS` | whether to skip writing hosts entries for each node in the mesh | |
| `WESHER_LOG_LEVEL` | set the verbosity (one of debug/info/warn/error) | |
| `WESHER_KEEPALIVE_INTERVAL` | interval at which to send keepalive packets | |
| `WESHER_REJOIN` | interval at which known nodes are re-joined if they have gone away; 0 disables rejoining altogether | |
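As an example of combining both forms, one could pass the secret via the environment and the rest as flags (run as root; all values are placeholders):

```
# key via environment, peer address and overlay network via flags
WESHER_CLUSTER_KEY=XXXXX wesher --join x.x.x.x --overlay-net 10.42.0.0/16
```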
## Running multiple clusters
To make a node a member of multiple clusters, simply start multiple wesher instances.

Each instance must have different values for the following settings (see the sketch after these lists):

- `--cluster-port`
- `--wireguard-port`
- `--interface`

The following settings are not required to be unique, but differing values are recommended:

- `--overlay-net` (to reduce the chance of node address conflicts; see overlay IP collisions below)
- `--cluster-key` (as a sensible security measure)
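A sketch of two instances on the same node, using the flag-naming convention described under configuration options (run as root; all values are placeholders):

```
# first cluster: default ports and interface
wesher --cluster-key KEY_A --join a.a.a.a

# second cluster: distinct interface, ports and overlay network
wesher --cluster-key KEY_B --join b.b.b.b \
    --interface wgoverlay2 --cluster-port 7947 --wireguard-port 51821 \
    --overlay-net 10.43.0.0/16
```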
## Security considerations

The decision of whom to allow into the mesh is made by memberlist and is secured by a cluster-wide pre-shared key. Compromise of this key will allow an attacker to:

- access services exposed on the overlay network
- impersonate and/or disrupt traffic to/from other nodes

It will not, however, allow the attacker to decrypt the traffic between other nodes.

This pre-shared key is currently static, set up during cluster bootstrapping, but will - in a future version - be rotated for improved security.
## Current known limitations
### Overlay IP collisions
Since the assignment of IPs on the overlay network is currently decided by the individual node and implemented as a naive hashing of the hostname, there is no guarantee that two hosts will not generate the same overlay IP. By the birthday approximation, for example, even with the default 10.0.0.0/8 network (about 2^24 addresses) a 100-node cluster collides with probability of roughly 100²/(2·2²⁴) ≈ 0.03%, and the chance grows quadratically with cluster size and with smaller overlay networks. This limitation may be worked around in a future version.
### Split-brain

Once a cluster is joined, there is currently no way to distinguish a failed node from an intentionally removed one. This is partially by design: growing and shrinking your cluster dynamically (e.g. via autoscaling) should be as easy as possible.

However, this does mean a longer connection loss between any two parts of the cluster (e.g. across a WAN link between different cloud providers) can lead to a split-brain scenario, where each side thinks the other side is simply "gone".

There is currently no clean solution for this problem, but one could work around it by designating edge nodes that run wesher with the --join option pointing to the other side.
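For example, one designated edge node per site could point at the other side (hostnames are illustrative), so the two halves find each other again once the link recovers:

```
# on the designated edge node in site A (run as root)
wesher --join edge-b.example.com
# on the designated edge node in site B
wesher --join edge-a.example.com
```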
Future versions might include the notion of a "static" node to more cleanly avoid this.