Come Register — Right Now, Over Me!

Currently Reading: The Manga Guide to Electricity, Kubernetes Up & Running, Linux Networking Internals.

In a previous post I asked, “What is Consul and what does it do?” This post is going to answer that question.

Consul is a key/value store (like etcd or ZooKeeper), but it also supports service discovery and health checking. Consul is also designed to span multiple data centers to increase redundancy.
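To make the key/value part concrete, here’s a tiny sketch of reading and writing Consul’s KV store with the official Go client (github.com/hashicorp/consul/api). The key name is just something I made up, and it assumes a Consul agent is running locally on the default port:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to a local Consul agent (default: 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	kv := client.KV()

	// Write a key (the key name is made up for this example).
	_, err = kv.Put(&api.KVPair{Key: "app/config/greeting", Value: []byte("hello")}, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Read it back.
	pair, _, err := kv.Get("app/config/greeting", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", pair.Key, pair.Value)
}
```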

But what is service discovery, and why do we need it?

We didn’t always need it, but we now live in a cloud-based world. This means everything is dynamic, scaling, and constantly moving due to failures. So any code that calls out to a service needs a way to be routed to the correct place, in case the place where that service lived has been blown up.

I found this diagram on the Nginx blog, and it describes the problem well:

(Source: https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/)

There are two discovery patterns that solve this problem: client-side discovery and server-side discovery.

Client-Side Discovery is when the client is responsible for discovering the location of the service, which it does by querying the service registry. The client then uses a load-balancing algorithm to decide which instance of the service to send its request to.
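Here’s a rough sketch of what client-side discovery could look like with Consul’s Go client. The service name “web” and the random-choice load balancing are placeholders I made up for the example:

```go
package main

import (
	"fmt"
	"log"
	"math/rand"

	"github.com/hashicorp/consul/api"
)

// pickInstance does client-side discovery: ask the registry for healthy
// instances, then load-balance (here: a random pick) inside the client.
func pickInstance(client *api.Client, service string) (string, error) {
	// passingOnly=true: only return instances whose health checks pass.
	entries, _, err := client.Health().Service(service, "", true, nil)
	if err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no healthy instances of %q", service)
	}
	// A trivial load-balancing algorithm; real clients often round-robin.
	e := entries[rand.Intn(len(entries))]
	return fmt.Sprintf("%s:%d", e.Service.Address, e.Service.Port), nil
}

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	addr, err := pickInstance(client, "web") // "web" is a made-up service name
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sending request to", addr)
}
```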

Server-Side Discovery is when the client makes a request to a load balancer, and the load balancer is responsible for querying the service registry and forwarding the request to an available instance.
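And a sketch of the server-side pattern: a toy load balancer that queries the registry on every request and forwards it. This is nowhere near production-ready (no error responses, no caching), and “web” is again a made-up service name:

```go
package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The load balancer (not the client) queries the registry per request.
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			entries, _, err := client.Health().Service("web", "", true, nil)
			if err != nil || len(entries) == 0 {
				return // a real implementation would return a 502 here
			}
			e := entries[rand.Intn(len(entries))]
			req.URL.Scheme = "http"
			req.URL.Host = fmt.Sprintf("%s:%d", e.Service.Address, e.Service.Port)
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```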

Cool. So service discovery exists because the location of services is constantly changing. We build registries and systems to handle incoming traffic and make routing to the correct place possible.

What is so special about Consul?

First, here is what Consul’s architecture looks like:

(Source: https://www.consul.io/docs/internals/architecture.html)

To start simple, let’s cover the two main protocols Consul uses: the Gossip Protocol and the Consensus Protocol.

Gossip Protocol (also known as Epidemic Protocol)

Gossip Protocol is a style of protocol intended to resemble how gossip travels through a workplace or high school. It works by randomly pairing computers so they can share information; once both computers have the same state, the interaction is done.
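Here’s a toy simulation of that idea (not Consul’s actual implementation, just the random-pairing concept): each round, every node swaps state with one random peer, and new information spreads through the group like a rumor:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Each node holds a version-stamped value; gossip spreads the newest one.
type node struct {
	value   string
	version int
}

// gossipRound pairs every node with one random peer (possibly itself, in
// this toy version) and they exchange state, keeping whichever value has
// the higher version. Heavily simplified: no failure detection, no network.
func gossipRound(nodes []*node) {
	for _, n := range nodes {
		peer := nodes[rand.Intn(len(nodes))]
		if peer.version > n.version {
			n.value, n.version = peer.value, peer.version
		} else if n.version > peer.version {
			peer.value, peer.version = n.value, n.version
		}
	}
}

func main() {
	nodes := make([]*node, 8)
	for i := range nodes {
		nodes[i] = &node{value: "old", version: 1}
	}
	nodes[0] = &node{value: "new", version: 2} // one node learns something

	// After a few rounds, the group converges on the newest state
	// (with high probability, for this toy version).
	for round := 1; round <= 5; round++ {
		gossipRound(nodes)
	}
	for i, n := range nodes {
		fmt.Printf("node %d: %s\n", i, n.value)
	}
}
```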

Consul uses SWIM (Scalable Weakly-consistent Infection-style Process Group Membership Protocol), with some minor adaptations such as Lifeguard, which you can read about in HashiCorp’s Lifeguard paper.

Consul creates two gossip pools (groupings of nodes that exchange information at random via SWIM): a LAN pool and a WAN pool. The LAN pool is used within a data center, which allows for fast and reliable detection of failures and new members. The WAN pool is used across data centers, which helps if an entire data center loses connectivity.

These two pools make up Consul’s node communication.
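If you’re curious what joining those pools looks like in code, here’s a sketch using the Go client’s Agent().Join, where the wan flag picks the pool; both addresses are made up:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	agent := client.Agent()

	// Join the LAN gossip pool (other agents in the same data center).
	// 10.0.0.5 is a made-up address of another local agent.
	if err := agent.Join("10.0.0.5", false); err != nil {
		log.Fatal(err)
	}

	// Join the WAN gossip pool (servers in a remote data center).
	// wan=true targets the WAN pool; 203.0.113.7 is also made up.
	if err := agent.Join("203.0.113.7", true); err != nil {
		log.Fatal(err)
	}
}
```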

Tip: Consul uses Serf’s libraries to implement the Gossip Protocol.

Consensus Protocol

A consensus protocol solves the problem of getting multiple processes to agree on a single value. The processes must be able to communicate with one another, put forth their values, and keep working even in the event of failures. Consul uses the Raft algorithm for this.

Raft nodes can be followers, candidates, or leaders. Logs (ordered sequences of entries) are the unit of work in the Raft system. All nodes start out as followers, which means they can accept log entries from a leader and cast votes. If a node doesn’t hear from the leader (or receive any log entries) for some time, it becomes a candidate. As a candidate, a node requests votes from the other nodes, and when a candidate receives a majority of the votes, it becomes the leader. The leader’s job is to accept new log entries and replicate them to all the followers. This process is called leader election, and there can only be one leader at a time.
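Here’s a heavily simplified sketch of those state transitions. Real Raft also tracks terms, persists state, and compares logs before granting votes, none of which is shown here:

```go
package main

import "fmt"

type state int

const (
	follower state = iota
	candidate
	leader
)

// electionStep is a toy version of Raft's leader-election rules,
// not a real implementation: no terms, no log comparison, no RPCs.
func electionStep(s state, heardFromLeader bool, votesReceived, clusterSize int) state {
	switch s {
	case follower:
		if !heardFromLeader {
			// Election timeout fired: become a candidate and request votes.
			return candidate
		}
	case candidate:
		if votesReceived > clusterSize/2 {
			// A majority of votes makes this node the leader.
			return leader
		}
	case leader:
		// The leader stays leader while it keeps heartbeating followers.
	}
	return s
}

func main() {
	s := follower
	s = electionStep(s, false, 0, 5) // timeout: follower -> candidate
	s = electionStep(s, false, 3, 5) // 3 of 5 votes: candidate -> leader
	fmt.Println(s == leader)         // true
}
```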

To learn more about Raft, check out this visualization: http://thesecretlivesofdata.com/raft/

Consul uses Raft (the Consensus Protocol) to elect a leader that handles all incoming transactions, and then uses SWIM (the Gossip Protocol) to message all nodes when leader election takes place. Using these two protocols together, Consul provides high availability and reliable communication across data centers.

What makes Consul so great is that it provides health checks, service discovery, and a key/value store all in one tool.
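For example, registering a service along with an HTTP health check takes just a few lines with the Go client (the service name, port, and health endpoint are all made up for this sketch):

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service with an HTTP health check. Consul will now poll
	// the endpoint every 10 seconds and only advertise the instance to
	// other clients while the check is passing.
	reg := &api.AgentServiceRegistration{
		ID:   "web-1", // made-up instance ID
		Name: "web",   // made-up service name
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://localhost:8080/health", // made-up endpoint
			Interval: "10s",
			Timeout:  "1s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("registered: Consul will health-check and advertise web-1")
}
```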

Now that I know what a service registry is and how Consul works, figuring out how to configure Spookernetes may be a bit easier. I’ve set up Consul on my Google Cloud instance and manually registered it with Nginx, so progress is being made! Configuring Registrator (automatic service registration) and Consul to work together over a bridged network has been a challenge, though. Consul and Registrator work over a host network, so if I can have them talk only via the host network while also being attached to the bridge network, then I’ll have overcome this roadblock.

Since I just had sinus surgery, I don’t have too many questions, other than why can’t I actually rest instead of study?