Distributed Lock implementation using Zookeeper in .NET Core

Mortaza Ghahremani
5 min read · Dec 28, 2021


Generally, locks are used to synchronize access to a shared resource by multiple threads. This is very important for protecting data consistency in a concurrent environment: only one thread at a time can acquire an exclusive lock and modify the shared resource in isolation from the others. In a standalone environment, these locks are provided to threads by the kernel.

But what about a distributed environment? How do we synchronize access to a shared resource from multiple processes? We need a distributed service that provides the same functionality: an exclusive lock for different processes, possibly running on different machines.

At a high level, there are two reasons why you might want a lock in a distributed application: for efficiency or for correctness.

· Efficiency: Taking a lock saves you from unnecessarily doing the same work twice (e.g., some expensive computation).

· Correctness: Taking a lock prevents concurrent processes from stepping on each other’s toes and messing up the state of your system.

Both are valid cases for wanting a lock, but you need to be very clear about which one of the two you are dealing with.

There are various technologies and tools for implementing a distributed lock, but the most popular are Redis, ZooKeeper, etcd, and Hazelcast.

The disadvantage of using Redis for a distributed lock is that a single-machine deployment is a single point of failure: as soon as that Redis instance goes down, no one can acquire the lock.

In master-slave mode, a lock is taken on only one node. Even if high availability is achieved through Sentinel, the lock may be lost if the master node fails and a master-slave switchover occurs.

Based on the above considerations, the author of Redis also recognized this problem and proposed the RedLock algorithm. However, RedLock is unnecessarily heavyweight and expensive for efficiency locks, yet not sufficiently safe for situations in which correctness depends on the lock.

Another flaw of RedLock is that it makes dangerous assumptions about timing and system clocks, and it violates its safety properties when those assumptions do not hold.

On the other hand, if you need locks for correctness, please don’t use Redlock. Instead, please use a proper consensus system such as Zookeeper.

ZooKeeper is a distributed, hierarchical file system that facilitates loose coupling between clients and provides an eventually consistent view of its znodes, which are like files and directories in a traditional file system. It provides basic operations such as creating, deleting, and checking existence of znodes. It provides an event-driven model in which clients can watch for changes to specific znodes, for example if a new child is added to an existing znode. ZooKeeper achieves high availability by running multiple ZooKeeper servers, called an ensemble, with each server holding an in-memory copy of the distributed file system to service client read requests. Each server also holds a persistent copy on disk.
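To make the event-driven watch model concrete, here is a toy in-memory stand-in written in plain C# (not the real ZooKeeper client; all names here are illustrative). It shows the key property of ZooKeeper watches: a watch is one-shot, so it fires on the next change to the watched znode and must be re-registered to observe further changes.

```csharp
using System;
using System.Collections.Generic;

// In-memory registry of one-shot watches, keyed by znode path.
var watches = new Dictionary<string, List<Action<string>>>();
var fired = new List<string>();

// Register a callback to fire on the next change to the given path.
void Watch(string path, Action<string> cb)
{
    if (!watches.TryGetValue(path, out var list))
        watches[path] = list = new List<Action<string>>();
    list.Add(cb);
}

// Simulate a change to a znode (e.g. a child added, or the node deleted).
void Change(string path)
{
    // One-shot semantics: deregister the watches before firing them.
    if (watches.Remove(path, out var list))
        foreach (var cb in list) cb(path);
}

Watch("/locks/my_lock", p => fired.Add("changed: " + p));
Change("/locks/my_lock");   // fires the watch
Change("/locks/my_lock");   // no watch registered any more -> silent
Console.WriteLine(string.Join("; ", fired));   // changed: /locks/my_lock
```

The one-shot design is why lock recipes re-register a watch after each notification before re-checking the state of the lock node.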

Netflix developed a ZooKeeper client framework, Curator (now Apache Curator). It is a Java implementation, however, so I have implemented an abstraction over existing ZooKeeper libraries in .NET 6.

Let’s review some basic concepts of ZooKeeper to clarify how it can provide a lock mechanism.

Node(znode)

The data model of ZooKeeper is a tree (the znode tree). Each node in the tree is addressed by a slash-separated (/) path and is called a znode (e.g. /locks/my_lock). Each znode stores its own data content along with a set of attributes.

Znodes can be divided into the following four types:

· Persistent node: After the node is created, it exists until explicitly deleted; it is not removed when the client session ends.

· Persistent sequential node: The basic behavior is the same as a persistent node, but during creation ZooKeeper automatically appends a monotonically increasing number suffix to the requested name to form the node name.

· Ephemeral (temporary) node: The node is deleted automatically when the client session expires or the connection is closed.

· Ephemeral sequential node: The basic behavior is the same as an ephemeral node, with ZooKeeper automatically appending a monotonically increasing number suffix to the name during creation.
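The sequential-node naming can be sketched in plain C# (a hypothetical illustration, not the real server logic). ZooKeeper appends a 10-digit, zero-padded counter to the requested name, and that padding is what makes an ordinary lexicographic sort of the children match creation order; the prefix lock- below is just an illustrative name.

```csharp
using System;
using System.Linq;

// ZooKeeper formats the sequence counter as a 10-digit, zero-padded suffix.
string SequentialName(string prefix, int counter) => prefix + counter.ToString("D10");

var names = Enumerable.Range(0, 3)
    .Select(i => SequentialName("lock-", i))
    .ToArray();

foreach (var n in names) Console.WriteLine(n);
// lock-0000000000, lock-0000000001, lock-0000000002 ...
// Zero-padding means lexicographic order equals creation order:
Console.WriteLine(names.SequenceEqual(names.OrderBy(n => n, StringComparer.Ordinal)));
```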

The ZooKeeper distributed lock is built on ephemeral sequential nodes. A lock corresponds to a znode on ZooKeeper; when a client needs to acquire the lock, it creates an ephemeral sequential node under that znode. When multiple clients try to acquire the lock at the same time, several ephemeral sequential nodes are created, but only the client whose node has the lowest sequence number acquires the lock. Each of the other clients watches the node immediately before its own, in sequence order, and acquires the lock as soon as that predecessor releases it.

For example: Client A and Client B try to acquire the lock at the same time, so each creates an ephemeral sequential node (1 and 2) under the locks node. Node 1 has the lowest sequence number under the lock’s node, so Client A acquires the lock while Client B waits, watching node 1. When Client A releases the lock (node 1 is deleted), node 2 becomes the lowest-numbered node under the lock’s node, and Client B acquires the lock.
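The ordering rule described above can be sketched in plain C#, with no live ZooKeeper needed (node names here are illustrative): sort the children of the lock znode, the lowest sequence number holds the lock, and every other client watches only its immediate predecessor, which avoids a herd of watchers all waking on one node.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Given the children of the lock znode, determine the lock holder and
// which node each waiting client should watch.
(string Holder, Dictionary<string, string> Watches) ResolveLock(IEnumerable<string> children)
{
    var sorted = children.OrderBy(c => c, StringComparer.Ordinal).ToList();
    var watches = new Dictionary<string, string>();
    for (int i = 1; i < sorted.Count; i++)
        watches[sorted[i]] = sorted[i - 1];   // each waiter watches its immediate predecessor
    return (sorted[0], watches);
}

// Clients A and B created these ephemeral sequential nodes under /locks/my_lock:
var children = new List<string> { "lock-0000000002", "lock-0000000001" };
var state = ResolveLock(children);
Console.WriteLine("holder: " + state.Holder);                               // Client A's node
Console.WriteLine("lock-0000000002 watches " + state.Watches["lock-0000000002"]);

children.Remove(state.Holder);   // Client A releases the lock (its node is deleted)
Console.WriteLine("new holder: " + ResolveLock(children).Holder);           // Client B's node
```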

These two NuGet packages are available for using ZooKeeper in .NET projects:

· ZooKeeperNetEx

· ZooKeeperNetEx.Recipes

So, I have developed an abstraction over these packages to simplify using ZooKeeper as a distributed lock in .NET Core projects; you can find it in this repository:

You can also install the NuGet package with the command below:

Install-Package LockManagement -Version 1.0.1

Usage of this package for distributed lock is very easy and straightforward:

public class MyService
{
    private readonly IDistributedLockerFactory distributedLockerFactory;
    private string data = "";

    public MyService(IDistributedLockerFactory distributedLockerFactory)
    {
        this.distributedLockerFactory = distributedLockerFactory;
    }

    public async Task<string> LockTest()
    {
        var locker = distributedLockerFactory.Create("foo").GetLocker();
        await locker.LockAsync(DoAction);
        return data;
    }

    // Callback invoked while holding the lock
    private void DoAction()
    {
        // Do some action that needs to be synchronized across distributed processes
        Thread.Sleep(2000);
        data = "lock acquired";
    }
}

