Redis — [Notes]

Tarun Jain
14 min read · May 12, 2022


Follow this link for all System design articles

REDIS = REMOTE DICTIONARY SERVER

Redis is an open-source, in-memory data structure store that can be used as a database, a cache, and a message broker.

It supports multiple data structures, such as Strings, Hashes, Lists, Sets, Sorted Sets (ZSets) with range queries, Bitmaps, HyperLogLogs, and Geospatial indexes with radius queries. The most commonly used types are String, List, Set, Hash, and ZSet.

Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and it provides high availability through Redis Sentinel and automatic partitioning with Redis Cluster.

Redis also provides persistence options that let you save data to disk. Depending on the situation, the dataset can be exported to disk at regular intervals (an RDB snapshot), or every write command can be appended to a log (AOF, an append-only file): as write commands execute, they are also copied to disk. You can also turn persistence off entirely and use Redis purely as an efficient network cache.

Redis does not use tables, and it does not predefine a schema or force users to associate the different data stored in it.

By storage medium, databases can be divided into disk-based databases and in-memory databases. Redis keeps its data in memory, so reads and writes are not limited by disk I/O speed, which is why it is extremely fast.

(1) Working mode of a disk-based database: data passes through disk I/O on every read and write.

(2) Working mode of an in-memory database: data is read and written directly in memory.

How fast is Redis

Redis is a memory-based KV database using a single-process, single-threaded model, written in C. The official figure is 100,000+ QPS (queries per second), which is no worse than Memcached, a memory-based KV store that is single-process but multi-threaded! If you are interested, see the official benchmark page "How fast is Redis?" (https://redis.io/topics/benchmarks).

In the benchmark chart, the horizontal axis is the number of connections and the vertical axis is QPS. What the chart conveys is an order of magnitude; try to state it correctly in interviews, because an answer that is off by orders of magnitude stands out badly!
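To try this yourself, the official redis-benchmark utility reports throughput per command. A hedged invocation against a local server (-q prints only the summary, -c sets the number of parallel connections, -n the total requests, -t the commands to test):

$ redis-benchmark -q -c 50 -n 100000 -t get,set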

Why is Redis so fast

1. It is completely memory-based, and most requests are pure memory operations, which are very fast. The data is stored in memory in structures similar to a HashMap, whose advantage is that lookups and updates run in O(1) time;

2. The data structure is simple and the data operation is also simple. The data structure in Redis is specially designed;

3. It uses a single thread, which avoids unnecessary context switches and race conditions: there is no CPU spent on switching between processes or threads, no locks to reason about, and no lock acquire/release overhead or performance loss from possible deadlocks;

4. It uses an I/O multiplexing model with non-blocking I/O;

5. The underlying model is different: Redis builds its own VM mechanism and its own protocol for communicating with clients, because going through generic system calls would waste time moving and requesting data;

The points above are easy to understand. Below we briefly discuss the I/O multiplexing model:

(1) The I/O Multiplexing Model

The I/O multiplexing model uses select, poll, or epoll to monitor the I/O events of multiple streams at the same time. When idle, the current thread blocks; when one or more streams have an I/O event, it wakes up, and the program then polls the streams (epoll polls only the streams that actually raised events) and processes the ready ones in sequence. This approach avoids a great deal of useless work.

Here "multiplexing" means handling multiple network connections while reusing the same thread. I/O multiplexing lets a single thread efficiently handle many connection requests (minimizing the time spent on network I/O), and Redis manipulates data in memory very quickly, so in-memory operations never become the bottleneck. Together, these points give Redis its high throughput.
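As a bare-bones illustration of the primitive itself, here is a minimal Python sketch using select (Redis implements this loop in C in its own ae event library, preferring epoll/kqueue where available; the ports are illustrative):

import select, socket

# Two listening sockets stand in for "multiple streams" watched by one thread.
a = socket.socket(); a.bind(("0.0.0.0", 9001)); a.listen()
b = socket.socket(); b.bind(("0.0.0.0", 9002)); b.listen()

while True:
    # Block until at least one socket has an I/O event, then wake up.
    ready, _, _ = select.select([a, b], [], [])
    for s in ready:                    # only the ready streams are processed
        conn, addr = s.accept()
        conn.close()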

So why is Redis single-threaded

First, understand that all of the analysis above builds up the case for how fast Redis is! The official FAQ says that because Redis operates on memory, the CPU is not its bottleneck; the bottleneck is most likely machine memory size or network bandwidth. Since single-threading is easy to implement and the CPU will not be the bottleneck, a single-threaded design is the logical choice (after all, multithreading causes a lot of trouble!).

You can refer to: https://redis.io/topics/faq

Seeing this, you may want to cry! You might have expected some deep technical trick that lets single-threaded Redis be this fast, but the official answer seems almost dismissive. Still, we can now clearly explain why Redis is so fast, and since it is already fast in single-threaded mode, there is no need to use multiple threads!

However, a single thread cannot exploit the performance of a multi-core CPU; we can compensate by running multiple Redis instances on a single machine!

Warning 1: the single thread we keep emphasizing handles only network requests. A real Redis server certainly runs with more than one thread, so pay attention here! For example, during persistence Redis does the work in a child process or a child thread (which of the two applies in which case is left for the reader to investigate). For instance, you can look at the Redis process on a test server and then list the threads under that process:

The "-T" option of the ps command shows threads (Show threads, possibly with SPID column). The "SPID" column is the thread ID, and the "CMD" column is the thread name.
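For example (hedged commands; replace the PID placeholder with the actual redis-server process id on your machine):

$ ps -ef | grep redis-server     # find the redis-server PID
$ ps -T -p <pid>                 # list the threads under that process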

Warning 2: the last paragraph of the FAQ states that multithreading is supported starting from Redis 4.0, but only for certain operations! Whether later versions keep the command path single-threaded is left for readers to verify.

Pay attention

1. We know Redis uses a "single thread plus I/O multiplexing" model to deliver a high-performance in-memory data service. This design avoids locks, but it also means that time-consuming commands such as SUNION reduce Redis's concurrency: because there is only one thread, only one operation runs at a time, so a slow command lowers both read and write concurrency. A single thread can use only one CPU core, so on a multi-core server you can start multiple instances, arrange them as master-master or master-slave, and run the time-consuming read commands entirely on the slave.

The redis.conf items that need to be changed:

pidfile /var/run/redis/redis_6377.pid  # add the port number to the pidfile
port 6377                              # this one must be changed
logfile /var/log/redis/redis_6377.log  # add the port number to the logfile name too
dbfilename dump_6377.rdb               # add the port number to the rdb filename too

2. "We cannot leave load balancing to the operating system, because we know our own programs better. We can manually assign CPU cores to them without taking up too much CPU, instead of letting our key processes huddle together with a heap of other processes."
Because of the single-threaded model, the CPU matters: Redis prefers a fast CPU with a large cache over many cores.

On multi-core CPU servers, Redis performance also depends on the NUMA configuration and on where processes are placed. The most visible effect is that redis-benchmark results look random when CPU cores are assigned arbitrarily. To get accurate results, pin processes with a CPU-affinity tool (taskset is available on Linux). The most effective setup is to put the client and server on two different cores of the same CPU so they share the L3 cache.
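A hedged example of such pinning (taskset and these redis-benchmark flags are real; the core numbers and config path are illustrative):

$ taskset -c 1 redis-server /etc/redis/redis.conf    # pin the server to core 1
$ taskset -c 2 redis-benchmark -q -n 100000          # pin the client to core 2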

Expansion

The following are also several models you should know, and good luck with your interviews!

1. Single-process multi-threading model: MySQL, Memcached, Oracle (Windows version);

2. Multi-process model: Oracle (Linux version);

3. Nginx has two types of processes, one is called the Master process (equivalent to the management process), and the other is called the Worker process (the actual work process). There are two ways to start:

(1) Single-process startup: At this time, there is only one process in the system, which acts as both the role of the Master process and the role of the Worker process.

(2) Multi-process startup: At this time, the system has one and only one Master process, and at least one Worker process is working.

(3) The Master process mainly performs global initialization and manages the Workers; event handling happens in the Worker processes.

Redis single thread

  • In the beginning, Redis adopted a single-threaded model.
  • Because Redis is a memory-based database that keeps all of its data in memory, a single thread operates on it most efficiently.
  • Multithreading spends a lot of time on context switching; for an in-memory system, a single thread yields higher efficiency.
  • However, a single thread does not mean that the whole Redis has only one thread. Other Redis modules have their own threads.

Redis is an open-source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.

Data types supported

  • Binary-safe strings (max limit is 512MB, can be used for images etc.)
  • List (basically linked list)
  • Sets (Unique unsorted string elements)
  • Sorted sets (Sets with every string having a score)
  • Hashes (similar to ruby or python hashes)
  • Bit arrays (or simply bitmaps; handle strings like an array of bits)
  • HyperLogLogs (estimate cardinality of set) — probabilistically count with an error margin of 2%
  • Streams (append-only collection of map-like entries)
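A quick illustration of several of these types, as a minimal sketch using the redis-py client (key names are illustrative; assumes a local server on the default port):

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("page:home", "<html>...</html>")                   # binary-safe string
r.rpush("queue:jobs", "job1", "job2")                    # list (linked list)
r.sadd("tags:post:1", "redis", "cache")                  # set of unique members
r.hset("user:1", mapping={"name": "Ada", "city": "NY"})  # hash
r.zadd("leaderboard", {"alice": 120, "bob": 95})         # sorted set with scores
r.pfadd("visitors", "1.2.3.4", "5.6.7.8")                # HyperLogLog (~2% error)
print(r.pfcount("visitors"))                             # estimated cardinality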

Redis Database File

  • Popularly called RDB
  • Snapshot style persistence
  • Gotchas:
    ➡ Suppose an RDB snapshot is taken every 5 minutes (you set the interval to 5 minutes)
    ➡ What if the server goes down 3 minutes after the snapshot? Those 3 minutes of writes are lost.
    ➡ Needs to fork, which is time-consuming for big datasets.

Append Only File

  • Popularly known as AOF
  • Changelog-style persistence format
  • appendfsync can be “no/everysec/always”
  • Gotchas:
    ➡ AOF files are bigger than the equivalent RDB files
    ➡ AOF can be slower depending on the sync policy
    ➡ There are cases where AOF might not reproduce exactly the same dataset on reloading

BEST WAY to recover data

  • Do both RDB and AOF (set sync to everysec)
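A hedged redis.conf sketch for this hybrid setup (the directives are real; the thresholds are illustrative):

save 300 1            # RDB: snapshot if at least 1 key changed in 300 seconds
save 60 10000         # RDB: snapshot if at least 10000 keys changed in 60 seconds
appendonly yes        # enable the AOF
appendfsync everysec  # fsync the AOF once per second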

The “Duct-Taped Life Boat” anti-pattern

Questions:

→ Single-threaded? How does it work fine with multithreaded apps?
Redis is single-threaded, so how does it do concurrent I/O?

OK, Redis is single-threaded at user level; on the other hand, all asynchronous I/O is supported by kernel thread pools and/or split-level drivers.

'Concurrent', to some, includes distributing network events to socket state machines. Since it is single-threaded and runs on one core (at user level), I would not call this concurrent. Others differ.

'Scales indefinitely in terms of I/O concurrency' is being economical with the truth. It would be more believable to say 'can scale better than one thread per client, provided the clients don't ask for much', though one might then feel obliged to add 'but it gets blown away under heavy load by other async solutions that use all cores at user level'.

How can Redis use multiple CPUs or cores?

It's not very frequent that the CPU becomes your bottleneck with Redis, as usually Redis is either memory- or network-bound. For instance, when using pipelining, a Redis instance running on an average Linux system can deliver 1 million requests per second, so if your application mainly uses O(N) or O(log(N)) commands, it is hardly going to use too much CPU.
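For example, a minimal pipelining sketch with redis-py (key names are illustrative); all the queued commands go to the server in a single round trip:

import redis

r = redis.Redis()
pipe = r.pipeline(transaction=False)   # plain pipelining, no MULTI/EXEC
for i in range(1000):
    pipe.set(f"key:{i}", i)
replies = pipe.execute()               # one network round trip for 1000 SETs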

However, to maximize CPU usage you can start multiple instances of Redis in the same box and treat them as different servers. At some point a single box may not be enough anyway, so if you want to use multiple CPUs you can start thinking of some way to shard earlier.

You can find more information about using multiple Redis instances on the Partitioning page.

As of version 4.0, Redis has started implementing threaded actions. For now this is limited to deleting objects in the background and blocking commands implemented via Redis modules. For subsequent releases, the plan is to make Redis more and more threaded.

What’s the Redis memory footprint?

To give you a few examples (all obtained using 64-bit instances):

  • An empty instance uses ~ 3MB of memory.
  • 1 Million small Keys -> String Value pairs use ~ 85MB of memory.
  • 1 Million Keys -> Hash value, representing an object with 5 fields, use ~ 160 MB of memory.

Testing your use case is trivial. Use the redis-benchmark utility to generate random data sets then check the space used with the INFO memory command.
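For example (hedged commands; -r draws random keys over the given keyspace, -d sets the value size in bytes):

$ redis-benchmark -t set -n 1000000 -r 1000000 -d 32   # write ~1M random 32-byte values
$ redis-cli INFO memory | grep used_memory_human        # check the space used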

64-bit systems will use considerably more memory than 32-bit systems to store the same keys, especially if the keys and values are small. This is because pointers take 8 bytes in 64-bit systems. But of course the advantage is that you can have a lot of memory in 64-bit systems, so in order to run large Redis servers, a 64-bit system is more or less required. The alternative is sharding.

How can Redis give multiple responses for multiple users with a single thread mechanism?

When you build a server program, you have two concurrency choices — threads or events.

Scalable I/O: Events- Vs Multithreading-based

To understand why Redis settles on a straightforward event model, it helps to see how you would implement a web server from scratch.

Here’s how to build a classic fork-based web server (e.g. Apache 1.x)

  • A master process opens a socket and reserves the port 80 with listen(2)
  • Wait for a client by accept(2) — it will block there until a request comes in.
  • When there’s an incoming request, accept(2) returns, then fork(2) off a new child process to work on the request, and the master returns to accept(2) immediately — it’s a simple loop like that.
  • Since the child process has inherited the socket from the parent, it can send the response directly back to the client, then exit to free its memory. A sketch follows below.
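A minimal sketch of that fork-per-request loop in Python (Unix-only; port 8080 instead of 80 so it runs without root):

import os, socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8080))
sock.listen(128)                       # listen(2)

while True:
    conn, addr = sock.accept()         # blocks until a request comes in
    if os.fork() == 0:                 # fork(2): the child handles the request
        sock.close()                   # the child does not accept new clients
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\n")
        conn.close()
        os._exit(0)                    # child exits, freeing its memory
    conn.close()                       # parent drops its copy, loops back to accept(2)
    # (a production server would also reap exited children with waitpid)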

Congratulations! You’ve just built a web server. But as you can see, it creates a new child process every time you have a new request, and shuts it down when you are finished.

Remember that a child process is an exact copy of the parent, so N children take roughly N times the memory of the master process. Also, fork(2) has some overhead even with copy-on-write optimization on a modern OS.

Now you see a problem with a fork-based web server, so let’s build a thread-based web server next. (e.g. Apache MPM)

  • A main thread opens a socket and reserves the port 80 with listen(2)
  • Wait for a client by accept(2) — it will block there until a request comes in.
  • When there's an incoming request, accept(2) returns; the main thread puts the request into a FIFO queue and goes back to accept(2) immediately.
  • A pool of threads (workers) are waiting on the queue, and whichever thread comes first will take the request, process it and send a response back to the client, then return to waiting on the queue.
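The same server, thread-pool style, as a minimal Python sketch (the pool size of 8 is illustrative):

import socket, threading, queue

jobs = queue.Queue()                   # the FIFO queue of accepted connections

def worker():
    while True:
        conn = jobs.get()              # block until a request is queued
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\n")
        conn.close()

for _ in range(8):                     # pre-defined number of workers
    threading.Thread(target=worker, daemon=True).start()

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8080))
sock.listen(128)
while True:
    conn, addr = sock.accept()         # the main thread only accepts
    jobs.put(conn)                     # whichever worker is free takes it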

As you can see, there’s a pre-defined number of workers inside a process, so you have much more control over memory usage. Also, the worker threads are reused, and therefore don’t have the overhead of launch and exit per request.

It seems that you now have a sound web server, but there’s a catch — what if your typical requests take very long, or you have 10,000 concurrent clients?

A simple fact is that even with a modern OS, running 10,000 threads is unrealistic. The cost of context switching will grow to the point where it’s the dominant factor of slow down. Due to Amdahl’s law, it will never be solved completely.

That’s where you want to enter an event-based web server — let’s build one. (e.g. Nginx)

  • The main thread opens a socket and reserves the port 80 with listen(2)
  • Wait for multiple sockets at the same time using epoll/kqueue; this is the only place the main thread blocks. You proceed only when the kernel notifies you that something you are waiting on is ready.
  • When there's an incoming request, the kernel notifies you: epoll/kqueue returns, and you can accept(2) without blocking. Then you start processing the request directly within the main thread.
  • When there’s an I/O wait (e.g. database query) inside a request, register it also so that the kernel will notify you when the database query is returned. Then go back to the epoll/kqueue loop immediately.
  • The main thread switches context in the app logic, from accepting a request to processing a request, then to sending the response back to the client.
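And the event-based version, sketched with Python's selectors module (DefaultSelector picks epoll on Linux and kqueue on BSD/macOS); Redis implements the equivalent loop itself in C:

import selectors, socket

sel = selectors.DefaultSelector()      # epoll/kqueue under the hood

def on_accept(listener):
    conn, addr = listener.accept()     # will not block: the kernel said it is ready
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, on_read)

def on_read(conn):
    data = conn.recv(4096)
    if data:
        conn.send(b"HTTP/1.0 200 OK\r\n\r\nhello\n")
    else:                              # client closed the connection
        sel.unregister(conn)
        conn.close()

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 8080))
sock.listen(128)
sock.setblocking(False)
sel.register(sock, selectors.EVENT_READ, on_accept)

while True:                            # the only place the main thread blocks
    for key, events in sel.select():
        key.data(key.fileobj)          # dispatch to on_accept or on_read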

As you can see, this model completely changes how you write your application logic, and that change can be enormous. You can't even block on a database query. But the reward is also great: epoll/kqueue lets you wait on tens of thousands of sockets at the same time, so it scales very far in terms of concurrency.

Another good thing is that since everything will run inside a single thread, there’s no race condition which can be agonizing with threaded programs.

A big, serious downside of this model is that you are limited within a single CPU. The main thread does everything, and that means you are confined to a single thread context. When the CPU is saturated, latency will add up.

This means that it works best when there are many connections but most of them are not active — a chat app is a perfect use case for that.

Finally, why does the event-based model work so well for Redis?

It’s because Redis is a server that constructs data structures such as lists, hashes and sets.

If you are familiar with multi-thread programming, you know how hard it is to safely write to them protected with a mutex, semaphore, etc.

But for Redis, thanks to the race-condition-free property of being event-based, it’s easy to modify such data structures without risking corruption. It dramatically contributes to the simplicity of its codebase.

A nice side effect is that Redis can handle tens of thousands of simultaneous connections — which is impossible with databases such as PostgreSQL or MySQL, as they embrace thread- or process-based concurrency models.

Now that you have a better understanding of its architecture, you should be able to infer more implications of the design — or better yet, you can build your own server. :)

Redis is single-threaded, so how can it be so fast?

You're not missing anything, besides perhaps the fact that most operations in Redis complete in a couple of microseconds (not even a millisecond). Long-running operations indeed block the server during their execution.
