
Redis memory monitoring and memory consumption

Redis is an in-memory database: its data lives in memory, so reads and writes are far faster than with a traditional database that stores data on disk. Monitoring Redis's memory consumption and understanding the Redis memory model are therefore essential for running Redis efficiently and stably over the long term.

Memory usage statistics

Redis exposes memory-related statistics through the info memory command. The most important fields are listed and explained below:

used_memory: total memory allocated by the Redis memory allocator, i.e. the memory occupied by all data stored internally, in bytes
used_memory_human: used_memory in a human-readable format
used_memory_rss: total physical memory occupied by the Redis process, as seen by the operating system
used_memory_rss_human: used_memory_rss in a human-readable format
used_memory_peak: peak memory usage, i.e. the high-water mark of used_memory
used_memory_peak_human: used_memory_peak in a human-readable format
used_memory_lua: memory consumed by the Lua engine
mem_fragmentation_ratio: the ratio used_memory_rss / used_memory, which roughly represents the memory fragmentation rate
maxmemory: the maximum memory Redis is allowed to use, in bytes; 0 means no limit
maxmemory_policy: the eviction policy used when maxmemory is reached; one of noeviction, allkeys-lru, volatile-lru, allkeys-random, volatile-random or volatile-ttl. The default is noeviction, i.e. nothing is evicted
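For example, a trimmed info memory reply from a nearly empty test instance might look like the following; the figures are only illustrative, and the exact set of fields depends on the Redis version:

  127.0.0.1:6379> info memory
  # Memory
  used_memory:871840
  used_memory_human:851.41K
  used_memory_rss:3127296
  used_memory_rss_human:2.98M
  used_memory_peak:871840
  used_memory_peak_human:851.41K
  used_memory_lua:37888
  mem_fragmentation_ratio:3.59
  maxmemory:0
  maxmemory_policy:noeviction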

When mem_fragmentation_ratio > 1, part of the memory is not being used for data storage but is consumed by fragmentation; the larger the value, the more severe the fragmentation. When mem_fragmentation_ratio < 1, the operating system has usually swapped part of Redis's memory out to disk. Pay special attention to this case: because disk is far slower than memory, performance degrades badly and the instance may even appear to hang.

When Redis's memory usage exceeds the available physical memory, the operating system starts swapping, writing old pages out to disk. Reading from and writing to disk is roughly five orders of magnitude slower than memory access. The used_memory metric helps you judge whether Redis is at risk of being swapped, or has already been swapped.

The Redis Administration page (link at the end of this article) recommends configuring swap space equal to the amount of physical memory. Without swap, if Redis suddenly needs more memory than the operating system has available, it will be killed outright by the Linux kernel's OOM killer. Although Redis performance degrades when its data is swapped out, that is still better than being killed.
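To check whether parts of the Redis process have actually been swapped out, you can sum the Swap entries of its memory map under /proc. A minimal sketch, assuming a Linux host running a single redis-server process:

  # Sum the Swap: fields of the redis-server process's memory map;
  # a non-trivial total means part of Redis's address space is on disk
  cat /proc/$(pidof redis-server)/smaps | grep '^Swap:' | \
    awk '{ sum += $2 } END { print sum " kB swapped" }'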

Redis uses the maxmemory parameter to limit the maximum memory it will use. The main purposes of this limit are:

    In caching scenarios, to free space by evicting keys (for example with an LRU policy) when memory usage exceeds maxmemory.

    To prevent Redis from using more memory than the server physically has, which would end with the process being killed by the system's OOM killer.

maxmemory limits the memory that Redis itself accounts for, i.e. the memory reported as used_memory. The actual memory consumption of the process can be larger than the configured maxmemory, so be careful that this extra memory does not lead to an OOM kill. If the machine has 10GB of memory, it is best to set maxmemory to about 8 or 9GB.
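A redis.conf sketch of this kind of limit, with illustrative values for a machine with 10GB of physical memory; the allkeys-lru policy here assumes a pure caching workload:

  # Cap Redis's own accounting (used_memory) well below physical memory,
  # leaving headroom for buffers, fragmentation and fork children
  maxmemory 8gb
  # Evict least-recently-used keys across all keys once the cap is hit
  maxmemory-policy allkeys-lru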

Breakdown of memory consumption

The memory consumed by a Redis process consists of: the process's own memory + object memory + buffer memory + memory fragmentation. The memory used by an empty Redis process itself is very low: used_memory_rss is typically around 3MB and used_memory around 800KB, so the footprint of an empty Redis process can be ignored.

Object Memory

Object memory is the largest part of Redis's memory footprint: it stores all of the user's data. Since all Redis data uses the key-value model, every key-value pair creates at least two objects, a key object and a value object. Object memory can be roughly understood as the memory consumed by these two kinds of objects (plus associated information such as expiration data). Key objects are strings; their memory cost is easy to overlook when using Redis, so avoid long keys. For more on the Redis object system, see my earlier article on Redis data structures and the object system.
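To get a feel for the per-key cost, Redis 4.0 and later provide the memory usage command, which reports the number of bytes attributed to a single key (key object, value object and bookkeeping). The key name below is made up, and the exact number varies with Redis version and encoding:

  127.0.0.1:6379> set user:10086:name "redis"
  OK
  127.0.0.1:6379> memory usage user:10086:name
  (integer) 62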

Buffer Memory

Buffer memory includes: client buffers, the replication backlog buffer, and the AOF rewrite buffer.

Client buffers are the input and output buffers of all TCP connections to the Redis server.

The input buffer cannot be configured; its maximum size is 1GB, and a client that exceeds it is disconnected. Input buffers are also not governed by maxmemory: suppose an instance has maxmemory set to 4GB and already stores 2GB of data; if the input buffers then use 3GB, the effective memory usage exceeds the maxmemory limit, which may lead to data loss, key eviction, or OOM.

An oversized input buffer usually has two causes: Redis cannot process commands as fast as they arrive, or the input buffer contains a large number of commands involving big keys.

Output buffers are controlled by the client-output-buffer-limit parameter, which has the following format:

  client-output-buffer-limit [class] [hard limit] [soft limit] [duration]

The hard limit means that once the buffer reaches this size, Redis immediately closes the connection. The soft limit works together with the duration: for example, with a soft limit of 64mb and a duration of 60, Redis closes the connection only if the buffer stays above 64mb for 60 consecutive seconds.

Normal clients are all connections other than replication and pub/sub clients. The default Redis configuration is client-output-buffer-limit normal 0 0 0, i.e. Redis places no limit on normal clients' output buffers. A single normal client's memory consumption is usually negligible, but with a large number of slow clients this memory can no longer be ignored; you can set maxclients to bound the number of connections. Be especially careful with commands that return large amounts of data, or whose output cannot be pushed to the client fast enough, such as monitor: they can make the Redis server's memory spike suddenly. For a related case, see Meituan's article on Redis pitfalls (part 3, a sudden surge in Redis memory usage).

Replication clients are the connections the master node establishes to each slave (replica) node, with a default configuration of client-output-buffer-limit slave 256mb 64mb 60, applied separately to each replica. When the network between master and replicas has high latency, or when many replicas are attached to one master, this part can consume a large amount of memory. It is recommended to attach no more than two replicas to a master, and not to deploy the master in a poor network environment such as across distant data centers, to prevent slow replication connections from overflowing the buffer. Note that master-slave replication involves two kinds of buffers: one is the replication client's output buffer, and the other is the replication backlog buffer, which is introduced below.

Pub/sub clients are used for publish/subscribe; each subscribing connection has its own output buffer, with a default configuration of client-output-buffer-limit pubsub 32mb 8mb 60. When messages are produced faster than subscribers consume them, the output buffer backs up and can cause a memory overflow.
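For reference, the defaults for the three client classes as they would appear in redis.conf (newer Redis versions also spell the slave class replica):

  client-output-buffer-limit normal 0 0 0
  client-output-buffer-limit slave 256mb 64mb 60
  client-output-buffer-limit pubsub 32mb 8mb 60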

In high-traffic scenarios, input and output buffers easily get out of control and destabilize Redis memory, so it is important to monitor them. You can periodically run the client list command to inspect, among other things, the size of each client's input and output buffers.

qbuf: query (input) buffer length, in bytes (0 means no query buffer is allocated)
qbuf-free: free space remaining in the query buffer, in bytes (0 means no free space left)
obl: output buffer length, in bytes (0 means no output buffer is allocated)
oll: number of objects in the output list (when the output buffer has no free space left, command replies are queued in this list as string objects)

  127.0.0.1:6379> client list
  id=3 addr=127.0.0.1:58161 fd=8 name= \
  age=1408 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 \
  qbuf=26 qbuf-free=32742 obl=0 oll=0 omem=0 \
  events=r cmd=client

The client list command is slow to execute and may block Redis if run frequently when many clients are connected, so it is usually better to use info clients to obtain the maximum client buffer sizes.

  127.0.0.1:6379> info clients
  # Clients
  connected_clients:1
  client_recent_max_input_buffer:2
  client_recent_max_output_buffer:0
  blocked_clients:0
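If you do need to track down the offending connections, one rough approach is to sort client list output by its omem field (the output-buffer memory attributed to each client). This sketch assumes redis-cli can reach the instance locally, and should be run sparingly because client list itself is expensive:

  # Print the five clients with the largest output buffers (omem, in bytes)
  redis-cli client list | \
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^omem=/) { sub(/^omem=/, "", $i); print $i, $0 } }' | \
    sort -rn | head -5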

The replication backlog buffer, introduced in Redis 2.8, is a reusable fixed-size buffer used to implement partial resynchronization. Its size is controlled by the repl-backlog-size parameter, which defaults to 1MB. A master node has only one replication backlog buffer, shared by all of its replicas. Setting it to a larger size, such as 100MB, can effectively avoid unnecessary full resynchronizations. For more on the replication backlog buffer, see my earlier article explaining the Redis replication process in detail.
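For example, the backlog can be enlarged on the master in redis.conf; the 100MB figure is just the example size mentioned above, and should really be sized from the write traffic and the longest disconnection you want to survive without a full resync:

  repl-backlog-size 100mb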

AOF rewrite buffer: this space holds the write commands that arrive while an AOF rewrite is in progress. Its size cannot be controlled by the user; it depends on how long the rewrite takes and how many write commands arrive during that time, but it is usually small. For more on AOF persistence, see my earlier article explaining Redis AOF persistence.

Redis memory fragmentation

Redis's default memory allocator is jemalloc; glibc's malloc and tcmalloc are also available. To manage and reuse memory efficiently, allocators typically hand out memory in fixed-size blocks drawn from predefined size ranges. The specific allocation strategy will be explained in a later article; a normal fragmentation ratio for Redis is around 1.03 (why it is this particular value will also be covered there). However, when the stored values vary greatly in length, the following scenarios easily produce high memory fragmentation:

    Frequent update operations, such as repeatedly running append, setrange and similar commands on existing keys.

    Deleting a large number of expired keys: when expired key objects are removed, the freed space cannot always be reused, driving up the fragmentation ratio.

We will cover jemalloc in detail in a follow-up article, since many frameworks rely on similar memory allocators, Netty among them.
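A quick way to keep an eye on fragmentation in the meantime is to watch the ratio directly, for example from cron or a monitoring agent; this is a sketch, so adjust the host and port as needed:

  redis-cli -h 127.0.0.1 -p 6379 info memory | grep mem_fragmentation_ratio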

Child process memory consumption

Child process memory consumption mainly refers to the memory used by the child processes Redis creates for AOF rewrite or RDB save. Redis produces the child with a fork, so the child has the same memory image as the parent; in theory, completing the operation would require doubling the physical memory. But Linux uses copy-on-write: parent and child share the same physical memory pages, and only when the parent handles a write request does it copy the affected page and apply the change to the copy, while the child keeps reading the memory snapshot taken at fork time.

In other words, only the page table is copied at fork time; a physical page is duplicated only when it is actually modified.

However, Linux kernel 2.6.38 added the Transparent Huge Pages (THP) mechanism, which, simply put, makes pages larger: a page that was originally 4KB becomes 2MB once THP is enabled. This speeds up fork (there are fewer page-table entries to copy), but it also raises the unit of copy-on-write from 4KB to 2MB. If the parent process handles many write commands, the amount of memory copied grows: modifying any part of a page copies the whole, now much larger, page, which can lead to excessive memory consumption. For example, compare the memory consumed during AOF rewrite in the following two log lines:

  // THP enabled
  C * AOF rewrite: 1039 MB of memory used by copy-on-write
  // THP disabled
  C * AOF rewrite: 9MB of memory used by copy-on-write

Both logs come from the same Redis process, with a total used_memory of 1.5GB and a write-command volume of roughly 200 per second while the child process was running. With THP on versus off, the child's memory consumption differs enormously. So if THP is enabled under a heavy concurrent write load, the child's memory consumption can be several times that of the parent, potentially exhausting the machine's physical memory.

So a Redis child process does not necessarily consume as much memory as its parent; the actual amount depends on the write traffic during the rewrite or save, but you still need to reserve some memory headroom to prevent overflow. It is also recommended to disable THP at the system level to prevent excessive copy-on-write memory consumption. This is not specific to Redis: machines running MySQL, for example, generally have THP disabled as well.
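On most Linux distributions THP can be checked and disabled at runtime as shown below. This is a sketch: paths can differ slightly between distributions, and the change must also be made persistent (for example in rc.local or a systemd unit) to survive a reboot.

  # Show the current setting; the value in brackets is the active one
  cat /sys/kernel/mm/transparent_hugepage/enabled
  # Disable THP for processes started from now on (run as root)
  echo never > /sys/kernel/mm/transparent_hugepage/enabled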

This article was originally published on my WeChat official account; you are also welcome to visit my blog.

References

  • https://www.datadoghq.com/pdf/Understanding-the-Top-5-Redis-Performance-Metrics.pdf

  • Redis Administration https://redis.io/topics/admin
