An In-Depth Look at Redis Expired Keys and Memory Eviction Policies
The content below is organized and summarized against Redis 6.2.6.
1. How Redis Organizes Its Databases
The Redis server keeps all of its databases in the db array of the redisServer structure defined in src/server.h. Each entry of the db array is a redisDb structure (also defined in src/server.h), and each redisDb represents one database. Redis has 16 databases by default.
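The number of databases comes from the "databases" directive in redis.conf, and a client switches between them with SELECT. A minimal illustration against a default 16-database server (the indexes used here are arbitrary):

127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> select 15
OK
127.0.0.1:6379[15]> select 16
(error) ERR DB index is out of range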
1.1 Definition of the redisServer structure
struct redisServer {
    /* General */
    pid_t pid;                  /* Main process pid. */
    pthread_t main_thread_id;   /* Main thread id */
    ...
    redisDb *db;                /* The array of databases */
    ...
    int dbnum;                  /* Number of Redis databases */
    ...
};
1.2 Definition of the redisDb structure
typedef struct redisDb {
    dict *dict;                 /* The keyspace for this DB */ // holds all key-value pairs of this database
    dict *expires;              /* Timeout of keys with a timeout set */
    dict *blocking_keys;        /* Keys with clients waiting for data (BLPOP)*/
    dict *ready_keys;           /* Blocked keys that received a PUSH */
    dict *watched_keys;         /* WATCHED keys for MULTI/EXEC CAS */
    int id;                     /* Database ID */
    long long avg_ttl;          /* Average TTL, just for stats */
    unsigned long expires_cursor; /* Cursor of the active expire cycle. */
    list *defrag_later;         /* List of key names to attempt to defrag
                                   one by one, gradually. */
} redisDb;
What each field means:
- dict holds all of the key-value pairs in the database and is therefore also called the keyspace. Its keys are the database's keys, each of which is a string object; its values are the database's values, each of which can be any one of the five Redis object types.
- expires stores the expiration time of every key that has one, so it is also called the expires dictionary. Each key of this dictionary is a pointer to a key object in the keyspace; each value is a long long unix timestamp in milliseconds.
- blocking_keys is rarely used; only blocking commands such as BLPOP and BRPOP place clients into this kind of active blocking.
- ready_keys works together with blocking_keys. For example, when one client is blocked on BLPOP waiting for data and another client pushes to that key, Redis checks whether the key exists in blocking_keys; if it does, the key is moved into ready_keys and the blocked client receives the data (see the sketch after this list).
- watched_keys implements the WATCH command for MULTI/EXEC. It is rarely used in real production environments because it can hurt Redis performance.
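A quick sketch of the blocking_keys / ready_keys interplay, using two clients (the key and values below are made up for illustration):

// client A blocks, waiting for data on the list "tasks"
127.0.0.1:6379> blpop tasks 0

// client B pushes an element onto "tasks"
127.0.0.1:6379> rpush tasks job1
(integer) 1

// client A is unblocked and receives the element
1) "tasks"
2) "job1"
(3.52s)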
1.3 Initialization of redisDb
// src/server.c
void initServer(void) {
    int j;
    // ...
    server.db = zmalloc(sizeof(redisDb)*server.dbnum);
    // ...
    /* Create the Redis databases, and initialize other internal state. */
    for (j = 0; j < server.dbnum; j++) {
        server.db[j].dict = dictCreate(&dbDictType,NULL);
        server.db[j].expires = dictCreate(&dbExpiresDictType,NULL);
        server.db[j].expires_cursor = 0;
        server.db[j].blocking_keys = dictCreate(&keylistDictType,NULL);
        server.db[j].ready_keys = dictCreate(&objectKeyPointerValueDictType,NULL);
        server.db[j].watched_keys = dictCreate(&keylistDictType,NULL);
        server.db[j].id = j;
        server.db[j].avg_ttl = 0;
        server.db[j].defrag_later = listCreate();
        listSetFreeMethod(server.db[j].defrag_later,(void (*)(void*))sdsfree);
    }
    // ...
}
2. Expired Keys
2.1 Setting a key's expiration time
Redis clients can set a key's time to live (TTL) with the EXPIRE or PEXPIRE command; after the given number of seconds or milliseconds has elapsed, the Redis server automatically deletes the key once its remaining lifetime reaches 0. The TTL command returns a key's remaining time to live in seconds, and PTTL returns it in milliseconds.
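For example (the key name and the remaining times shown are illustrative):

127.0.0.1:6379> set msg "hello world"
OK
127.0.0.1:6379> expire msg 100
(integer) 1
127.0.0.1:6379> ttl msg // remaining TTL in seconds
(integer) 97
127.0.0.1:6379> pttl msg // remaining TTL in milliseconds
(integer) 96482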
You can also set a key and its expiration time in one step with SETEX:
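A minimal example (the key, value, and TTL are arbitrary):

127.0.0.1:6379> setex token 60 "abc123" // set the key and a 60-second TTL in one command
OK
127.0.0.1:6379> ttl token
(integer) 58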
If a key has never had an expiration time set, or its expiration was removed with the PERSIST command, then TTL returns -1 for that key.
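For example (the key name is arbitrary; note that TTL returns -2 for a key that does not exist at all):

127.0.0.1:6379> set counter 10
OK
127.0.0.1:6379> ttl counter // no expiration has been set
(integer) -1
127.0.0.1:6379> expire counter 300
(integer) 1
127.0.0.1:6379> persist counter // remove the expiration again
(integer) 1
127.0.0.1:6379> ttl counter
(integer) -1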
2.2 Determining whether a key has expired
As we saw when examining the redisDb structure at the beginning, the expires dictionary in redisDb stores the expiration times of all keys in the database. To decide whether a given key has expired:
- Check whether the key is present in the expires dictionary; if it is, fetch its expiration time.
- Compare that time with the current unix timestamp: if it is smaller than the current timestamp, the key has expired; otherwise it has not.
2.3 Deletion strategies for expired keys
Lazy deletion: leave expired keys alone, but every time a key is read from the keyspace, first check whether it has expired; if it has, delete it, otherwise return it as usual.
Pro: friendly to the CPU. Con: unfriendly to memory — a key that has already expired but is never accessed again will never be deleted.
Periodic deletion: at regular intervals, scan the databases and delete the expired keys in them. How many databases to scan and how many expired keys to delete is decided by the algorithm.
The Redis server uses a combination of these two strategies, which strikes a good balance between CPU usage and memory usage.
2.3.1 Implementation of lazy deletion
Lazy deletion is implemented by the expireIfNeeded function. Before executing a read or write command, Redis calls expireIfNeeded to check the key: if the key has expired, expireIfNeeded deletes the key-value pair; if not, it does nothing.
// src/db.c
int expireIfNeeded(redisDb *db, robj *key) {
    // If the key has not expired, do nothing and return immediately.
    if (!keyIsExpired(db,key)) return 0;

    /* If we are running in the context of a slave, instead of
     * evicting the expired key from the database, we return ASAP:
     * the slave key expiration is controlled by the master that will
     * send us synthesized DEL operations for expired keys.
     *
     * Still we try to return the right information to the caller,
     * that is, 0 if we think the key should be still valid, 1 if
     * we think the key is expired at this time. */
    if (server.masterhost != NULL) return 1;

    /* If clients are paused, we keep the current dataset constant,
     * but return to the client what we believe is the right state. Typically,
     * at the end of the pause we will properly expire the key OR we will
     * have failed over and the new primary will send us the expire. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return 1;

    /* Delete the key */
    // Delete the expired key and propagate the deletion.
    deleteExpiredKeyAndPropagate(db,key);
    return 1;
}

/* Check if the key is expired. */
int keyIsExpired(redisDb *db, robj *key) {
    mstime_t when = getExpire(db,key);
    mstime_t now;

    // The key has no expiration time set.
    if (when < 0) return 0; /* No expire for this key */

    /* Don't expire anything while loading. It will be done later. */
    // While the server is loading, no expired-key deletion is performed.
    if (server.loading) return 0;

    // Determine the reference time 'now'.
    /* If we are in the context of a Lua script, we pretend that time is
     * blocked to when the Lua script started. This way a key can expire
     * only the first time it is accessed and not in the middle of the
     * script execution, making propagation to slaves / AOF consistent.
     * See issue #1525 on Github for more information. */
    if (server.lua_caller) {
        now = server.lua_time_snapshot;
    }
    /* If we are in the middle of a command execution, we still want to use
     * a reference time that does not change: in that case we just use the
     * cached time, that we update before each call in the call() function.
     * This way we avoid that commands such as RPOPLPUSH or similar, that
     * may re-open the same key multiple times, can invalidate an already
     * open object in a next call, if the next call will see the key expired,
     * while the first did not. */
    else if (server.fixed_time_expire > 0) {
        now = server.mstime;
    }
    /* For the other cases, we want to use the most fresh time we have. */
    else {
        now = mstime();
    }

    /* The key expired if the current (virtual or real) time is greater
     * than the expire time of the key. */
    // If 'now' is greater than the expiration time, the key has expired.
    return now > when;
}

/* Return the expire time of the specified key, or -1 if no expire
 * is associated with this key (i.e. the key is non volatile) */
// Look up the key's expiration time in the expires dictionary.
long long getExpire(redisDb *db, robj *key) {
    dictEntry *de;

    /* No expire? return ASAP */
    // dictSize = ht[0].used + ht[1].used for this dict.
    // If the key is not found in the expires dictionary, return -1 right away.
    if (dictSize(db->expires) == 0 ||
       (de = dictFind(db->expires,key->ptr)) == NULL) return -1;

    /* The entry was found in the expire dict, this means it should also
     * be present in the main dict (safety check). */
    serverAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);

    // Found: return the key's unix timestamp in milliseconds.
    return dictGetSignedIntegerVal(de);
}
2.3.2 Implementation of periodic deletion
Periodic (active) deletion is implemented by the activeExpireCycle function in src/expire.c.
// src/expire.c
#define ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP 20 /* Keys for each DB loop. */ // sample 20 keys per database per loop by default
#define ACTIVE_EXPIRE_CYCLE_FAST_DURATION 1000 /* Microseconds. */ // a fast cycle may run for at most 1000 microseconds (1 ms)
#define ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC 25 /* Max % of CPU to use. */ // use at most 25% of the CPU time
#define ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE 10 /* % of stale keys after which we do extra efforts. */

void activeExpireCycle(int type) {
    /* Adjust the running parameters according to the configured expire
     * effort. The default effort is 1, and the maximum configurable effort
     * is 10. */
    unsigned long
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort,
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort,
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort,
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort;

    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Next DB to test. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */

    int j, iteration = 0;
    int dbs_per_call = CRON_DBS_PER_CALL; // inspect up to 16 databases per call by default
    long long start = ustime(), timelimit, elapsed;

    /* When clients are paused the dataset should be static not just from the
     * POV of clients not being able to write, but also from the POV of
     * expires and evictions of keys not being performed. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit, unless the percentage of estimated stale keys is
         * too high. Also never repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        if (!timelimit_exit &&
            server.stat_expired_stale_perc < config_cycle_acceptable_stale)
            return;

        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
            return;

        last_fast_cycle = start;
    }

    /* We usually should test CRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU
     * time per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function. */
    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = config_cycle_fast_duration; /* in microseconds. */

    /* Accumulate some global stats as we expire keys, to have some idea
     * about the number of keys that are already logically expired, but still
     * existing inside the database. */
    long total_sampled = 0;
    long total_expired = 0;

    // Iterate over the databases.
    for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
        /* Expired and checked in a single loop. */
        unsigned long expired, sampled;

        // The database to process in this iteration.
        redisDb *db = server.db+(current_db % server.dbnum);

        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;

        /* Continue to expire if at the end of the cycle there are still
         * a big percentage of keys to expire, compared to the number of keys
         * we scanned. The percentage, stored in config_cycle_acceptable_stale
         * is not fixed, but depends on the Redis configured "expire effort". */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;
            iteration++;

            /* If there is nothing to expire try next DB ASAP. */
            // If this database's expires dictionary is empty, skip it.
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();

            /* When there are less than 1% filled slots, sampling the key
             * space is expensive, so stop here waiting for better times...
             * The dictionary will be resized asap. */
            if (slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;

            /* The main collection cycle. Sample random keys among keys
             * with an expire set, checking for expired ones. */
            expired = 0;
            sampled = 0;
            ttl_sum = 0;
            ttl_samples = 0;

            if (num > config_keys_per_loop)
                num = config_keys_per_loop;

            /* Here we access the low level representation of the hash table
             * for speed concerns: this makes this code coupled with dict.c,
             * but it hardly changed in ten years.
             *
             * Note that certain places of the hash table may be empty,
             * so we want also a stop condition about the number of
             * buckets that we scanned. However scanning for free buckets
             * is very fast: we are in the cache line scanning a sequential
             * array of NULL pointers, so we can scan a lot more buckets
             * than keys in the same time. */
            long max_buckets = num*20;
            long checked_buckets = 0;

            while (sampled < num && checked_buckets < max_buckets) {
                for (int table = 0; table < 2; table++) {
                    if (table == 1 && !dictIsRehashing(db->expires)) break;

                    unsigned long idx = db->expires_cursor;
                    idx &= db->expires->ht[table].sizemask;
                    dictEntry *de = db->expires->ht[table].table[idx];
                    long long ttl;

                    /* Scan the current bucket of the current table. */
                    checked_buckets++;
                    while(de) {
                        /* Get the next entry now since this entry may get
                         * deleted. */
                        dictEntry *e = de;
                        de = de->next;

                        ttl = dictGetSignedIntegerVal(e)-now;
                        if (activeExpireCycleTryExpire(db,e,now)) expired++;
                        if (ttl > 0) {
                            /* We want the average TTL of keys yet
                             * not expired. */
                            ttl_sum += ttl;
                            ttl_samples++;
                        }
                        sampled++;
                    }
                }
                db->expires_cursor++;
            }
            total_expired += expired;
            total_sampled += sampled;

            /* Update the average TTL stats for this database. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;

                /* Do a simple running average with a few samples.
                 * We just use the current estimate with a weight of 2%
                 * and the previous estimate with a weight of 98%. */
                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
            }

            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle. */
            if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
                elapsed = ustime()-start;
                if (elapsed > timelimit) {
                    timelimit_exit = 1;
                    server.stat_expired_time_cap_reached_count++;
                    break;
                }
            }
            /* We don't repeat the cycle for the current database if there are
             * an acceptable amount of stale keys (logically expired but yet
             * not reclaimed). */
        } while (sampled == 0 ||
                 (expired*100/sampled) > config_cycle_acceptable_stale);
    }

    elapsed = ustime()-start;
    server.stat_expire_cycle_time_used += elapsed;
    latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);

    /* Update our estimate of keys existing but yet to be expired.
     * Running average with this sample accounting for 5%. */
    double current_perc;
    if (total_sampled) {
        current_perc = (double)total_expired/total_sampled;
    } else
        current_perc = 0;
    server.stat_expired_stale_perc = (current_perc*0.05)+
                                     (server.stat_expired_stale_perc*0.95);
}
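To put the time limit into perspective: with the default configuration (active-expire-effort 1, hz 10), effort rescales to 0, so config_cycle_slow_time_perc stays at 25 and a slow cycle may spend at most 25 * 1,000,000 / 10 / 100 = 25,000 microseconds (25 ms) per call, sampling up to 20 keys per database per inner loop; a fast cycle is capped at config_cycle_fast_duration = 1,000 microseconds (1 ms).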
3. Redis Memory Eviction Policies
Why does Redis need memory eviction policies? Because Redis is an in-memory database, its memory cannot grow without bound: once the configured limit is reached, some of the data in memory has to be evicted to make room for new data.
Redis memory is limited by the maxmemory configuration parameter. A common rule of thumb is to set it to about half of the system's memory; for example, on a machine with 96 GB of RAM you would set it to 48 GB.
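As an illustration, maxmemory can be set in redis.conf or changed at runtime with CONFIG SET (the 48gb value simply follows the rule of thumb above):

# redis.conf
maxmemory 48gb

127.0.0.1:6379> config set maxmemory 48gb
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "51539607552"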
3.1 Eviction policies for keys with an expiration set
If your workload sets expiration times with EXPIRE, you can use one of the following policies (see the configuration example after this list):
- volatile-lru (Least Recently Used: evict the least recently used keys among those with an expiration set)
- volatile-lfu (Least Frequently Used: evict the least frequently used keys among those with an expiration set)
- volatile-ttl (Time To Live: evict the keys that are closest to expiring)
- volatile-random (evict random keys among those with an expiration set)
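The policy is chosen with the maxmemory-policy directive, either in redis.conf or at runtime (volatile-lru here is just an example value):

# redis.conf
maxmemory-policy volatile-lru

127.0.0.1:6379> config set maxmemory-policy volatile-lru
OK
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "volatile-lru"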
3.2 Eviction policies that consider all keys
- allkeys-lru
- allkeys-lfu
- allkeys-random
3.3 The no-eviction policy
Redis has one more policy, noeviction (the default), which forbids eviction: once Redis reaches the configured maximum memory, subsequent write operations against Redis fail.
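A sketch of what a client sees once the limit is hit under noeviction (the key and value are arbitrary; the exact error text may differ slightly between versions):

127.0.0.1:6379> set newkey "value"
(error) OOM command not allowed when used memory > 'maxmemory'.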
4. Adding, Updating, Reading, and Deleting Key-Value Pairs
4.1 Adding key-value pairs
Example: run the following commands, one after another, against an empty Redis database:
127.0.0.1:6379[1]> keys *
(empty array) // the database currently contains no data
127.0.0.1:6379[1]> set msg "hello world"
OK
127.0.0.1:6379[1]>
127.0.0.1:6379[1]> hmset student name panda age 20 addr beijing
OK
127.0.0.1:6379[1]>
127.0.0.1:6379[1]> rpush teacher Darren Mark King
(integer) 3
127.0.0.1:6379[1]>
4.2 Updating key-value pairs
127.0.0.1:6379[1]> set msg "redis"
OK
127.0.0.1:6379[1]> get msg
"redis"
127.0.0.1:6379[1]> hset student sex male
(integer) 1
127.0.0.1:6379[1]>
4.3 Reading key values
127.0.0.1:6379[1]> get msg
"redis"
127.0.0.1:6379[1]> hmget student name age addr sex
1) "panda"
2) "20"
3) "beijing"
4) "male"
127.0.0.1:6379[1]>
4.4 Deleting key-value pairs
127.0.0.1:6379[1]> keys *
1) "msg"
2) "student"
3) "teacher"
127.0.0.1:6379[1]> del student
(integer) 1
127.0.0.1:6379[1]> keys *
1) "msg"
2) "teacher"
127.0.0.1:6379[1]>
This concludes this deep dive into Redis expired keys and memory eviction policies.