
How Redis implements its key expiration strategies

Updated: 2024-09-26 09:23:31   Author: smileNicky
The key expiration mechanism is an important part of keeping Redis available. The expiration strategy has two halves: lazy expiration, which checks whether a key has expired each time that key is accessed, and periodic expiration, where the serverCron job regularly cleans up expired keys. This article looks at both in detail.

Why do we need an expiration strategy?

Redis is an in-memory database: its data lives in RAM, and RAM is finite, so we need to cap how much memory Redis may use. This is done mainly through the maxmemory directive:

maxmemory <bytes>  # maximum memory Redis may use

The official documentation (https://redis.io/docs/manual/eviction/) explains:

For example, to configure a memory limit of 100 megabytes, you can use the following directive inside the redis.conf file:

maxmemory 100mb

Setting maxmemory to zero results into no memory limits. This is the default behavior for 64 bit systems, while 32 bit systems use an implicit memory limit of 3GB.

In short: with maxmemory set to 0 there is no explicit limit, so on a 64-bit system Redis can use all of the machine's memory, while 32-bit systems get an implicit 3GB limit.

If we never evicted expired keys, the stale data would pile up until it filled memory, and once memory is full no new data can be written. That is why Redis needs a key expiration mechanism: it deletes expired data and keeps the server available.

What is the Redis key expiration strategy?

Redis has a well-known feature: any key can be given an expiration time, and once that time is reached the key is removed from Redis. The key expiration strategy is simply how Redis removes this expired data.

Lazy expiration

Lazy expiration means Redis checks whether a key has expired only when that key is accessed, and deletes it then if it has. The check is implemented by the expireIfNeeded function in db.c; it is invoked from the command paths when a key is touched, and no expiry check happens otherwise.

Here is the expireIfNeeded function from db.c, based on the Redis 6.0 source:

int expireIfNeeded(redisDb *db, robj *key) {
    if (!keyIsExpired(db,key)) return 0;

    /* If we are running in the context of a slave, instead of
     * evicting the expired key from the database, we return ASAP:
     * the slave key expiration is controlled by the master that will
     * send us synthesized DEL operations for expired keys.
     *
     * Still we try to return the right information to the caller,
     * that is, 0 if we think the key should be still valid, 1 if
     * we think the key is expired at this time. */
    // If masterhost is configured we are a replica, so don't delete the key ourselves
    if (server.masterhost != NULL) return 1;

    /* Delete the key */
    server.stat_expiredkeys++;
    propagateExpire(db,key,server.lazyfree_lazy_expire);
    notifyKeyspaceEvent(NOTIFY_EXPIRED,
        "expired",key,db->id);
    // If lazyfree_lazy_expire is enabled, delete asynchronously, otherwise synchronously; added in 4.0, disabled by default
    int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                               dbSyncDelete(db,key);
    if (retval) signalModifiedKey(NULL,db,key);
    return retval;
}

The lazy strategy is cheap on CPU: the expiry check runs only when a key is accessed, so there is no background cost at all. But if an expired key is never accessed again, it just sits in memory and is never reclaimed, which can waste a lot of space. So a second strategy is needed to cover that case: the periodic expiration strategy described next.

Periodic expiration

Redis also cleans up expired keys on a schedule. In server.c there is a serverCron function which, besides incremental rehashing, performs many other housekeeping tasks, for example:

  • Cleaning up expired key-value pairs in the databases
  • Closing and cleaning up clients whose connections have failed
  • Attempting persistence operations
  • Updating the server's statistics (time, memory usage, per-database usage, and so on)

How often Redis runs this expired-key cleanup is controlled by the hz setting in redis.conf.

(figure: the hz setting in redis.conf)

So what does the flow actually look like? Roughly this:

  • serverCron performs the periodic cleanup; the frequency comes from the hz parameter in redis.conf, which defaults to 10, i.e. 10 runs per second, one every 100ms

  • Each run scans only keys that have an expiration time set; it never scans the full keyspace

  • Keys are scanned bucket by bucket, until 20 keys (configurable) have been sampled or at most 400 buckets have been checked. Say the first bucket holds 15 keys: the 20-key target is not reached, so scanning continues into the second bucket. Because scanning works at bucket granularity, a second bucket holding 20 keys is scanned in full, for a total of 35 sampled keys

  • Any expired keys among the sampled ones are deleted

  • If the ratio of expired keys to total sampled keys exceeds 10%, steps 3 and 4 are repeated; otherwise the cleanup round is done.

The process as a flowchart:

(figure: flowchart of the periodic expiration cycle)

Let's verify this against the source. In server.c, serverCron calls the databasesCron function:

/* Handle background operations on Redis databases. */
databasesCron();

In the same file, look at the databasesCron function:

void databasesCron(void) {
    /* Expire keys by random sampling. Not required for slaves
     * as master will synthesize DELs for us. */
    if (server.active_expire_enabled) {
        if (iAmMaster()) { // are we the master?
            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
        } else { // replica
            expireSlaveKeys();
        }
    }

    /* Defrag keys gradually. */
    activeDefragCycle();

    /* Perform hash tables rehashing if needed, but only if there are no
     * other processes saving the DB on disk. Otherwise rehashing is bad
     * as will cause a lot of copy-on-write of memory pages. */
    if (!hasActiveChildProcess()) {
        /* We use global counters so if we stop the computation at a given
         * DB we'll be able to start from the successive in the next
         * cron loop iteration. */
        static unsigned int resize_db = 0;
        static unsigned int rehash_db = 0;
        int dbs_per_call = CRON_DBS_PER_CALL;
        int j;

        /* Don't test more DBs than we have. */
        if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;

        /* Resize */
        for (j = 0; j < dbs_per_call; j++) {
            tryResizeHashTables(resize_db % server.dbnum);
            resize_db++;
        }

        /* Rehash */
        if (server.activerehashing) {
            for (j = 0; j < dbs_per_call; j++) {
                int work_done = incrementallyRehash(rehash_db);
                if (work_done) {
                    /* If the function did some work, stop here, we'll do
                     * more at the next cron loop. */
                    break;
                } else {
                    /* If this db didn't need rehash, we'll try the next one. */
                    rehash_db++;
                    rehash_db %= server.dbnum;
                }
            }
        }
    }
}

Now look at the activeExpireCycle function, in expire.c:

void activeExpireCycle(int type) {
    /* Adjust the running parameters according to the configured expire
     * effort. The default effort is 1, and the maximum configurable effort
     * is 10. */
    unsigned long
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort,
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort,
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort,
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort;

    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Last DB tested. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */

    int j, iteration = 0;
    int dbs_per_call = CRON_DBS_PER_CALL;
    long long start = ustime(), timelimit, elapsed;

    /* When clients are paused the dataset should be static not just from the
     * POV of clients not being able to write, but also from the POV of
     * expires and evictions of keys not being performed. */
    if (clientsArePaused()) return;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit, unless the percentage of estimated stale keys is
         * too high. Also never repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        if (!timelimit_exit &&
            server.stat_expired_stale_perc < config_cycle_acceptable_stale)
            return;

        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
            return;

        last_fast_cycle = start;
    }

    /* We usually should test CRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU
     * time per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function. */
    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = config_cycle_fast_duration; /* in microseconds. */

    /* Accumulate some global stats as we expire keys, to have some idea
     * about the number of keys that are already logically expired, but still
     * existing inside the database. */
    long total_sampled = 0;
    long total_expired = 0;

    for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
        /* Expired and checked in a single loop. */
        unsigned long expired, sampled;

        redisDb *db = server.db+(current_db % server.dbnum);

        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;

        /* Continue to expire if at the end of the cycle there are still
         * a big percentage of keys to expire, compared to the number of keys
         * we scanned. The percentage, stored in config_cycle_acceptable_stale
         * is not fixed, but depends on the Redis configured "expire effort". */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;
            iteration++;

            /* If there is nothing to expire try next DB ASAP. */
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();

            /* When there are less than 1% filled slots, sampling the key
             * space is expensive, so stop here waiting for better times...
             * The dictionary will be resized asap. */
            if (num && slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;

            /* The main collection cycle. Sample random keys among keys
             * with an expire set, checking for expired ones. */
            expired = 0;
            sampled = 0;
            ttl_sum = 0;
            ttl_samples = 0;
            // sample at most 20 keys (config_keys_per_loop)
            if (num > config_keys_per_loop)
                num = config_keys_per_loop;

            /* Here we access the low level representation of the hash table
             * for speed concerns: this makes this code coupled with dict.c,
             * but it hardly changed in ten years.
             *
             * Note that certain places of the hash table may be empty,
             * so we want also a stop condition about the number of
             * buckets that we scanned. However scanning for free buckets
             * is very fast: we are in the cache line scanning a sequential
             * array of NULL pointers, so we can scan a lot more buckets
             * than keys in the same time. */
            long max_buckets = num*20;
            long checked_buckets = 0;
            // loop until 20 keys have been sampled, or 400 (num*20) buckets have been checked
            while (sampled < num && checked_buckets < max_buckets) {
                for (int table = 0; table < 2; table++) {
                    if (table == 1 && !dictIsRehashing(db->expires)) break;

                    unsigned long idx = db->expires_cursor;
                    idx &= db->expires->ht[table].sizemask;
                    // fetch the hash bucket at this index
                    dictEntry *de = db->expires->ht[table].table[idx];
                    long long ttl;

                    /* Scan the current bucket of the current table. */
                    checked_buckets++;
                    // walk the keys in this bucket
                    while(de) {
                        /* Get the next entry now since this entry may get
                         * deleted. */
                        dictEntry *e = de;
                        de = de->next;

                        ttl = dictGetSignedIntegerVal(e)-now;
                        if (activeExpireCycleTryExpire(db,e,now)) expired++;
                        if (ttl > 0) {
                            /* We want the average TTL of keys yet
                             * not expired. */
                            ttl_sum += ttl;
                            ttl_samples++;
                        }
                        sampled++;
                    }
                }
                db->expires_cursor++;
            }
            total_expired += expired;
            total_sampled += sampled;

            /* Update the average TTL stats for this database. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;

                /* Do a simple running average with a few samples.
                 * We just use the current estimate with a weight of 2%
                 * and the previous estimate with a weight of 98%. */
                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
            }

            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle. */
            if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
                elapsed = ustime()-start;
                if (elapsed > timelimit) {
                    timelimit_exit = 1;
                    server.stat_expired_time_cap_reached_count++;
                    break;
                }
            }
            /* We don't repeat the cycle for the current database if there are
             * an acceptable amount of stale keys (logically expired but yet
             * not reclaimed). */
             // repeat while expired keys exceed 10% of the sample; expired = expired key count, sampled = total keys scanned
        } while (sampled == 0 ||
                 (expired*100/sampled) > config_cycle_acceptable_stale);
    }

    elapsed = ustime()-start;
    server.stat_expire_cycle_time_used += elapsed;
    latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);

    /* Update our estimate of keys existing but yet to be expired.
     * Running average with this sample accounting for 5%. */
    double current_perc;
    if (total_sampled) {
        current_perc = (double)total_expired/total_sampled;
    } else
        current_perc = 0;
    server.stat_expired_stale_perc = (current_perc*0.05)+
                                     (server.stat_expired_stale_perc*0.95);
}

That concludes this look at how Redis implements its key expiration strategies.
