
An Annotated Walkthrough of train.py in YOLOv5, with Usage Notes

 Updated: 2022-09-09 08:57:45   Author: Charms@
train.py adds many extra features that make it look complicated overall, but its core is simply reading the dataset, loading the model, and computing the training loss. This article walks through train.py in yolov5 with detailed code comments and usage notes; readers who need it can use it as a reference.

Preface

I have recently been using yolov5 in a competition. yolov5 is full of tricks, and using it only to compete would honestly be a waste, so it is worth studying properly. Before diving in, my respects to the yolov5 authors; the version I use is v6. Every time I read work by experts like these I feel a little humbled, there is so much left to learn.

1. The parse_opt function

def parse_opt(known=False):
    """
    argparse usage:
    parse = argparse.ArgumentParser()
    parse.add_argument('--s', type=int, default=2, help='flag_int')
    """
    parser = argparse.ArgumentParser()
    # weights: path to the weights, e.g. ./weights/yolov5s.pt
    # yolov5 provides pretrained weights at several depths and widths; download whichever fits your needs
    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
    # cfg: model configuration file (network structure): anchors/backbone/num_classes/head; you create this yourself when training on your own dataset
    # e.g. my yolov5s_mchar.yaml: copy one of the .yaml files under ./models/ as needed; the five files differ in model depth and width, in increasing order
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    # data: dataset configuration file (paths): train/val/labels; you create this file yourself
    # e.g. my data/mchar.yaml: training- and validation-set paths + number of classes + class names
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    # hyp: hyperparameter file (lr/sgd/mixup); ./data/hyps/ contains five presets whose initial values differ slightly, pick one to suit your needs
    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
    # epochs: number of training epochs, 300 by default
    parser.add_argument('--epochs', type=int, default=300)
    # batch-size: training batch size, 16 by default
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
    # imgsz: input image size, 640x640 by default
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    # rect: rectangular training, False by default
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    # resume: resume the most recent training run
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    # nosave: do not save intermediate models; False by default (models are saved). Two checkpoints are written to ./runs/train/exp*/weights/:
    # best.pt (best so far) and last.pt (most recent epoch). Adding --nosave is not recommended.
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    # noval: validate only after the final epoch; if unset, mAP is computed every epoch (leaving it unset is recommended)
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    # noautoanchor: disable AutoAnchor; False by default, i.e. anchors are adjusted automatically
    parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
    # evolve: hyperparameter evolution, tuning via a genetic algorithm
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    # bucket: Google Cloud Storage bucket (gsutil); rarely needed
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    # cache: cache images in RAM (or on disk) ahead of time to speed up training, off by default
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    # image-weights: weighted image sampling during training, off by default
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    # device: device selection
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    # multi-scale: multi-scale training
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    # single-cls: train multi-class data as a single class; False by default
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    # optimizer: optimizer selection; three optimizers are provided
    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
    # sync-bn: cross-GPU synchronized BatchNorm, used in DDP mode
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    # workers: maximum number of dataloader workers
    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
    # project: save directory, ./runs/train by default
    parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
    # name: experiment name
    parser.add_argument('--name', default='exp', help='save to project/name')
    # exist-ok: allow reusing an existing project/name directory instead of incrementing the run number; off by default
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    # cos-lr: cosine learning-rate scheduler
    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
    # label-smoothing: off by default; set it according to your labels, and keep it small, e.g. 0.1 or 0.05
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    # patience: EarlyStopping patience; training stops after 100 epochs without improvement
    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
    # freeze: freeze layers, e.g. default=[0]; with a large dataset, setting this is not recommended
    parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
    # save-period: save a checkpoint every x epochs
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    # local_rank: process rank, used for multi-GPU (DDP) training
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')

    # Weights & Biases arguments
    # Online visualization tool, similar to TensorBoard; see https://zhuanlan.zhihu.com/p/266337608 for an introduction
    parser.add_argument('--entity', default=None, help='W&B: Entity')
    # upload_dataset: upload the dataset to a W&B table (view, query, filter and analyze it as an interactive dsviz table in the browser); False by default
    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
    # bbox_interval: set the bounding-box image logging interval for W&B; -1 by default, i.e. opt.epochs // 10
    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
    # artifact_alias: version of the dataset artifact to use
    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')

    # parse_args() vs parse_known_args():
    # parse_known_args() does not error out on arguments that were never added to the parser;
    # it returns a (known_args, unknown_args) pair instead, which is what the known flag selects
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt
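
The known flag matters when train.py is driven programmatically (see the run function in section 4), where extra keyword arguments must not crash the parser. A minimal standalone sketch of the difference between the two calls:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--s', type=int, default=2, help='flag_int')

# parse_known_args() tolerates unknown flags and returns them separately
known, unknown = parser.parse_known_args(['--s', '3', '--extra', 'x'])
print(known.s, unknown)  # 3 ['--extra', 'x']

# parser.parse_args(['--s', '3', '--extra', 'x']) would instead exit with
# "error: unrecognized arguments: --extra x"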

2. The main function

2.1 main: print arguments / environment check

def main(opt, callbacks=Callbacks()):
    ############################################### 1. Checks ##################################################
    if RANK in [-1, 0]:
        # Print all training arguments, in color
        print_args(FILE.stem, opt)
        # Check whether the code is up to date with the repository
        check_git_status()
        # Check that everything in requirements.txt is installed, and install anything that is missing.
        # If packages are missing, consider: pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
        check_requirements(exclude=['thop'])

2.2 main: resuming from a checkpoint

############################################### 2. Resume ##################################################
    # Initialize the wandb visualization tool; for a wandb tutorial see https://zhuanlan.zhihu.com/p/266337608
    # For a tutorial on resuming an interrupted run see: https://blog.csdn.net/CharmsLUO/article/details/123410081
    if opt.resume and not check_wandb_resume(opt) and not opt.evolve:  # resume an interrupted run
        # isinstance() checks whether a value is of a given type
        # If resume is True, get_latest_run() finds the most recent last.pt under the runs/ directory
        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path
        # make sure it is a file
        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
        # The opt arguments are replaced by those saved with last.pt; yaml safe_load() reads the file
        with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f:
            # an argparse.Namespace behaves much like a dictionary
            opt = argparse.Namespace(**yaml.safe_load(f))  # replace
        opt.cfg, opt.weights, opt.resume = '', ckpt, True  # reinstate
        # log the resume information
        LOGGER.info(f'Resuming training from {ckpt}')
    else:
        # Not resuming: load the arguments passed on the command line
        opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \
            check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project)  # checks
        assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
        # opt.evolve=False -> opt.name='exp';  opt.evolve=True -> opt.name='evolve'
        if opt.evolve:
            if opt.project == str(ROOT / 'runs/train'):  # if default project name, rename to runs/evolve
                opt.project = str(ROOT / 'runs/evolve')
            opt.exist_ok, opt.resume = opt.resume, False  # pass resume to exist_ok and disable resume
        # build the save directory
        opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
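
The resume branch rebuilds opt entirely from the opt.yaml that the previous run saved next to its weights. A minimal sketch of that yaml -> Namespace round trip, using a made-up two-key config:

import argparse
import yaml

saved = yaml.safe_load('weights: yolov5s.pt\nepochs: 300\n')  # dict, as read from opt.yaml
opt = argparse.Namespace(**saved)  # dict keys become attributes
print(opt.weights, opt.epochs)  # yolov5s.pt 300
print(vars(opt))  # back to a dict, which is how opt.yaml gets written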

2.3 main: DDP / distributed training

# ############################################## 3.DDP mode ###############################################
    # select the device: cpu / cuda
    device = select_device(opt.device, batch_size=opt.batch_size)
    # multi-GPU training
    if LOCAL_RANK != -1:
        msg = 'is not compatible with YOLOv5 Multi-GPU DDP training'
        assert not opt.image_weights, f'--image-weights {msg}'
        assert not opt.evolve, f'--evolve {msg}'
        assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size'
        assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE'
        assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
        # Select the device by rank.
        # torch.cuda.set_device() is a convenient way to place the model and data on the right GPU;
        # add a single line before defining the model:
        # torch.cuda.set_device(gpu_id)  # single GPU
        # torch.cuda.set_device('cuda:' + str(gpu_ids))  # specific GPUs
        torch.cuda.set_device(LOCAL_RANK)
        device = torch.device('cuda', LOCAL_RANK)
        # initialize the process group
        dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")
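
LOCAL_RANK, RANK and WORLD_SIZE are environment variables set by the PyTorch launcher, so this branch only runs when training is started through torch.distributed; a typical two-GPU launch (the arguments here are only illustrative) looks like:

python -m torch.distributed.run --nproc_per_node 2 train.py --data coco128.yaml --weights yolov5s.pt --device 0,1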

2.4 main: hyperparameter evolution / genetic-algorithm tuning

################################################ 4. Train #################################################
    # If evolve is not set, call train() directly
    if not opt.evolve:
        train(opt.hyp, opt, device, callbacks)
        # Distributed training: WORLD_SIZE = number of processes
        # if multi-GPU training was used, destroy the process group
        if WORLD_SIZE > 1 and RANK == 0:
            LOGGER.info('Destroying process group... ')
            dist.destroy_process_group()

    # Evolve hyperparameters (optional)
    # Genetic evolution algorithm: evolve the hyperparameters while training
    # For background on genetic algorithms, see my blog
    else:
        # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
                'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
                'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
                'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
                'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)
                'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum
                'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr
                'box': (1, 0.02, 0.2),  # box loss gain
                'cls': (1, 0.2, 4.0),  # cls loss gain
                'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
                'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
                'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
                'iou_t': (0, 0.1, 0.7),  # IoU training threshold
                'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
                'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)
                'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
                'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
                'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
                'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
                'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
                'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
                'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
                'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
                'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
                'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
                'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
                'mixup': (1, 0.0, 1.0),  # image mixup (probability)
                'copy_paste': (1, 0.0, 1.0)}  # segment copy-paste (probability)

        with open(opt.hyp, errors='ignore') as f:
            # load the yaml hyperparameters
            hyp = yaml.safe_load(f)  # load hyps dict
            if 'anchors' not in hyp:  # anchors commented in hyp.yaml
                hyp['anchors'] = 3
        opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir)  # only val/save final epoch
        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
        # files recording the evolved hyperparameters
        evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
        if opt.bucket:
            os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}')  # download evolve.csv if exists
        """
        遺傳算法調(diào)參:遵循適者生存、優(yōu)勝劣汰的法則,即尋優(yōu)過程中保留有用的,去除無用的。
        遺傳算法需要提前設置4個參數(shù): 群體大小/進化代數(shù)/交叉概率/變異概率

        """

        # evolve for 300 generations by default
        for _ in range(opt.evolve):  # generations to evolve
            if evolve_csv.exists():  # if evolve.csv exists: select best hyps and mutate
                # Select parent(s)
                parent = 'single'  # parent selection method: 'single' or 'weighted'
                # load the evolve.csv file
                x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)
                # number of previous generations to consider
                n = min(5, len(x))  # number of previous results to consider
                x = x[np.argsort(-fitness(x))][:n]  # top n mutations
                # compute hyp weights from the results
                w = fitness(x) - fitness(x).min() + 1E-6  # weights (sum > 0)
                # obtain the base hyp according to the selection method
                if parent == 'single' or len(x) == 1:
                    # x = x[random.randint(0, n - 1)]  # random selection
                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection
                elif parent == 'weighted':
                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination

                # Mutate
                # mutation initial values
                mp, s = 0.8, 0.2  # mutation probability, sigma
                npr = np.random
                npr.seed(int(time.time()))
                g = np.array([meta[k][0] for k in hyp.keys()])  # gains 0-1
                ng = len(meta)
                v = np.ones(ng)
                # apply the mutation
                while all(v == 1):  # mutate until a change occurs (prevent duplicates)
                    # add the mutation on top of the base hyp
                    # [i + 7]: the first 7 numbers in x are the results metrics (P, R, mAP, F1, test_loss=(box, obj, cls)); the hyperparameters follow
                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)
                    hyp[k] = float(x[i + 7] * v[i])  # mutate

            # Constrain to limits
            # clamp the hyperparameters to their allowed ranges
            for k, v in meta.items():
                hyp[k] = max(hyp[k], v[1])  # lower limit
                hyp[k] = min(hyp[k], v[2])  # upper limit
                hyp[k] = round(hyp[k], 5)  # significant digits

            # Train mutation
            # train with the mutated hyperparameters and evaluate the result
            results = train(hyp.copy(), opt, device, callbacks)
            callbacks = Callbacks()
            # Write mutation results
            # Append the results and the corresponding hyp to evolve.csv; each row is one generation.
            # The first seven numbers per row are (P, R, mAP, F1, test_losses(GIOU, obj, cls)), followed by the hyp.
            # The hyp is also saved to a yaml file.
            print_mutation(results, hyp.copy(), save_dir, opt.bucket)

        # Plot results
        # visualize the results / report where they were saved
        plot_evolve(evolve_csv)
        LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n'
                    f"Results saved to {colorstr('bold', save_dir)}\n"
                    f'Usage example: $ python train.py --hyp {evolve_yaml}')
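
The core of the mutation step is multiplying each hyperparameter by a random factor clipped to [0.3, 3.0], gated by the per-parameter gain in meta and the mutation probability mp. A standalone sketch of one generation's mutate-and-clamp step, using a toy two-entry meta table rather than the real one:

import numpy as np

hyp = {'lr0': 0.01, 'momentum': 0.937}  # toy base hyperparameters
meta = {'lr0': (1, 1e-5, 1e-1), 'momentum': (0.3, 0.6, 0.98)}  # (gain, lower, upper)

mp, s = 0.8, 0.2  # mutation probability, sigma
npr = np.random
g = np.array([meta[k][0] for k in hyp])  # gains 0-1
ng = len(meta)
v = np.ones(ng)
while all(v == 1):  # retry until at least one factor changes
    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
for (k, (_, low, high)), vi in zip(meta.items(), v):
    hyp[k] = round(min(max(hyp[k] * vi, low), high), 5)  # mutate, then clamp to limits
print(hyp)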

3. The train function

3.1 train: basic configuration

################################################ 1. Arguments / basic setup #############################################
    # unpack the arguments passed in via opt
    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze = \
        Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
        opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze

    # Directories
    w = save_dir / 'weights'  # weights dir
    # create the directories: weights / train / evolve
    (w.parent if evolve else w).mkdir(parents=True, exist_ok=True)  # make dir
    # directory for training results, e.g. runs/train/exp*/weights/last.pt
    last, best = w / 'last.pt', w / 'best.pt'

    # Hyperparameters; isinstance() checks whether hyp is still a path string
    if isinstance(hyp, str):
        with open(hyp, errors='ignore') as f:
            # load the yaml file
            hyp = yaml.safe_load(f)  # load hyps dict
    # print the hyperparameters in color
    LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))

    # Save run settings
    # if evolution is not used
    if not evolve:
        # safe_dump() serializes Python values to yaml
        with open(save_dir / 'hyp.yaml', 'w') as f:
            yaml.safe_dump(hyp, f, sort_keys=False)
        with open(save_dir / 'opt.yaml', 'w') as f:
            # vars(opt) converts the Namespace into a dictionary
            yaml.safe_dump(vars(opt), f, sort_keys=False)

    # Loggers
    data_dict = None
    if RANK in [-1, 0]:
        loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance
        if loggers.wandb:
            data_dict = loggers.wandb.data_dict
            if resume:
                weights, epochs, hyp, batch_size = opt.weights, opt.epochs, opt.hyp, opt.batch_size

        # Register actions
        for k in methods(loggers):
            callbacks.register_action(k, callback=getattr(loggers, k))

    # Config / plotting
    plots = not evolve  # create plots
    # GPU / CPU
    cuda = device.type != 'cpu'
    # random seeds
    init_seeds(1 + RANK)
    # child processes exist: distributed training
    with torch_distributed_zero_first(LOCAL_RANK):
        data_dict = data_dict or check_dataset(data)  # check if None
    # paths of the training and validation sets
    train_path, val_path = data_dict['train'], data_dict['val']
    # number of classes; single-class or not
    nc = 1 if single_cls else int(data_dict['nc'])  # number of classes
    # class names
    names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
    # check that the number of names matches nc
    assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}'  # check
    # whether the dataset is COCO (80 classes)
    is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt')  # COCO dataset
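
check_dataset(data) parses the yaml file passed via --data and returns data_dict, so the keys read here (train, val, nc, names) are exactly what that file must provide. A minimal sketch of such a file, with placeholder paths, in the spirit of the mchar.yaml mentioned earlier:

# data/mchar.yaml (illustrative)
train: ../datasets/mchar/images/train  # training images
val: ../datasets/mchar/images/val  # validation images
nc: 10  # number of classes
names: ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']  # class names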

3.2 train: model loading / resuming

################################################### 2. Model ###########################################
    # check that the weights file has a .pt suffix
    check_suffix(weights, '.pt')  # check weights
    # load pretrained weights; yolov5 provides five pretrained checkpoints, pick the one that matches your model
    pretrained = weights.endswith('.pt')
    if pretrained:
        # torch_distributed_zero_first(RANK): context manager that synchronizes data reads across processes
        with torch_distributed_zero_first(LOCAL_RANK):
            # download the weights if they are not found locally
            weights = attempt_download(weights)  # download if not found locally
        # load the checkpoint and its parameters
        ckpt = torch.load(weights, map_location='cpu')  # load checkpoint to CPU to avoid CUDA memory leak
        """
        Two ways to build the model: opt.cfg or ckpt['model'].yaml
        When resuming, the model is created from ckpt['model'].yaml and the anchors are not loaded:
        the checkpoint saved during training already contains its anchors, so there is no need to load them again.
        """
        model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys
        csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
        # intersect the two state dicts, dropping the keys listed in exclude
        csd = intersect_dicts(csd, model.state_dict(), exclude=exclude)  # intersect
        model.load_state_dict(csd, strict=False)  # load
        LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}')  # report
    else:
        # without pretrained weights
        model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create

3.3 train: freezing layers

################################################ 3. Freeze #########################################
    # network layers to freeze
    freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))]  # layers to freeze
    for k, v in model.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze):
            LOGGER.info(f'freezing {k}')
            # frozen layers get no gradient updates
            v.requires_grad = False
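
The list comprehension turns --freeze into parameter-name prefixes: a single value N freezes modules 0 to N-1 (so 10 freezes the backbone), while several values freeze exactly those indices. A quick standalone check of both forms:

for freeze in ([10], [0, 1, 2]):
    layers = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))]
    print(freeze, '->', layers)
# [10]      -> ['model.0.', 'model.1.', ..., 'model.9.']  (the 10 backbone modules)
# [0, 1, 2] -> ['model.0.', 'model.1.', 'model.2.']  (the first three modules)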

3.4 train: image size / batch size

# Image size
    gs = max(int(model.stride.max()), 32)  # grid size (max stride)
    # check the image size
    imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2)  # verify imgsz is gs-multiple

    # Batch size
    if RANK == -1 and batch_size == -1:  # single-GPU only, estimate best batch size
        batch_size = check_train_batch_size(model, imgsz)
        loggers.on_params_update({"batch_size": batch_size})

3.5 train: optimizer selection / parameter groups

############################################ 4. Optimizer ###########################################
    """
    nbs = 64
    batch_size = 16
    accumulate = 64 / 16 = 4
    Gradients are accumulated over accumulate batches before each model update,
    which is equivalent to training with a larger batch size.
    """
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing
    # scale the weight-decay term
    hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay
    # log it
    LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}")

    # split the model parameters into three groups (BN weights, conv/linear weights, biases) for group-wise optimization
    g0, g1, g2 = [], [], []  # optimizer parameter groups
    for v in model.modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias
            g2.append(v.bias)
        if isinstance(v, nn.BatchNorm2d):  # weight (no decay)
            g0.append(v.weight)
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)
            g1.append(v.weight)
    # choose the optimizer (three are provided); g0 is passed to the constructor
    if opt.optimizer == 'Adam':
        optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    elif opt.optimizer == 'AdamW':
        optimizer = AdamW(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
    # add the remaining parameter groups: g1 / g2
    optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']})  # add g1 with weight_decay
    optimizer.add_param_group({'params': g2})  # add g2 (biases)
    # log the optimizer info
    LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups "
                f"{len(g0)} weight (no decay), {len(g1)} weight, {len(g2)} bias")
    # delete the temporaries
    del g0, g1, g2
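
The same accumulate-then-step pattern, stripped of AMP and the warmup logic, can be sketched on a toy model to show why it emulates a larger batch size (gradients from several small batches are summed before one optimizer step):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
nbs, batch_size = 64, 16
accumulate = max(round(nbs / batch_size), 1)  # 4

optimizer.zero_grad()
for i in range(8):  # 8 toy batches
    x, y = torch.randn(batch_size, 10), torch.randn(batch_size, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradients accumulate across iterations
    if (i + 1) % accumulate == 0:  # step once every `accumulate` batches
        optimizer.step()
        optimizer.zero_grad()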

3.6 train: learning rate / EMA / normalization / single-machine multi-GPU

############################################ 5. Scheduler ##############################################
    # cosine or linear learning-rate schedule
    if opt.cos_lr:
        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
    else:
        lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)

    # EMA
    # EMA (exponential moving average) keeps a running average of the model parameters, weighting recent values more heavily, to improve test metrics and make the model more robust
    ema = ModelEMA(model) if RANK in [-1, 0] else None

    # Resume
    start_epoch, best_fitness = 0, 0.0
    if pretrained:
        # Optimizer
        if ckpt['optimizer'] is not None:
            optimizer.load_state_dict(ckpt['optimizer'])
            best_fitness = ckpt['best_fitness']

        # EMA
        if ema and ckpt.get('ema'):
            ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
            ema.updates = ckpt['updates']

        # Epochs
        start_epoch = ckpt['epoch'] + 1
        if resume:
            assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.'
        if epochs < start_epoch:
            LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
            epochs += ckpt['epoch']  # finetune additional epochs

        del ckpt, csd

    # DP mode
    # DP: single-machine multi-GPU mode
    if cuda and RANK == -1 and torch.cuda.device_count() > 1:
        LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n'
                       'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
        model = torch.nn.DataParallel(model)

    # SyncBatchNorm: synchronized batch normalization across GPUs
    if opt.sync_bn and cuda and RANK != -1:
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
        # log it
        LOGGER.info('Using SyncBatchNorm()')
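
Both schedules are plain functions of the epoch index that LambdaLR multiplies into each group's base lr. A sketch of the two multipliers, with one_cycle written out the way utils/general.py defines it (to the best of my reading of that helper):

import math

epochs, lrf = 300, 0.01

def one_cycle(y1=0.0, y2=1.0, steps=100):  # cosine ramp from y1 to y2 over `steps`
    return lambda x: ((1 - math.cos(x * math.pi / steps)) / 2) * (y2 - y1) + y1

cos_lf = one_cycle(1, lrf, epochs)  # cosine: 1 -> lrf
lin_lf = lambda x: (1 - x / epochs) * (1.0 - lrf) + lrf  # linear: 1 -> lrf
for e in (0, 150, 299):
    print(e, round(cos_lf(e), 4), round(lin_lf(e), 4))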

3.7 train: data loading / anchor adjustment

# ############################################## 6. Trainloader ######################################
    # training-set dataloader
    train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,
                                              hyp=hyp, augment=True, cache=None if opt.cache == 'val' else opt.cache,
                                              rect=opt.rect, rank=LOCAL_RANK, workers=workers,
                                              image_weights=opt.image_weights, quad=opt.quad,
                                              prefix=colorstr('train: '), shuffle=True)
    # largest label class index
    mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max())  # max label class
    # number of batches
    nb = len(train_loader)  # number of batches
    # check that the class indices are valid
    assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'

    # Process 0
    # validation-set dataloader
    if RANK in [-1, 0]:
        val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,
                                       hyp=hyp, cache=None if noval else opt.cache,
                                       rect=True, rank=-1, workers=workers * 2, pad=0.5,
                                       prefix=colorstr('val: '))[0]
        # not resuming
        if not resume:
            labels = np.concatenate(dataset.labels, 0)
            # c = torch.tensor(labels[:, 0])  # classes
            # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
            # model._initialize_biases(cf.to(device))
            if plots:
                # plot the label statistics
                plot_labels(labels, names, save_dir)

            # Anchors
            # AutoAnchor: an anchor can be thought of as a prior box the network predicts from
            # new anchors are generated by k-means clustering
            if not opt.noautoanchor:
                # dataset is the training set; hyp['anchor_t'] is read from the hyp yaml (anchor_t: 4.0)
                # anchors are recomputed only when the bpr (best possible recall) of the configured anchors is below 0.98
                # bpr is at most 1; if it falls below 0.98, anchor sizes are learned automatically from the dataset labels
                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
            # reduce to half precision
            model.half().float()  # pre-reduce anchor precision
        callbacks.run('on_pretrain_routine_end')
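
check_anchors scores the current anchors with a ratio metric: a label is considered recallable if, for some anchor, both its width and height ratios are within anchor_t. A simplified numpy sketch of the bpr computation (a paraphrase of utils/autoanchor.py, not the exact code):

import numpy as np

wh = np.array([[30, 60], [120, 90], [500, 400]], dtype=float)  # label (w, h), toy values
anchors = np.array([[10, 13], [62, 45], [373, 326]], dtype=float)  # anchor (w, h)
thr = 4.0  # hyp['anchor_t']

r = wh[:, None] / anchors[None]  # ratio of every label to every anchor, shape (labels, anchors, 2)
x = np.minimum(r, 1 / r).min(axis=2)  # worst of the w/h ratios per label-anchor pair
best = x.max(axis=1)  # best anchor per label
bpr = (best > 1 / thr).mean()  # fraction of labels recallable at threshold thr
print(f'bpr = {bpr:.3f}')  # anchors are recomputed if bpr < 0.98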

3.8 train: the training loop / multi-scale training / warmup

# #################################################### 7. Training ###############################################
    # DDP mode
    # DDP: multi-machine multi-GPU
    if cuda and RANK != -1:
        model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)

    # Model attributes
    nl = de_parallel(model).model[-1].nl  # number of detection layers (to scale hyps)
    hyp['box'] *= 3 / nl  # scale to layers
    hyp['cls'] *= nc / 80 * 3 / nl  # scale to classes and layers
    hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl  # scale to image size and layers
    # label smoothing
    hyp['label_smoothing'] = opt.label_smoothing
    model.nc = nc  # attach number of classes to model
    model.hyp = hyp  # attach hyperparameters to model
    # class weights computed from the training labels (inversely proportional to the number of objects per class, i.e. class frequency)
    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc  # attach class weights
    model.names = names

    # Start training
    t0 = time.time()
    # number of warmup iterations: max(3 epochs, 100 iterations)
    nw = max(round(hyp['warmup_epochs'] * nb), 100)  # number of warmup iterations, max(3 epochs, 100 iterations)
    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
    last_opt_step = -1
    # initialize maps (per-class mAP) and results
    maps = np.zeros(nc)  # mAP per class
    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
    # record the epoch the scheduler has reached, so a resumed run continues the lr decay where the interrupted run left off
    scheduler.last_epoch = start_epoch - 1  # do not move
    # AMP mixed-precision training
    scaler = amp.GradScaler(enabled=cuda)
    # early stopping: end training when fitness stops improving
    stopper = EarlyStopping(patience=opt.patience)
    # initialize the loss function
    compute_loss = ComputeLoss(model)  # init loss class
    # log the settings
    LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
                f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
                f"Logging results to {colorstr('bold', save_dir)}\n"
                f'Starting training for {epochs} epochs...')
    # start training
    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
        model.train()

        # Update image weights (optional, single-GPU only)
        # opt.image_weights
        if opt.image_weights:
            """
            如果設置進行圖片采樣策略,
            則根據(jù)前面初始化的圖片采樣權重model.class_weights以及maps配合每張圖片包含的類別數(shù)
            通過random.choices生成圖片索引indices從而進行采樣
            """
            cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights
            iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
            dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx

        # Update mosaic border (optional)
        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders

        mloss = torch.zeros(3, device=device)  # mean losses
        if RANK != -1:
            train_loader.sampler.set_epoch(epoch)
        pbar = enumerate(train_loader)
        LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size'))
        if RANK in [-1, 0]:
            # progress bar
            pbar = tqdm(pbar, total=nb, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar
        # zero the gradients
        optimizer.zero_grad()
        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
            ni = i + nb * epoch  # number integrated batches (since train start)
            imgs = imgs.to(device, non_blocking=True).float() / 255  # uint8 to float32, 0-255 to 0.0-1.0

            """
            熱身訓練(前nw次迭代)
            在前nw次迭代中, 根據(jù)以下方式選取accumulate和學習率
            """
            # Warmup
            if ni <= nw:
                xi = [0, nw]  # x interp
                # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0])  # iou loss ratio (obj_loss = 1.0 or iou)
                accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
                for j, x in enumerate(optimizer.param_groups):
                    """
                    bias的學習率從0.1下降到基準學習率lr*lf(epoch),
                    其他的參數(shù)學習率從0增加到lr*lf(epoch).
                    lf為上面設置的余弦退火的衰減函數(shù)
                    動量momentum也從0.9慢慢變到hyp['momentum'](default=0.937)
                    """

                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                    if 'momentum' in x:
                        x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])

            # Multi-scale
            if opt.multi_scale:
                """
                Multi-scale  設置多尺度訓練,從imgsz * 0.5, imgsz * 1.5 + gs隨機選取尺寸
                """
                sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
                sf = sz / max(imgs.shape[2:])  # scale factor
                if sf != 1:
                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)
                    imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)

            # Forward pass
            with amp.autocast(enabled=cuda):
                pred = model(imgs)  # forward
                # compute the loss: classification, objectness and box-regression terms
                # loss is the total loss; loss_items is a tuple of the box, objectness and classification losses
                loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                if RANK != -1:
                    # average the gradients between GPUs
                    loss *= WORLD_SIZE  # gradient averaged between devices in DDP mode
                if opt.quad:
                    loss *= 4.

            # Backward
            scaler.scale(loss).backward()

            # Optimize: update the parameters once from the accumulated gradients, every accumulate backward passes
            if ni - last_opt_step >= accumulate:
                scaler.step(optimizer)  # optimizer.step
                scaler.update()
                optimizer.zero_grad()
                if ema:
                    ema.update(model)
                last_opt_step = ni

            # Log
            if RANK in [-1, 0]:
                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G'  # (GB)
                pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % (
                    f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
                callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn)
                if callbacks.stop_training:
                    return
            # end batch ------------------------------------------------------------------------------------------------

        # Scheduler: step the learning-rate decay
        lr = [x['lr'] for x in optimizer.param_groups]  # for loggers
        scheduler.step()

        if RANK in [-1, 0]:
            # mAP
            callbacks.run('on_train_epoch_end', epoch=epoch)
            # copy the listed attributes from model to ema
            ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
            # is this the final epoch?
            final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
            # noval: validate only the final epoch. True: only the last epoch is evaluated; False: mAP is computed after every epoch
            if not noval or final_epoch:  # Calculate mAP
                """
                測試使用的是ema(指數(shù)移動平均 對模型的參數(shù)做平均)的模型
                results: [1] Precision 所有類別的平均precision(最大f1時)
                         [1] Recall 所有類別的平均recall
                         [1] map@0.5 所有類別的平均mAP@0.5
                         [1] map@0.5:0.95 所有類別的平均mAP@0.5:0.95
                         [1] box_loss 驗證集回歸損失, obj_loss 驗證集置信度損失, cls_loss 驗證集分類損失
                maps: [80] 所有類別的mAP@0.5:0.95
                """
                results, maps, _ = val.run(data_dict,
                                           batch_size=batch_size // WORLD_SIZE * 2,
                                           imgsz=imgsz,
                                           model=ema.ema,
                                           single_cls=single_cls,
                                           dataloader=val_loader,
                                           save_dir=save_dir,
                                           plots=False,
                                           callbacks=callbacks,
                                           compute_loss=compute_loss)

            # Update best mAP
            # the "best mAP" here is actually a weighted combination of [P, R, mAP@.5, mAP@.5-.95]:
            # fi = 0.1 * mAP@.5 + 0.9 * mAP@.5-.95
            fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
            if fi > best_fitness:
                best_fitness = fi
            log_vals = list(mloss) + list(results) + lr
            callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)

            # Save model
            """
            保存帶checkpoint的模型用于inference或resuming training
            保存模型, 還保存了epoch, results, optimizer等信息
            optimizer將不會在最后一輪完成后保存
            model保存的是EMA的模型
            """
            if (not nosave) or (final_epoch and not evolve):  # if save
                ckpt = {'epoch': epoch,
                        'best_fitness': best_fitness,
                        'model': deepcopy(de_parallel(model)).half(),
                        'ema': deepcopy(ema.ema).half(),
                        'updates': ema.updates,
                        'optimizer': optimizer.state_dict(),
                        'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None,
                        'date': datetime.now().isoformat()}

                # Save last, best and delete
                torch.save(ckpt, last)
                if best_fitness == fi:
                    torch.save(ckpt, best)
                if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0):
                    torch.save(ckpt, w / f'epoch{epoch}.pt')
                del ckpt
                callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)

            # Stop Single-GPU
            if RANK == -1 and stopper(epoch=epoch, fitness=fi):
                break

            # Stop DDP TODO: known issues https://github.com/ultralytics/yolov5/pull/4576
            # stop = stopper(epoch=epoch, fitness=fi)
            # if RANK == 0:
            #    dist.broadcast_object_list([stop], 0)  # broadcast 'stop' to all ranks

        # Stop DPP
        # with torch_distributed_zero_first(RANK):
        # if stop:
        #    break  # must break all DDP ranks
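
fitness() is the quantity that both EarlyStopping and best.pt selection rank epochs by; it collapses (P, R, mAP@.5, mAP@.5:.95) into one number with nearly all the weight on mAP@.5:.95. A sketch of that weighting, with the weights as I recall them from utils/metrics.py:

import numpy as np

def fitness(x):
    w = [0.0, 0.0, 0.1, 0.9]  # weights for [P, R, mAP@0.5, mAP@0.5:0.95]
    return (x[:, :4] * w).sum(1)

results = np.array([[0.7, 0.6, 0.55, 0.35]])  # toy (P, R, mAP@.5, mAP@.5:.95)
print(fitness(results))  # [0.37] = 0.1 * 0.55 + 0.9 * 0.35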

3.9 train: end of training / logging / saving results

############################################### 8. Final logging ##########################################
    if RANK in [-1, 0]:
        LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')
        for f in last, best:
            if f.exists():
                # After training, strip_optimizer removes the optimizer state from the checkpoint
                # and calls model.half() (Float32 -> Float16) to shrink the file and speed up inference
                strip_optimizer(f)  # strip optimizers
                if f is best:
                    LOGGER.info(f'\nValidating {f}...')
                    results, _, _ = val.run(data_dict,
                                            batch_size=batch_size // WORLD_SIZE * 2,
                                            imgsz=imgsz,
                                            model=attempt_load(f, device).half(),
                                            iou_thres=0.65 if is_coco else 0.60,  # best pycocotools results at 0.65
                                            single_cls=single_cls,
                                            dataloader=val_loader,
                                            save_dir=save_dir,
                                            save_json=is_coco,
                                            verbose=True,
                                            plots=True,
                                            callbacks=callbacks,
                                            compute_loss=compute_loss)  # val best model with plots
                    if is_coco:
                        callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)
        # callbacks
        callbacks.run('on_train_end', last, best, plots, epoch, results)
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
    # free GPU memory
    torch.cuda.empty_cache()
    return results

4. The run function

def run(**kwargs):
    # run this script / call train() / start training
    # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt')
    opt = parse_opt(True)
    for k, v in kwargs.items():
        # setattr() sets an attribute, creating it if it does not already exist
        setattr(opt, k, v)
    main(opt)
    return opt

5. The complete annotated code

# YOLOv5 ?? by Ultralytics, GPL-3.0 license
"""
Train a YOLOv5 model on a custom dataset.

Models and datasets download automatically from the latest YOLOv5 release.
Models: https://github.com/ultralytics/yolov5/tree/master/models
Datasets: https://github.com/ultralytics/yolov5/tree/master/data
Tutorial: https://github.com/ultralytics/yolov5/wiki/Train-Custom-Data

Usage:
    $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640  # from pretrained (RECOMMENDED)
    $ python path/to/train.py --data coco128.yaml --weights '' --cfg yolov5s.yaml --img 640  # from scratch
"""

import argparse
import math
import os
import random
import sys
import time
from copy import deepcopy
from datetime import datetime
from pathlib import Path

import numpy as np
import torch
import torch.distributed as dist
import torch.nn as nn
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.optim import SGD, Adam, AdamW, lr_scheduler
from tqdm import tqdm

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

import val  # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.autobatch import check_train_batch_size
from utils.callbacks import Callbacks
from utils.datasets import create_dataloader
from utils.downloads import attempt_download
from utils.general import (LOGGER, check_dataset, check_file, check_git_status, check_img_size, check_requirements,
                           check_suffix, check_yaml, colorstr, get_latest_run, increment_path, init_seeds,
                           intersect_dicts, labels_to_class_weights, labels_to_image_weights, methods, one_cycle,
                           print_args, print_mutation, strip_optimizer)
from utils.loggers import Loggers
from utils.loggers.wandb.wandb_utils import check_wandb_resume
from utils.loss import ComputeLoss
from utils.metrics import fitness
from utils.plots import plot_evolve, plot_labels
from utils.torch_utils import EarlyStopping, ModelEMA, de_parallel, select_device, torch_distributed_zero_first

LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))


def train(hyp,  # path/to/hyp.yaml or hyp dictionary
          opt,
          device,
          callbacks
          ):
    ################################################ 1. Arguments / basic setup #############################################
    # unpack the arguments passed in via opt
    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, freeze = \
        Path(opt.save_dir), opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
        opt.resume, opt.noval, opt.nosave, opt.workers, opt.freeze

    # Directories
    w = save_dir / 'weights'  # weights dir
    # create the directories: weights / train / evolve
    (w.parent if evolve else w).mkdir(parents=True, exist_ok=True)  # make dir
    # directory for training results, e.g. runs/train/exp*/weights/last.pt
    last, best = w / 'last.pt', w / 'best.pt'

    # Hyperparameters; isinstance() checks whether hyp is still a path string
    if isinstance(hyp, str):
        with open(hyp, errors='ignore') as f:
            # load the yaml file
            hyp = yaml.safe_load(f)  # load hyps dict
    # print the hyperparameters in color
    LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))

    # Save run settings
    # if evolution is not used
    if not evolve:
        # safe_dump() serializes Python values to yaml
        with open(save_dir / 'hyp.yaml', 'w') as f:
            yaml.safe_dump(hyp, f, sort_keys=False)
        with open(save_dir / 'opt.yaml', 'w') as f:
            # vars(opt) converts the Namespace into a dictionary
            yaml.safe_dump(vars(opt), f, sort_keys=False)

    # Loggers
    data_dict = None
    if RANK in [-1, 0]:
        loggers = Loggers(save_dir, weights, opt, hyp, LOGGER)  # loggers instance
        if loggers.wandb:
            data_dict = loggers.wandb.data_dict
            if resume:
                weights, epochs, hyp, batch_size = opt.weights, opt.epochs, opt.hyp, opt.batch_size

        # Register actions
        for k in methods(loggers):
            callbacks.register_action(k, callback=getattr(loggers, k))

    # Config / plotting
    plots = not evolve  # create plots
    # GPU / CPU
    cuda = device.type != 'cpu'
    # random seeds
    init_seeds(1 + RANK)
    # child processes exist: distributed training
    with torch_distributed_zero_first(LOCAL_RANK):
        data_dict = data_dict or check_dataset(data)  # check if None
    # paths of the training and validation sets
    train_path, val_path = data_dict['train'], data_dict['val']
    # number of classes; single-class or not
    nc = 1 if single_cls else int(data_dict['nc'])  # number of classes
    # class names
    names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
    # check that the number of names matches nc
    assert len(names) == nc, f'{len(names)} names found for nc={nc} dataset in {data}'  # check
    # whether the dataset is COCO (80 classes)
    is_coco = isinstance(val_path, str) and val_path.endswith('coco/val2017.txt')  # COCO dataset

    ################################################### 2. Model ###########################################
    # check that the weights file has a .pt suffix
    check_suffix(weights, '.pt')  # check weights
    # load pretrained weights; yolov5 provides five pretrained checkpoints, pick the one that matches your model
    pretrained = weights.endswith('.pt')
    if pretrained:
        # torch_distributed_zero_first(RANK): context manager that synchronizes data reads across processes
        with torch_distributed_zero_first(LOCAL_RANK):
            # download the weights if they are not found locally
            weights = attempt_download(weights)  # download if not found locally
        # load the checkpoint and its parameters
        ckpt = torch.load(weights, map_location='cpu')  # load checkpoint to CPU to avoid CUDA memory leak
        """
        Two ways to build the model: opt.cfg or ckpt['model'].yaml
        When resuming, the model is created from ckpt['model'].yaml and the anchors are not loaded:
        the checkpoint saved during training already contains its anchors, so there is no need to load them again.
        """
        model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys
        csd = ckpt['model'].float().state_dict()  # checkpoint state_dict as FP32
        # intersect the two state dicts, dropping the keys listed in exclude
        csd = intersect_dicts(csd, model.state_dict(), exclude=exclude)  # intersect
        model.load_state_dict(csd, strict=False)  # load
        LOGGER.info(f'Transferred {len(csd)}/{len(model.state_dict())} items from {weights}')  # report
    else:
        # without pretrained weights
        model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create

    ################################################ 3. Freeze #########################################
    # network layers to freeze
    freeze = [f'model.{x}.' for x in (freeze if len(freeze) > 1 else range(freeze[0]))]  # layers to freeze
    for k, v in model.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze):
            LOGGER.info(f'freezing {k}')
            # frozen layers get no gradient updates
            v.requires_grad = False

    # Image size
    gs = max(int(model.stride.max()), 32)  # grid size (max stride)
    # check the image size
    imgsz = check_img_size(opt.imgsz, gs, floor=gs * 2)  # verify imgsz is gs-multiple

    # Batch size
    if RANK == -1 and batch_size == -1:  # single-GPU only, estimate best batch size
        batch_size = check_train_batch_size(model, imgsz)
        loggers.on_params_update({"batch_size": batch_size})

    ############################################ 4. Optimizer ###########################################
    """
    nbs = 64
    batch_size = 16
    accumulate = 64 / 16 = 4
    Gradients are accumulated over accumulate batches before each model update,
    which is equivalent to training with a larger batch size.
    """
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing
    # scale the weight-decay term
    hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay
    # log it
    LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}")

    # split the model parameters into three groups (BN weights, conv/linear weights, biases) for group-wise optimization
    g0, g1, g2 = [], [], []  # optimizer parameter groups
    for v in model.modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):  # bias
            g2.append(v.bias)
        if isinstance(v, nn.BatchNorm2d):  # weight (no decay)
            g0.append(v.weight)
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):  # weight (with decay)
            g1.append(v.weight)
    # choose the optimizer (three are provided); g0 is passed to the constructor
    if opt.optimizer == 'Adam':
        optimizer = Adam(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    elif opt.optimizer == 'AdamW':
        optimizer = AdamW(g0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = SGD(g0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
    # add the remaining parameter groups: g1 / g2
    optimizer.add_param_group({'params': g1, 'weight_decay': hyp['weight_decay']})  # add g1 with weight_decay
    optimizer.add_param_group({'params': g2})  # add g2 (biases)
    # log the optimizer info
    LOGGER.info(f"{colorstr('optimizer:')} {type(optimizer).__name__} with parameter groups "
                f"{len(g0)} weight (no decay), {len(g1)} weight, {len(g2)} bias")
    # delete the temporaries
    del g0, g1, g2

    ############################################ 5. Scheduler ##############################################
    # cosine or linear learning-rate schedule
    if opt.cos_lr:
        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
    else:
        lf = lambda x: (1 - x / epochs) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)  # plot_lr_scheduler(optimizer, scheduler, epochs)

    # EMA
    # EMA (exponential moving average) keeps a running average of the model parameters, weighting recent values more heavily, to improve test metrics and make the model more robust
    ema = ModelEMA(model) if RANK in [-1, 0] else None

    # Resume
    start_epoch, best_fitness = 0, 0.0
    if pretrained:
        # Optimizer
        if ckpt['optimizer'] is not None:
            optimizer.load_state_dict(ckpt['optimizer'])
            best_fitness = ckpt['best_fitness']

        # EMA
        if ema and ckpt.get('ema'):
            ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
            ema.updates = ckpt['updates']

        # Epochs
        start_epoch = ckpt['epoch'] + 1
        if resume:
            assert start_epoch > 0, f'{weights} training to {epochs} epochs is finished, nothing to resume.'
        if epochs < start_epoch:
            LOGGER.info(f"{weights} has been trained for {ckpt['epoch']} epochs. Fine-tuning for {epochs} more epochs.")
            epochs += ckpt['epoch']  # finetune additional epochs

        del ckpt, csd

    # DP mode
    # DP: single-machine multi-GPU mode
    if cuda and RANK == -1 and torch.cuda.device_count() > 1:
        LOGGER.warning('WARNING: DP not recommended, use torch.distributed.run for best DDP Multi-GPU results.\n'
                       'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
        model = torch.nn.DataParallel(model)

    # SyncBatchNorm: synchronized batch normalization across GPUs
    if opt.sync_bn and cuda and RANK != -1:
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
        # log it
        LOGGER.info('Using SyncBatchNorm()')

    # ############################################## 6. Trainloader ######################################
    # training-set dataloader
    train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,
                                              hyp=hyp, augment=True, cache=None if opt.cache == 'val' else opt.cache,
                                              rect=opt.rect, rank=LOCAL_RANK, workers=workers,
                                              image_weights=opt.image_weights, quad=opt.quad,
                                              prefix=colorstr('train: '), shuffle=True)
    # largest label class index
    mlc = int(np.concatenate(dataset.labels, 0)[:, 0].max())  # max label class
    # number of batches
    nb = len(train_loader)  # number of batches
    # check that the class indices are valid
    assert mlc < nc, f'Label class {mlc} exceeds nc={nc} in {data}. Possible class labels are 0-{nc - 1}'

    # Process 0
    # validation-set dataloader
    if RANK in [-1, 0]:
        val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,
                                       hyp=hyp, cache=None if noval else opt.cache,
                                       rect=True, rank=-1, workers=workers * 2, pad=0.5,
                                       prefix=colorstr('val: '))[0]
        # not resuming
        if not resume:
            labels = np.concatenate(dataset.labels, 0)
            # c = torch.tensor(labels[:, 0])  # classes
            # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
            # model._initialize_biases(cf.to(device))
            if plots:
                # plot the label statistics
                plot_labels(labels, names, save_dir)

            # Anchors
            # AutoAnchor: an anchor can be thought of as a prior box the network predicts from
            # new anchors are generated by k-means clustering
            if not opt.noautoanchor:
                # dataset is the training set; hyp['anchor_t'] is read from the hyp yaml (anchor_t: 4.0)
                # anchors are recomputed only when the bpr (best possible recall) of the configured anchors is below 0.98
                # bpr is at most 1; if it falls below 0.98, anchor sizes are learned automatically from the dataset labels
                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
            # reduce to half precision
            model.half().float()  # pre-reduce anchor precision
        callbacks.run('on_pretrain_routine_end')

    # #################################################### 7. Training ###############################################
    # DDP mode
    # DDP: multi-machine multi-GPU
    if cuda and RANK != -1:
        model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)

    # Model attributes
    nl = de_parallel(model).model[-1].nl  # number of detection layers (to scale hyps)
    hyp['box'] *= 3 / nl  # scale to layers
    hyp['cls'] *= nc / 80 * 3 / nl  # scale to classes and layers
    hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl  # scale to image size and layers
    # label smoothing
    hyp['label_smoothing'] = opt.label_smoothing
    model.nc = nc  # attach number of classes to model
    model.hyp = hyp  # attach hyperparameters to model
    # class weights computed from the training labels (inversely proportional to the number of objects per class, i.e. class frequency)
    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc  # attach class weights
    model.names = names

    # Start training
    t0 = time.time()
    # number of warmup iterations: max(3 epochs, 100 iterations)
    nw = max(round(hyp['warmup_epochs'] * nb), 100)  # number of warmup iterations, max(3 epochs, 100 iterations)
    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
    last_opt_step = -1
    # # 初始化maps(每個類別的map)和results
    maps = np.zeros(nc)  # mAP per class
    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
    # 設置學習率衰減所進行到的輪次,即使打斷訓練,使用resume接著訓練也能正常銜接之前的訓練進行學習率衰減
    scheduler.last_epoch = start_epoch - 1  # do not move
    # 設置amp混合精度訓練
    scaler = amp.GradScaler(enabled=cuda)
    # 早停止,不更新結束訓練
    stopper = EarlyStopping(patience=opt.patience)
    # 初始化損失函數(shù)
    compute_loss = ComputeLoss(model)  # init loss class
    # 打印信息
    LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
                f'Using {train_loader.num_workers * WORLD_SIZE} dataloader workers\n'
                f"Logging results to {colorstr('bold', save_dir)}\n"
                f'Starting training for {epochs} epochs...')
    # 開始走起訓練
    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
        model.train()

        # Update image weights (optional, single-GPU only)
        # opt.image_weights
        if opt.image_weights:
            """
            如果設置進行圖片采樣策略,
            則根據(jù)前面初始化的圖片采樣權重model.class_weights以及maps配合每張圖片包含的類別數(shù)
            通過random.choices生成圖片索引indices從而進行采樣
            """
            cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights
            iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
            dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx

        # Update mosaic border (optional)
        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders

        mloss = torch.zeros(3, device=device)  # mean losses
        if RANK != -1:
            train_loader.sampler.set_epoch(epoch)
        pbar = enumerate(train_loader)
        LOGGER.info(('\n' + '%10s' * 7) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'labels', 'img_size'))
        if RANK in [-1, 0]:
            # Progress bar display
            pbar = tqdm(pbar, total=nb, bar_format='{l_bar}{bar:10}{r_bar}{bar:-10b}')  # progress bar
        # Zero the gradients
        optimizer.zero_grad()
        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
            ni = i + nb * epoch  # number integrated batches (since train start)
            imgs = imgs.to(device, non_blocking=True).float() / 255  # uint8 to float32, 0-255 to 0.0-1.0

            """
            熱身訓練(前nw次迭代)
            在前nw次迭代中, 根據(jù)以下方式選取accumulate和學習率
            """
            # Warmup
            if ni <= nw:
                xi = [0, nw]  # x interp
                # compute_loss.gr = np.interp(ni, xi, [0.0, 1.0])  # iou loss ratio (obj_loss = 1.0 or iou)
                accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
                for j, x in enumerate(optimizer.param_groups):
                    """
                    bias的學習率從0.1下降到基準學習率lr*lf(epoch),
                    其他的參數(shù)學習率從0增加到lr*lf(epoch).
                    lf為上面設置的余弦退火的衰減函數(shù)
                    動量momentum也從0.9慢慢變到hyp['momentum'](default=0.937)
                    """

                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                    if 'momentum' in x:
                        x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
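            # A standalone sketch (hypothetical numbers) of how np.interp drives the warmup:
            #   nw, xi = 300, [0, 300]
            #   np.interp(0,   xi, [0.1, 0.01])  # -> 0.1    bias lr starts at warmup_bias_lr
            #   np.interp(150, xi, [0.1, 0.01])  # -> 0.055  halfway through warmup
            #   np.interp(400, xi, [0.1, 0.01])  # -> 0.01   clamped at the target lr once ni > nw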

            # Multi-scale
            if opt.multi_scale:
                """
                Multi-scale  設置多尺度訓練,從imgsz * 0.5, imgsz * 1.5 + gs隨機選取尺寸
                """
                sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
                sf = sz / max(imgs.shape[2:])  # scale factor
                if sf != 1:
                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)
                    imgs = nn.functional.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
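            # Example (assuming imgsz=640 and gs=32): randrange(320, 992) // 32 * 32 yields a
            # size in {320, 352, ..., 960}, and the whole batch is bilinearly resized to it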

            # Forward pass
            with amp.autocast(enabled=cuda):
                pred = model(imgs)  # forward
                # Compute the loss: box-regression, objectness and classification components.
                # loss is the total loss; loss_items is a tensor of the (box, obj, cls) losses
                loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                if RANK != -1:
                    # DDP averages gradients between devices, so scale the loss by WORLD_SIZE
                    loss *= WORLD_SIZE  # gradient averaged between devices in DDP mode
                if opt.quad:
                    loss *= 4.

            # Backward
            scaler.scale(loss).backward()

            # Optimize: update the weights only after gradients from `accumulate` batches have been accumulated
            if ni - last_opt_step >= accumulate:
                scaler.step(optimizer)  # optimizer.step
                scaler.update()
                optimizer.zero_grad()
                if ema:
                    ema.update(model)
                last_opt_step = ni
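            # Effective batch size sketch (nbs=64 is the nominal batch size defined earlier in train.py):
            #   with batch_size=16, accumulate = max(1, round(64 / 16)) = 4, so gradients from
            #   4 batches of 16 are accumulated before each optimizer step, emulating a batch of 64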

            # Log
            if RANK in [-1, 0]:
                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                mem = f'{torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0:.3g}G'  # (GB)
                pbar.set_description(('%10s' * 2 + '%10.4g' * 5) % (
                    f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1]))
                callbacks.run('on_train_batch_end', ni, model, imgs, targets, paths, plots, opt.sync_bn)
                if callbacks.stop_training:
                    return
            # end batch ------------------------------------------------------------------------------------------------

        # Scheduler: step the learning-rate schedule once per epoch
        lr = [x['lr'] for x in optimizer.param_groups]  # for loggers
        scheduler.step()

        if RANK in [-1, 0]:
            # mAP
            callbacks.run('on_train_epoch_end', epoch=epoch)
            # Copy model attributes over to the EMA model
            ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'names', 'stride', 'class_weights'])
            # Check whether this is the final epoch
            final_epoch = (epoch + 1 == epochs) or stopper.possible_stop
            # noval: True -> compute mAP only after the final epoch; False -> compute mAP every epoch
            if not noval or final_epoch:  # Calculate mAP
                """
                Validation runs on the EMA model (an exponential moving average of the model weights).
                results: [1] Precision: mean precision over all classes (at max F1)
                         [1] Recall: mean recall over all classes
                         [1] mAP@0.5: mean AP at IoU 0.5 over all classes
                         [1] mAP@0.5:0.95: mean AP averaged over IoU 0.5:0.95 over all classes
                         [3] val box_loss / obj_loss / cls_loss
                maps: [nc] per-class mAP@0.5:0.95
                """
                results, maps, _ = val.run(data_dict,
                                           batch_size=batch_size // WORLD_SIZE * 2,
                                           imgsz=imgsz,
                                           model=ema.ema,
                                           single_cls=single_cls,
                                           dataloader=val_loader,
                                           save_dir=save_dir,
                                           plots=False,
                                           callbacks=callbacks,
                                           compute_loss=compute_loss)

            # Update best mAP: "best" here means the best fitness, a weighted combination of [P, R, mAP@.5, mAP@.5:.95]
            # fi = 0.1 * mAP@.5 + 0.9 * mAP@.5:.95 (P and R carry zero weight)
            fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
            if fi > best_fitness:
                best_fitness = fi
            log_vals = list(mloss) + list(results) + lr
            callbacks.run('on_fit_epoch_end', log_vals, epoch, best_fitness, fi)
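            # The fitness function (paraphrased from utils/metrics.py) is roughly:
            #   w = [0.0, 0.0, 0.1, 0.9]   # weights for [P, R, mAP@.5, mAP@.5:.95]
            #   fi = (np.array(results).reshape(1, -1)[:, :4] * w).sum(1)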

            # Save model
            """
            保存帶checkpoint的模型用于inference或resuming training
            保存模型, 還保存了epoch, results, optimizer等信息
            optimizer將不會在最后一輪完成后保存
            model保存的是EMA的模型
            """
            if (not nosave) or (final_epoch and not evolve):  # if save
                ckpt = {'epoch': epoch,
                        'best_fitness': best_fitness,
                        'model': deepcopy(de_parallel(model)).half(),
                        'ema': deepcopy(ema.ema).half(),
                        'updates': ema.updates,
                        'optimizer': optimizer.state_dict(),
                        'wandb_id': loggers.wandb.wandb_run.id if loggers.wandb else None,
                        'date': datetime.now().isoformat()}

                # Save last, best and delete
                torch.save(ckpt, last)
                if best_fitness == fi:
                    torch.save(ckpt, best)
                if (epoch > 0) and (opt.save_period > 0) and (epoch % opt.save_period == 0):
                    torch.save(ckpt, w / f'epoch{epoch}.pt')
                del ckpt
                callbacks.run('on_model_save', last, epoch, final_epoch, best_fitness, fi)

            # Stop Single-GPU
            if RANK == -1 and stopper(epoch=epoch, fitness=fi):
                break

            # Stop DDP TODO: known issues https://github.com/ultralytics/yolov5/pull/4576
            # stop = stopper(epoch=epoch, fitness=fi)
            # if RANK == 0:
            #    dist.broadcast_object_list([stop], 0)  # broadcast 'stop' to all ranks

        # Stop DDP
        # with torch_distributed_zero_first(RANK):
        # if stop:
        #    break  # must break all DDP ranks

        # end epoch ----------------------------------------------------------------------------------------------------
    # end training --------------------------------------------------------------------------------------------------
    ############################################### 8. Print training summary ##########################################
    if RANK in [-1, 0]:
        LOGGER.info(f'\n{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.')
        for f in last, best:
            if f.exists():
                # After training, strip_optimizer() removes the optimizer state from the checkpoint
                # and converts the model to half precision (Float32 -> Float16), shrinking the file and speeding up inference
                strip_optimizer(f)  # strip optimizers
                if f is best:
                    LOGGER.info(f'\nValidating {f}...')
                    results, _, _ = val.run(data_dict,
                                            batch_size=batch_size // WORLD_SIZE * 2,
                                            imgsz=imgsz,
                                            model=attempt_load(f, device).half(),
                                            iou_thres=0.65 if is_coco else 0.60,  # best pycocotools results at 0.65
                                            single_cls=single_cls,
                                            dataloader=val_loader,
                                            save_dir=save_dir,
                                            save_json=is_coco,
                                            verbose=True,
                                            plots=True,
                                            callbacks=callbacks,
                                            compute_loss=compute_loss)  # val best model with plots
                    if is_coco:
                        callbacks.run('on_fit_epoch_end', list(mloss) + list(results) + lr, epoch, best_fitness, fi)
        # Final callback
        callbacks.run('on_train_end', last, best, plots, epoch, results)
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}")
    # Free cached GPU memory
    torch.cuda.empty_cache()
    return results
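
The EMA model used above for validation and checkpointing keeps an exponential moving average of the weights. Below is a minimal standalone sketch of the idea; it is not yolov5's utils/torch_utils.ModelEMA, which additionally ramps the decay up over the first updates and copies model attributes:

import copy
import torch
import torch.nn as nn

class SimpleEMA:
    def __init__(self, model: nn.Module, decay: float = 0.9999):
        self.ema = copy.deepcopy(model).eval()  # shadow copy; never trained directly
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    def update(self, model: nn.Module):
        # new_ema = decay * old_ema + (1 - decay) * current_weights
        with torch.no_grad():
            msd = model.state_dict()
            for k, v in self.ema.state_dict().items():
                if v.dtype.is_floating_point:
                    v.mul_(self.decay).add_(msd[k].detach(), alpha=1 - self.decay)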


def parse_opt(known=False):
    parser = argparse.ArgumentParser()
    # weights: path to the initial weights, e.g. ./weights/yolov5s.pt
    parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='initial weights path')
    # cfg: model config (network structure) anchor/backbone/num_classes/head; generate your own for custom data
    parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
    # data: dataset config (train/val paths, labels); generate your own for custom data
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    # hyp: hyperparameter file (lr/sgd/mixup ...)
    parser.add_argument('--hyp', type=str, default=ROOT / 'data/hyps/hyp.scratch-low.yaml', help='hyperparameters path')
    # epochs: number of training epochs
    parser.add_argument('--epochs', type=int, default=300)
    # batch-size: total batch size
    parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs, -1 for autobatch')
    # imgsz: train/val image size
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='train, val image size (pixels)')
    # rect: rectangular training, default False
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    # resume: resume the most recent training run, default False
    parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
    # nosave: only save the final checkpoint (by default both last.pt and best.pt are saved)
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    # noval: only validate after the final epoch
    parser.add_argument('--noval', action='store_true', help='only validate final epoch')
    # noautoanchor: disable AutoAnchor, default False
    parser.add_argument('--noautoanchor', action='store_true', help='disable AutoAnchor')
    # evolve: evolve hyperparameters with a genetic algorithm
    parser.add_argument('--evolve', type=int, nargs='?', const=300, help='evolve hyperparameters for x generations')
    # bucket: gsutil bucket
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    # cache: cache images in RAM or on disk to speed up training, default off
    parser.add_argument('--cache', type=str, nargs='?', const='ram', help='--cache images in "ram" (default) or "disk"')
    # image-weights: weighted image sampling during training, default off
    parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
    # device: device selection
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    # multi-scale: multi-scale training
    parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
    # single-cls: train multi-class data as a single class, default False
    parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
    # optimizer: optimizer choice
    parser.add_argument('--optimizer', type=str, choices=['SGD', 'Adam', 'AdamW'], default='SGD', help='optimizer')
    # sync-bn: synchronized BatchNorm across GPUs, only in DDP mode
    parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
    # workers: max number of dataloader workers
    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
    # project: save directory root
    parser.add_argument('--project', default=ROOT / 'runs/train', help='save to project/name')
    # name: experiment name
    parser.add_argument('--name', default='exp', help='save to project/name')
    # exist-ok: allow an existing project/name directory without incrementing the run number
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--quad', action='store_true', help='quad dataloader')
    # cos-lr: cosine learning-rate scheduler
    parser.add_argument('--cos-lr', action='store_true', help='cosine LR scheduler')
    # label-smoothing: label smoothing epsilon
    parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
    # patience: early-stopping patience
    parser.add_argument('--patience', type=int, default=100, help='EarlyStopping patience (epochs without improvement)')
    # freeze: which layers to freeze
    parser.add_argument('--freeze', nargs='+', type=int, default=[0], help='Freeze layers: backbone=10, first3=0 1 2')
    parser.add_argument('--save-period', type=int, default=-1, help='Save checkpoint every x epochs (disabled if < 1)')
    parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')

    # Weights & Biases arguments
    # Online experiment tracker similar to TensorBoard; see https://zhuanlan.zhihu.com/p/266337608 for an introduction
    parser.add_argument('--entity', default=None, help='W&B: Entity')
    # upload_dataset: upload the dataset to a W&B table (view, query, filter and analyse it interactively in the browser), default False
    parser.add_argument('--upload_dataset', nargs='?', const=True, default=False, help='W&B: Upload data, "val" option')
    # bbox_interval: bounding-box image logging interval for W&B, default -1 (set to opt.epochs // 10)
    parser.add_argument('--bbox_interval', type=int, default=-1, help='W&B: Set bounding-box image logging interval')
    # artifact_alias: which version of the dataset artifact to use
    parser.add_argument('--artifact_alias', type=str, default='latest', help='W&B: Version of dataset artifact to use')

    # parse_args() errors out on arguments it does not recognize, while parse_known_args()
    # returns (known_args, unknown_args) without failing; run(**kwargs) relies on the latter
    opt = parser.parse_known_args()[0] if known else parser.parse_args()
    return opt
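
A quick sketch of the difference between parse_args() and parse_known_args() (toy flag, not part of train.py):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--s', type=int, default=2, help='flag_int')

opt, unknown = parser.parse_known_args(['--s', '3', '--x', '1'])
print(opt.s, unknown)  # 3 ['--x', '1'] -- unknown flags are returned instead of raising
# parser.parse_args(['--s', '3', '--x', '1'])  # would exit with: unrecognized arguments: --x 1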


def main(opt, callbacks=Callbacks()):
    ############################################### 1. Checks ##################################################
    if RANK in [-1, 0]:
        # Print all training arguments (shown in color)
        print_args(FILE.stem, opt)
        # Check whether the local code is up to date with the repository
        check_git_status()
        # Check that every package in requirements.txt is installed; missing packages are installed automatically.
        # If that fails, try: pip install -i https://pypi.tuna.tsinghua.edu.cn/simple -r requirements.txt
        check_requirements(exclude=['thop'])

    ############################################### 2. Resume ##################################################
    # Initialize the W&B visualization tool; tutorial: https://zhuanlan.zhihu.com/p/266337608
    # Resume-from-checkpoint tutorial: https://blog.csdn.net/CharmsLUO/article/details/123410081
    if opt.resume and not check_wandb_resume(opt) and not opt.evolve:  # resume an interrupted run
        # If --resume was given a string, treat it as an explicit checkpoint path;
        # otherwise get_latest_run() finds the most recent last.pt under runs/
        ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run()  # specified or most recent path
        # Make sure the checkpoint file exists
        assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
        # Replace the current opt with the opt.yaml saved alongside the checkpoint (loaded with yaml.safe_load)
        with open(Path(ckpt).parent.parent / 'opt.yaml', errors='ignore') as f:
            # argparse.Namespace can be thought of as a dict exposed as attributes
            opt = argparse.Namespace(**yaml.safe_load(f))  # replace
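            # A toy illustration of the Namespace idea (not part of train.py):
            #   argparse.Namespace(**{'epochs': 300, 'resume': True}).epochs  # -> 300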
        opt.cfg, opt.weights, opt.resume = '', ckpt, True  # reinstate
        # Log that training is resuming from the checkpoint
        LOGGER.info(f'Resuming training from {ckpt}')
    else:
        # Otherwise, check and load the arguments that were passed in
        opt.data, opt.cfg, opt.hyp, opt.weights, opt.project = \
            check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project)  # checks
        assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
        # opt.evolve=False -> opt.name='exp'; opt.evolve=True -> runs are saved under runs/evolve
        if opt.evolve:
            if opt.project == str(ROOT / 'runs/train'):  # if default project name, rename to runs/evolve
                opt.project = str(ROOT / 'runs/evolve')
            opt.exist_ok, opt.resume = opt.resume, False  # pass resume to exist_ok and disable resume
        # Compose the save directory (auto-incremented unless --exist-ok)
        opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))

    # ############################################## 3. DDP mode ###############################################
    # Select the device (cpu/cuda)
    device = select_device(opt.device, batch_size=opt.batch_size)
    # Multi-GPU (DDP) training
    if LOCAL_RANK != -1:
        msg = 'is not compatible with YOLOv5 Multi-GPU DDP training'
        assert not opt.image_weights, f'--image-weights {msg}'
        assert not opt.evolve, f'--evolve {msg}'
        assert opt.batch_size != -1, f'AutoBatch with --batch-size -1 {msg}, please pass a valid --batch-size'
        assert opt.batch_size % WORLD_SIZE == 0, f'--batch-size {opt.batch_size} must be multiple of WORLD_SIZE'
        assert torch.cuda.device_count() > LOCAL_RANK, 'insufficient CUDA devices for DDP command'
        # Bind this process to its GPU by index.
        # torch.cuda.set_device() conveniently places the model and data on the right GPU;
        # call it once before building the model, e.g.:
        #   torch.cuda.set_device(gpu_id)                 # single GPU by index
        #   torch.cuda.set_device('cuda:' + str(gpu_id))  # a specific device by name
        torch.cuda.set_device(LOCAL_RANK)
        device = torch.device('cuda', LOCAL_RANK)
        # Initialize the distributed process group
        dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")

    ################################################ 4. Train #################################################
    # Without --evolve, call train() directly
    if not opt.evolve:
        train(opt.hyp, opt, device, callbacks)
        # Distributed training: WORLD_SIZE = number of processes/GPUs
        # After multi-GPU training, destroy the process group
        if WORLD_SIZE > 1 and RANK == 0:
            LOGGER.info('Destroying process group... ')
            dist.destroy_process_group()

    # Evolve hyperparameters (optional)
    # Genetic algorithm: the hyperparameters evolve while the model trains
    # (see the author's blog for background on genetic algorithms)
    else:
        # Hyperparameter evolution metadata: for each hyperparameter, (mutation scale 0-1, lower_limit, upper_limit)
        meta = {'lr0': (1, 1e-5, 1e-1),  # initial learning rate (SGD=1E-2, Adam=1E-3)
                'lrf': (1, 0.01, 1.0),  # final OneCycleLR learning rate (lr0 * lrf)
                'momentum': (0.3, 0.6, 0.98),  # SGD momentum/Adam beta1
                'weight_decay': (1, 0.0, 0.001),  # optimizer weight decay
                'warmup_epochs': (1, 0.0, 5.0),  # warmup epochs (fractions ok)
                'warmup_momentum': (1, 0.0, 0.95),  # warmup initial momentum
                'warmup_bias_lr': (1, 0.0, 0.2),  # warmup initial bias lr
                'box': (1, 0.02, 0.2),  # box loss gain
                'cls': (1, 0.2, 4.0),  # cls loss gain
                'cls_pw': (1, 0.5, 2.0),  # cls BCELoss positive_weight
                'obj': (1, 0.2, 4.0),  # obj loss gain (scale with pixels)
                'obj_pw': (1, 0.5, 2.0),  # obj BCELoss positive_weight
                'iou_t': (0, 0.1, 0.7),  # IoU training threshold
                'anchor_t': (1, 2.0, 8.0),  # anchor-multiple threshold
                'anchors': (2, 2.0, 10.0),  # anchors per output grid (0 to ignore)
                'fl_gamma': (0, 0.0, 2.0),  # focal loss gamma (efficientDet default gamma=1.5)
                'hsv_h': (1, 0.0, 0.1),  # image HSV-Hue augmentation (fraction)
                'hsv_s': (1, 0.0, 0.9),  # image HSV-Saturation augmentation (fraction)
                'hsv_v': (1, 0.0, 0.9),  # image HSV-Value augmentation (fraction)
                'degrees': (1, 0.0, 45.0),  # image rotation (+/- deg)
                'translate': (1, 0.0, 0.9),  # image translation (+/- fraction)
                'scale': (1, 0.0, 0.9),  # image scale (+/- gain)
                'shear': (1, 0.0, 10.0),  # image shear (+/- deg)
                'perspective': (0, 0.0, 0.001),  # image perspective (+/- fraction), range 0-0.001
                'flipud': (1, 0.0, 1.0),  # image flip up-down (probability)
                'fliplr': (0, 0.0, 1.0),  # image flip left-right (probability)
                'mosaic': (1, 0.0, 1.0),  # image mosaic (probability)
                'mixup': (1, 0.0, 1.0),  # image mixup (probability)
                'copy_paste': (1, 0.0, 1.0)}  # segment copy-paste (probability)

        with open(opt.hyp, errors='ignore') as f:
            # Load the hyperparameter dict from the yaml file
            hyp = yaml.safe_load(f)  # load hyps dict
            if 'anchors' not in hyp:  # anchors commented in hyp.yaml
                hyp['anchors'] = 3
        opt.noval, opt.nosave, save_dir = True, True, Path(opt.save_dir)  # only val/save final epoch
        # ei = [isinstance(x, (int, float)) for x in hyp.values()]  # evolvable indices
        # Files that record the evolved hyperparameters
        evolve_yaml, evolve_csv = save_dir / 'hyp_evolve.yaml', save_dir / 'evolve.csv'
        if opt.bucket:
            os.system(f'gsutil cp gs://{opt.bucket}/evolve.csv {evolve_csv}')  # download evolve.csv if exists
        """
        遺傳算法調(diào)參:遵循適者生存、優(yōu)勝劣汰的法則,即尋優(yōu)過程中保留有用的,去除無用的。
        遺傳算法需要提前設置4個參數(shù): 群體大小/進化代數(shù)/交叉概率/變異概率

        """

        # Evolve for opt.evolve generations (300 by default)
        for _ in range(opt.evolve):  # generations to evolve
            if evolve_csv.exists():  # if evolve.csv exists: select best hyps and mutate
                # Select parent(s)
                # Parent-selection method: 'single' or 'weighted'
                parent = 'single'  # parent selection method: 'single' or 'weighted'
                # Load the previous results from evolve.csv
                x = np.loadtxt(evolve_csv, ndmin=2, delimiter=',', skiprows=1)
                # Number of previous generations to consider
                n = min(5, len(x))  # number of previous results to consider
                x = x[np.argsort(-fitness(x))][:n]  # top n mutations
                # Weight each previous result by its fitness
                w = fitness(x) - fitness(x).min() + 1E-6  # weights (sum > 0)
                # Pick the base hyp according to the selection method
                if parent == 'single' or len(x) == 1:
                    # x = x[random.randint(0, n - 1)]  # random selection
                    x = x[random.choices(range(n), weights=w)[0]]  # weighted selection
                elif parent == 'weighted':
                    x = (x * w.reshape(n, 1)).sum(0) / w.sum()  # weighted combination

                # Mutate
                # Mutation setup
                mp, s = 0.8, 0.2  # mutation probability, sigma
                npr = np.random
                npr.seed(int(time.time()))
                g = np.array([meta[k][0] for k in hyp.keys()])  # gains 0-1
                ng = len(meta)
                v = np.ones(ng)
                # Apply the mutation
                while all(v == 1):  # mutate until a change occurs (prevent duplicates)
                    # Multiply the base hyp by the mutation vector v below.
                    # [i + 7] because the first 7 columns of x are the results metrics
                    # (P, R, mAP@.5, mAP@.5:.95, val box/obj/cls loss); the hyps come after them
                    v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
                for i, k in enumerate(hyp.keys()):  # plt.hist(v.ravel(), 300)
                    hyp[k] = float(x[i + 7] * v[i])  # mutate

            # Constrain to limits
            # Clip each hyperparameter back into its allowed range
            for k, v in meta.items():
                hyp[k] = max(hyp[k], v[1])  # lower limit
                hyp[k] = min(hyp[k], v[2])  # upper limit
                hyp[k] = round(hyp[k], 5)  # significant digits

            # Train mutation
            # Train with the mutated hyperparameters and evaluate the result
            results = train(hyp.copy(), opt, device, callbacks)
            callbacks = Callbacks()
            # Write mutation results
            # Each row of evolve.csv records one generation: the first 7 columns are the results
            # metrics (P, R, mAP@.5, mAP@.5:.95, val box/obj/cls loss), followed by the hyps;
            # the best hyps are also saved to hyp_evolve.yaml
            print_mutation(results, hyp.copy(), save_dir, opt.bucket)

        # Plot results
        # Visualize the evolution results and report where they were saved
        plot_evolve(evolve_csv)
        LOGGER.info(f'Hyperparameter evolution finished {opt.evolve} generations\n'
                    f"Results saved to {colorstr('bold', save_dir)}\n"
                    f'Usage example: $ python train.py --hyp {evolve_yaml}')
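
A standalone sketch (hypothetical gains and hyp values, not lifted from train.py) of the multiplicative mutation rule used in the loop above:

import numpy as np

npr = np.random
g = np.array([1.0, 0.3, 1.0])  # per-hyp mutation gains from meta (0 disables mutation)
mp, s = 0.8, 0.2               # mutation probability and sigma, as in train.py
v = np.ones(3)
while all(v == 1):             # retry until at least one hyp actually changes
    v = (g * (npr.random(3) < mp) * npr.randn(3) * npr.random() * s + 1).clip(0.3, 3.0)

hyp_old = np.array([0.01, 0.937, 0.05])  # e.g. lr0, momentum, box loss gain
hyp_new = hyp_old * v                    # each hyp is scaled by a factor in [0.3, 3.0]
print(hyp_new)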


def run(**kwargs):
    # Entry point for programmatic use: build opt, override it with kwargs, then call main()/train()
    # Usage: import train; train.run(data='coco128.yaml', imgsz=320, weights='yolov5m.pt')
    opt = parse_opt(True)
    for k, v in kwargs.items():
        # setattr() sets the attribute on opt, creating it if it does not already exist
        setattr(opt, k, v)
    main(opt)
    return opt


if __name__ == "__main__":
    # Example: resume a previous run
    # python train.py --data ./data/mchar.yaml --cfg yolov5l_mchar.yaml --epochs 80 --batch-size 8 --weights ./runs/train/exp7/weights/last.pt
    opt = parse_opt()
    main(opt)

Usage tutorial

Below is a sample command covering the options you are most likely to need; use it as a template and adjust the values for your own setup:

python train.py --cfg yolov5l_mchar.yaml --weights ./weights/yolov5s.pt --data ./data/mchar.yaml --epochs 200 --batch-size 8 --rect --noval --evolve 300 --image-weights --multi-scale --optimizer Adam --cos-lr --freeze 3 --bbox_interval 20


Summary

This concludes the annotated walkthrough of yolov5's train.py. For more on yolov5 and train.py, search 腳本之家's earlier articles, and thank you for your continued support of 腳本之家!
