
Computing the mean and standard deviation of a dataset for PyTorch normalization (Normalize): a worked example

Updated: 2020-01-15 14:56:36   Author: 菜鳥川

This post walks through an example of computing the mean and standard deviation of a dataset, as required by PyTorch's Normalize transform. It should serve as a useful reference; I hope it helps.

PyTorch normalization is done with transforms.Normalize(mean_vals, std_vals). Commonly used mean and standard deviation values for standard datasets are:

if 'coco' in args.dataset:
  mean_vals = [0.471, 0.448, 0.408]
  std_vals = [0.234, 0.239, 0.242]
elif 'imagenet' in args.dataset:
  mean_vals = [0.485, 0.456, 0.406]
  std_vals = [0.229, 0.224, 0.225]
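
For reference, here is a minimal sketch of how such values are typically passed to transforms.Normalize inside a torchvision pipeline. The resize size and the choice of ImageNet statistics below are illustrative assumptions, not part of the original post:

from torchvision import transforms

# ImageNet statistics from above; swap in your own dataset's values as needed
mean_vals = [0.485, 0.456, 0.406]
std_vals = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
  transforms.Resize((224, 224)),        # assumed input size, for illustration only
  transforms.ToTensor(),                # HWC uint8 image -> CHW float tensor in [0, 1]
  transforms.Normalize(mean_vals, std_vals),
])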

To compute the per-channel pixel mean and standard deviation of your own dataset:

import os
import random

import cv2
import numpy as np
from tqdm import tqdm_notebook

# calculate per-channel means and stds over a random sample of training images
train_txt_path = './train_val_list.txt'

CNum = 10000   # number of images to sample for the estimate

img_h, img_w = 32, 32
imgs = np.zeros([img_h, img_w, 3, 0])  # empty stack, so no dummy zero image biases the result
means, stdevs = [], []

with open(train_txt_path, 'r') as f:
  lines = f.readlines()
  random.shuffle(lines)  # shuffle so the sampled images are picked at random

  for i in tqdm_notebook(range(CNum)):
    img_path = os.path.join('./train', lines[i].rstrip().split()[0])

    img = cv2.imread(img_path)
    img = cv2.resize(img, (img_w, img_h))  # cv2.resize expects (width, height)
    img = img[:, :, :, np.newaxis]

    imgs = np.concatenate((imgs, img), axis=3)

imgs = imgs.astype(np.float32) / 255.

for i in tqdm_notebook(range(3)):
  pixels = imgs[:, :, i, :].ravel()  # flatten channel i into a single vector
  means.append(np.mean(pixels))
  stdevs.append(np.std(pixels))

# cv2 reads images in BGR order; PIL/Skimage read RGB and need no reversal
means.reverse()  # BGR --> RGB
stdevs.reverse()

print("normMean = {}".format(means))
print("normStd = {}".format(stdevs))
print('transforms.Normalize(normMean = {}, normStd = {})'.format(means, stdevs))
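
Note that calling np.concatenate in a loop copies the growing array on every iteration, which becomes slow and memory-hungry for large sample counts. As an alternative, here is a minimal sketch (compute_mean_std is a hypothetical helper, assuming the same file-list format as above) that accumulates per-channel sums and squared sums instead, keeping memory usage constant:

import os
import random

import cv2
import numpy as np

def compute_mean_std(train_txt_path='./train_val_list.txt', img_root='./train',
                     num_samples=10000, size=(32, 32)):
  # Accumulate per-channel sums and squared sums over the sampled pixels.
  channel_sum = np.zeros(3, dtype=np.float64)
  channel_sq_sum = np.zeros(3, dtype=np.float64)
  pixel_count = 0

  with open(train_txt_path, 'r') as f:
    lines = f.readlines()
  random.shuffle(lines)

  for line in lines[:num_samples]:
    img_path = os.path.join(img_root, line.rstrip().split()[0])
    img = cv2.imread(img_path)
    if img is None:          # skip unreadable files
      continue
    img = cv2.resize(img, size).astype(np.float64) / 255.
    channel_sum += img.sum(axis=(0, 1))
    channel_sq_sum += (img ** 2).sum(axis=(0, 1))
    pixel_count += img.shape[0] * img.shape[1]

  mean = channel_sum / pixel_count
  std = np.sqrt(channel_sq_sum / pixel_count - mean ** 2)  # Var[x] = E[x^2] - (E[x])^2
  # cv2 loads BGR, so reverse to RGB before passing to transforms.Normalize
  return mean[::-1].tolist(), std[::-1].tolist()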

That is the full content of this example of computing the mean and standard deviation of a dataset for PyTorch normalization (Normalize). I hope it serves as a useful reference, and I hope you will continue to support 腳本之家.
