How to split an image dataset into train, test and val in PyTorch
1. The dataset currently on hand is one big undivided pile of images, with no train/test/val split.
2. Directory structure:
|---data
    |---dslr
        |---images
            |---back_pack
                |---a.jpg
                |---b.jpg
                ...
3. After conversion, the directory structure is:
|---datanews
    |---dslr
        |---images
            |---test
            |---train
            |---valid
                |---back_pack
                    |---a.jpg
                    |---b.jpg
                    ...
4. The code is as follows:
4.1 First create the same directory hierarchy under the target location
4.2 Then split the original data according to the chosen ratio
4.3 Copy each file into the corresponding target directory
import os, random, shutil

def make_dir(source, target):
    '''
    Create the same directory hierarchy under the target as under the source.
    :param source: path of the source directory
    :param target: path of the target directory
    '''
    dir_names = os.listdir(source)
    for names in dir_names:
        for i in ['train', 'valid', 'test']:
            path = target + '/' + i + '/' + names
            if not os.path.exists(path):
                os.makedirs(path)

def divideTrainValiTest(source, target):
    '''
    Split the images of each class into train/valid/test and copy them
    into the corresponding target directories.
    :param source: path of the source directory
    :param target: path of the target directory
    '''
    # Get the class names under the source directory
    pic_name = os.listdir(source)
    # Process the images of each class
    for classes in pic_name:
        # Get the image file names of this class
        pic_classes_name = os.listdir(os.path.join(source, classes))
        random.shuffle(pic_classes_name)
        # Split with an 8:1:1 ratio
        train_list = pic_classes_name[0:int(0.8 * len(pic_classes_name))]
        valid_list = pic_classes_name[int(0.8 * len(pic_classes_name)):int(0.9 * len(pic_classes_name))]
        test_list = pic_classes_name[int(0.9 * len(pic_classes_name)):]
        # Copy each image into the corresponding folder
        for train_pic in train_list:
            shutil.copyfile(source + '/' + classes + '/' + train_pic, target + '/train/' + classes + '/' + train_pic)
        for validation_pic in valid_list:
            shutil.copyfile(source + '/' + classes + '/' + validation_pic,
                            target + '/valid/' + classes + '/' + validation_pic)
        for test_pic in test_list:
            shutil.copyfile(source + '/' + classes + '/' + test_pic, target + '/test/' + classes + '/' + test_pic)

if __name__ == '__main__':
    filepath = r'../data/dslr/images'
    dist = r'../datanews/dslr/images'
    make_dir(filepath, dist)
    divideTrainValiTest(filepath, dist)
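After the split, each of the train/valid/test directories contains one sub-folder per class, which is exactly the layout that torchvision's ImageFolder expects. The following is a minimal sketch (not part of the original article) of how the resulting folders could be loaded; the transform and batch size are illustrative choices only.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Illustrative transform; adjust resolution/normalization to your model
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each split directory contains one sub-folder per class (e.g. back_pack),
# which is the layout ImageFolder expects
train_set = datasets.ImageFolder(r'../datanews/dslr/images/train', transform=transform)
valid_set = datasets.ImageFolder(r'../datanews/dslr/images/valid', transform=transform)
test_set = datasets.ImageFolder(r'../datanews/dslr/images/test', transform=transform)

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
valid_loader = DataLoader(valid_set, batch_size=32, shuffle=False)
test_loader = DataLoader(test_set, batch_size=32, shuffle=False)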
Supplement: dataset splitting methods in PyTorch, and the cause of the error TypeError: take(): argument 'index' (position 1) must be Tensor, not numpy.ndarray
When working with PyTorch, you inevitably need to split a dataset into a training set and a validation set; a common way to do this is the train_test_split method from sklearn.model_selection.
It is used as follows:
from sklearn.model_selection import train_test_split
import numpy as np
import torch
from torch.autograd import Variable
from torch.utils.data import DataLoader

traindata = np.load(train_path)  # image_num * W * H
trainlabel = np.load(train_label_path)
train_data = traindata[:, np.newaxis, ...]
train_label_data = trainlabel[:, np.newaxis, ...]

# Split training and validation sets at a 9:1 ratio
x_tra, x_val, y_tra, y_val = train_test_split(train_data, train_label_data, test_size=0.1, random_state=0)

x_tra = Variable(torch.from_numpy(x_tra))
x_tra = x_tra.float()
y_tra = Variable(torch.from_numpy(y_tra))
y_tra = y_tra.float()
x_val = Variable(torch.from_numpy(x_val))
x_val = x_val.float()
y_val = Variable(torch.from_numpy(y_val))
y_val = y_val.float()

# DataLoader for the training set
traindataset = torch.utils.data.TensorDataset(x_tra, y_tra)
trainloader = DataLoader(dataset=traindataset, num_workers=opt.threads, batch_size=8, shuffle=True)

# DataLoader for the validation set
validataset = torch.utils.data.TensorDataset(x_val, y_val)
valiloader = DataLoader(dataset=validataset, num_workers=opt.threads, batch_size=opt.batchSize, shuffle=True)
Note: if it is used as shown below, it raises TypeError: take(): argument 'index' (position 1) must be Tensor, not numpy.ndarray
from sklearn.model_selection import train_test_split
import numpy as np
import torch
from torch.autograd import Variable
from torch.utils.data import DataLoader

traindata = np.load(train_path)  # image_num * W * H
trainlabel = np.load(train_label_path)
train_data = traindata[:, np.newaxis, ...]
train_label_data = trainlabel[:, np.newaxis, ...]

x_train = Variable(torch.from_numpy(train_data))
x_train = x_train.float()
y_train = Variable(torch.from_numpy(train_label_data))
y_train = y_train.float()

# Split the original training data into training and validation sets at a 9:1 ratio,
# so that early stopping can be used later
x_tra, x_val, y_tra, y_val = train_test_split(x_train, y_train, test_size=0.1)
Cause of the error:
train_test_split expects x_train and y_train to be numpy.ndarray, not Tensor; keep this in mind.
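If you prefer to stay entirely inside PyTorch, torch.utils.data.random_split avoids the numpy/Tensor mismatch altogether. The sketch below is not from the original article; it assumes x_train and y_train are the float tensors built in the snippet above, and the batch size is illustrative.
from torch.utils.data import TensorDataset, DataLoader, random_split

# x_train / y_train are assumed to be the float tensors built above
dataset = TensorDataset(x_train, y_train)
n_val = int(0.1 * len(dataset))  # 9:1 split, matching the article
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
val_loader = DataLoader(val_set, batch_size=8, shuffle=False)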
The above is based on personal experience; I hope it serves as a useful reference.