
An Introduction to torch.utils.data.Dataset in PyTorch, with Practical Examples

 Updated: 2022-06-22 11:11:26   Author: 想變厲害的大白菜
PyTorch is an open-source Python machine learning library based on Torch and used for applications such as natural language processing. This article introduces torch.utils.data.Dataset in PyTorch and walks through hands-on examples; readers who need this material can refer to it.

I. Preface

Training a model usually starts with solving two problems: feeding the data in and preprocessing it. PyTorch provides two useful tools for this: the torch.utils.data.Dataset class and the torch.utils.data.DataLoader class.

The workflow is to first turn the raw data into a torch.utils.data.Dataset object, and then pass that Dataset as an argument to torch.utils.data.DataLoader to obtain a data loader. The loader returns one batch of data at a time for the model to train on.
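
To make the workflow concrete, here is a minimal sketch of the Dataset → DataLoader pipeline (the class name ToyDataset and the random data are purely illustrative):

import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    # A minimal Dataset wrapping a tensor of features and a tensor of labels
    def __init__(self, features, labels):
        self.features = features
        self.labels = labels

    def __len__(self):
        return len(self.features)                 # number of samples

    def __getitem__(self, idx):
        return self.features[idx], self.labels[idx]

features = torch.randn(10, 3)                     # 10 samples, 3 features each
labels = torch.randint(0, 2, (10,))               # 10 binary labels
loader = DataLoader(ToyDataset(features, labels), batch_size=4, shuffle=True)

for batch_x, batch_y in loader:                   # each iteration yields one batch
    print(batch_x.shape, batch_y.shape)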

In other words, PyTorch offers a very convenient data-reading mechanism: combining torch.utils.data.Dataset with DataLoader gives a data iterator. During training, this iterator emits one batch at a time, and preprocessing or data augmentation can be applied to the data as it is emitted.
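
A common pattern for that per-sample preprocessing is to accept an optional transform callable in the constructor and apply it inside __getitem__. The sketch below assumes this convention (the transform argument is a convention used by many PyTorch examples, not something the Dataset base class requires):

import torch
from torch.utils.data import Dataset

class TransformedDataset(Dataset):
    # Applies an optional transform to every sample as it is read
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        sample = self.data[idx]
        if self.transform is not None:
            sample = self.transform(sample)   # preprocessing / augmentation happens here
        return sample

# Example: add a little Gaussian noise to each sample as a toy "augmentation"
noisy = TransformedDataset(torch.randn(8, 3),
                           transform=lambda x: x + 0.01 * torch.randn_like(x))
print(noisy[0])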

This article focuses on understanding torch.utils.data.Dataset. For an introduction to DataLoader, please refer to my other article: "[PyTorch] A brief introduction to torch.utils.data.DataLoader and how to use it".

At the end of this article, hands-on code that combines torch.utils.data.Dataset with DataLoader to process data is given.

II. What is torch.utils.data.Dataset?

1. What is it for?

  1. PyTorch provides a data-reading mechanism built on two classes: torch.utils.data.Dataset and DataLoader.
  2. To customize how data is read, we subclass torch.utils.data.Dataset and wrap the subclass in a DataLoader.
  3. torch.utils.data.Dataset is a class named Dataset. By overriding the methods defined on it, we can implement many different ways of reading and preprocessing data.

2. What does it look like?

The source code of torch.utils.data.Dataset:

class Dataset(object):
    """An abstract class representing a Dataset.

    All other datasets should subclass it. All subclasses should override
    ``__len__``, that provides the size of the dataset, and ``__getitem__``,
    supporting integer indexing in range from 0 to len(self) exclusive.
    """

    def __getitem__(self, index):
        raise NotImplementedError

    def __len__(self):
        raise NotImplementedError

    def __add__(self, other):
        return ConcatDataset([self, other])

What the docstring says:

An abstract class representing a dataset.

All other datasets should subclass it. All subclasses should override __len__, which provides the size of the dataset, and __getitem__, which supports integer indexing in the range from 0 to len(self), exclusive.

Understanding:

In other words, Dataset is an abstract class for datasets and the parent class of every other dataset class (all other dataset classes should inherit from it). When subclassing it, we need to override the methods __len__ and __getitem__: __len__ reports the size of the dataset, and __getitem__ fetches a sample from the dataset by its index.
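
The __add__ method in the source above is also worth a quick note: adding two datasets with + produces a ConcatDataset that chains them together. A small sketch (using the built-in TensorDataset purely for convenience):

import torch
from torch.utils.data import TensorDataset, ConcatDataset

a = TensorDataset(torch.zeros(3, 2))   # 3 samples
b = TensorDataset(torch.ones(5, 2))    # 5 samples

combined = a + b                             # __add__ returns ConcatDataset([a, b])
print(isinstance(combined, ConcatDataset))   # True
print(len(combined))                         # 8 = 3 + 5
print(combined[4])                           # index 4 falls inside the second dataset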

III. Defining your own dataset class by subclassing torch.utils.data.Dataset

torch.utils.data.Dataset is the abstract class that represents a custom dataset. To define our own dataset class we subclass it, and all we need to do is override the two methods __len__ and __getitem__.

To define your own Dataset class, at least two methods must be overridden: __len__ and __getitem__.

  1. __len__ returns the size of the dataset.
  2. __getitem__ fetches a single sample from the dataset by index.

Below is a simple implementation of a dataset that returns data of type torch.Tensor:

from torch.utils.data import Dataset
import torch

class TensorDataset(Dataset):
    # TensorDataset inherits from Dataset and overrides __init__, __getitem__ and __len__
    # It wraps a pair of tensors into a tensor dataset
    # Samples can be fetched by index, and len() returns the size of the dataset

    def __init__(self, data_tensor, target_tensor):
        self.data_tensor = data_tensor
        self.target_tensor = target_tensor

    def __getitem__(self, index):
        return self.data_tensor[index], self.target_tensor[index]

    def __len__(self):
        return self.data_tensor.size(0)    # size(0) returns the size of the tensor's first dimension

# Generate data
data_tensor = torch.randn(4, 3)   # 4 x 3 tensor drawn from a standard normal distribution
print(data_tensor)
target_tensor = torch.rand(4)     # tensor of 4 elements drawn from a uniform distribution
print(target_tensor)

# Wrap the data into a Dataset (using the TensorDataset class)
tensor_dataset = TensorDataset(data_tensor, target_tensor)

# Samples can be accessed by index
print('tensor_data[0]: ', tensor_dataset[0])

# len() returns the size of the dataset
print('len of tensor_dataset: ', len(tensor_dataset))

Output:

tensor([[ 0.8618,  0.4644, -0.5929],
        [ 0.9566, -0.9067,  1.5781],
        [ 0.3943, -0.7775,  2.0366],
        [-1.2570, -0.3859, -0.3542]])
tensor([0.1363, 0.6545, 0.4345, 0.9928])
tensor_data[0]:  (tensor([ 0.8618,  0.4644, -0.5929]), tensor(0.1363))
len of tensor_dataset:  4
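
As an aside, PyTorch already ships an equivalent class, torch.utils.data.TensorDataset, so in practice there is rarely a need to hand-write the class above. A sketch of the built-in version used the same way:

import torch
from torch.utils.data import TensorDataset   # the built-in version

data_tensor = torch.randn(4, 3)
target_tensor = torch.rand(4)

tensor_dataset = TensorDataset(data_tensor, target_tensor)
print(tensor_dataset[0])      # (features of sample 0, target of sample 0)
print(len(tensor_dataset))    # 4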

IV. Why define your own dataset class?

Because by defining our own dataset class and overriding the methods on it, we can implement all kinds of (custom) ways of reading data.

For example, we can override __init__ so that it reads a CSV file with pd.read_csv:

from torch.utils.data import Dataset
import pandas as pd  # this package is used to read the CSV data

# Subclass Dataset to define our own dataset class, mydataset
class mydataset(Dataset):
    def __init__(self, csv_file):   # the self parameter is required; other parameters and their form depend on the program, e.g. (self, *inputs)
        self.csv_data = pd.read_csv(csv_file)
    def __len__(self):
        return len(self.csv_data)
    def __getitem__(self, idx):
        data = self.csv_data.values[idx]
        return data

data = mydataset('spambase.csv')
print(data[3])
print(len(data))

Output:

[0.000e+00 0.000e+00 0.000e+00 0.000e+00 6.300e-01 0.000e+00 3.100e-01
 6.300e-01 3.100e-01 6.300e-01 3.100e-01 3.100e-01 3.100e-01 0.000e+00
 0.000e+00 3.100e-01 0.000e+00 0.000e+00 3.180e+00 0.000e+00 3.100e-01
 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00 0.000e+00
 1.370e-01 0.000e+00 1.370e-01 0.000e+00 0.000e+00 3.537e+00 4.000e+01
 1.910e+02 1.000e+00]
4601

Key points:

  1. Your own dataset class needs to inherit from Dataset.
  2. The necessary magic methods need to be implemented:

In the __init__ method, read the data file.

In the __getitem__ method, support accessing the data by index.

In the __len__ method, return the size of the custom dataset, which makes later iteration convenient.
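
Putting these points together, one possible refinement of the class above (a sketch; it assumes the last CSV column holds the label, which is the case for the spambase.csv file used below) is to split features from the label and convert them to torch tensors inside __getitem__:

import pandas as pd
import torch
from torch.utils.data import Dataset

class CsvDataset(Dataset):
    # Reads the CSV once in __init__ and returns (features, label) tensors per sample
    def __init__(self, csv_file):
        self.csv_data = pd.read_csv(csv_file)
    def __len__(self):
        return len(self.csv_data)
    def __getitem__(self, idx):
        row = self.csv_data.values[idx]
        features = torch.tensor(row[:-1], dtype=torch.float32)  # all columns except the last
        label = torch.tensor(row[-1], dtype=torch.long)         # last column as the label
        return features, label

# usage: dataset = CsvDataset('spambase.csv'); x, y = dataset[0]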

V. Practice: reading and iterating over a dataset with torch.utils.data.Dataset + DataLoader

Example 1

The dataset spambase.csv is the spam email dataset from the UCI Machine Learning Repository; each record has 57 features and 1 label.
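
A quick sanity check of the file before using it (this assumes the CSV has a header row whose last column, named '58', is the label, which is what the code below relies on):

import pandas as pd

df = pd.read_csv('spambase.csv')
print(df.shape)                  # expected (4601, 58): 57 feature columns + 1 label column
print(df.columns[-1])            # the label column, named '58' in this file
print(df['58'].value_counts())   # class balance: spam (1) vs. ham (0)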

import torch.utils.data as Data
import pandas as pd  # this package is used to read the CSV data
import torch


# Subclass Dataset to define our own dataset class, mydataset
class mydataset(Data.Dataset):
    def __init__(self, csv_file):   # the self parameter is required; other parameters and their form depend on the program, e.g. (self, *inputs)
        data_csv = pd.DataFrame(pd.read_csv(csv_file))   # read the data
        self.csv_data = data_csv.drop(axis=1, columns='58', inplace=False)  # drop the last column (the label)
    def __len__(self):
        return len(self.csv_data)
    def __getitem__(self, idx):
        data = self.csv_data.values[idx]
        return data


data = mydataset('spambase.csv')
x = torch.tensor(data[:5])         # the first five samples
y = torch.tensor([1, 1, 1, 1, 1])  # labels


torch_dataset = Data.TensorDataset(x, y)  # wrap the given tensors into a dataset

loader = Data.DataLoader(
    # draw batch_size samples from the dataset on each iteration
    dataset = torch_dataset,       # torch TensorDataset format
    batch_size = 2,                # mini batch size
    shuffle=True,                  # whether to shuffle the data (shuffling is usually a good idea)
    num_workers=2,                 # number of worker processes used to read the data
)

def show_batch():
    for step, (batch_x, batch_y) in enumerate(loader):
        print("step:{}, batch_x:{}, batch_y:{}".format(step, batch_x, batch_y))

show_batch()
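
A practical aside before looking at the output: because num_workers=2 starts worker processes, on platforms that use the spawn start method (Windows, and macOS on recent Python versions) the call to show_batch() should sit under a main-module guard, roughly like this:

if __name__ == '__main__':
    show_batch()   # safe place to start DataLoader worker processes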

Output:

step:0, batch_x:tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 6.3000e-01, 0.0000e+00,
         3.1000e-01, 6.3000e-01, 3.1000e-01, 6.3000e-01, 3.1000e-01, 3.1000e-01,
         3.1000e-01, 0.0000e+00, 0.0000e+00, 3.1000e-01, 0.0000e+00, 0.0000e+00,
         3.1800e+00, 0.0000e+00, 3.1000e-01, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 1.3500e-01, 0.0000e+00, 1.3500e-01, 0.0000e+00, 0.0000e+00,
         3.5370e+00, 4.0000e+01, 1.9100e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 6.3000e-01, 0.0000e+00,
         3.1000e-01, 6.3000e-01, 3.1000e-01, 6.3000e-01, 3.1000e-01, 3.1000e-01,
         3.1000e-01, 0.0000e+00, 0.0000e+00, 3.1000e-01, 0.0000e+00, 0.0000e+00,
         3.1800e+00, 0.0000e+00, 3.1000e-01, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 1.3700e-01, 0.0000e+00, 1.3700e-01, 0.0000e+00, 0.0000e+00,
         3.5370e+00, 4.0000e+01, 1.9100e+02]], dtype=torch.float64), batch_y:tensor([1, 1])
step:1, batch_x:tensor([[2.1000e-01, 2.8000e-01, 5.0000e-01, 0.0000e+00, 1.4000e-01, 2.8000e-01,
         2.1000e-01, 7.0000e-02, 0.0000e+00, 9.4000e-01, 2.1000e-01, 7.9000e-01,
         6.5000e-01, 2.1000e-01, 1.4000e-01, 1.4000e-01, 7.0000e-02, 2.8000e-01,
         3.4700e+00, 0.0000e+00, 1.5900e+00, 0.0000e+00, 4.3000e-01, 4.3000e-01,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         7.0000e-02, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 1.3200e-01, 0.0000e+00, 3.7200e-01, 1.8000e-01, 4.8000e-02,
         5.1140e+00, 1.0100e+02, 1.0280e+03],
        [6.0000e-02, 0.0000e+00, 7.1000e-01, 0.0000e+00, 1.2300e+00, 1.9000e-01,
         1.9000e-01, 1.2000e-01, 6.4000e-01, 2.5000e-01, 3.8000e-01, 4.5000e-01,
         1.2000e-01, 0.0000e+00, 1.7500e+00, 6.0000e-02, 6.0000e-02, 1.0300e+00,
         1.3600e+00, 3.2000e-01, 5.1000e-01, 0.0000e+00, 1.1600e+00, 6.0000e-02,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 6.0000e-02, 0.0000e+00, 0.0000e+00,
         1.2000e-01, 0.0000e+00, 6.0000e-02, 6.0000e-02, 0.0000e+00, 0.0000e+00,
         1.0000e-02, 1.4300e-01, 0.0000e+00, 2.7600e-01, 1.8400e-01, 1.0000e-02,
         9.8210e+00, 4.8500e+02, 2.2590e+03]], dtype=torch.float64), batch_y:tensor([1, 1])
step:2, batch_x:tensor([[  0.0000,   0.6400,   0.6400,   0.0000,   0.3200,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.6400,   0.0000,   0.0000,
           0.0000,   0.3200,   0.0000,   1.2900,   1.9300,   0.0000,   0.9600,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,   0.0000,
           0.0000,   0.0000,   0.7780,   0.0000,   0.0000,   3.7560,  61.0000,
         278.0000]], dtype=torch.float64), batch_y:tensor([1])

There are 5 records in total; with batch_size set to 2, the data is split into three groups of sizes 2, 2 and 1.
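
If that trailing, smaller batch is unwanted (for instance because part of the model assumes a fixed batch size), DataLoader can simply discard it with the drop_last flag. A sketch reusing torch_dataset and the Data alias from the listing above:

loader = Data.DataLoader(
    dataset=torch_dataset,
    batch_size=2,
    shuffle=True,
    num_workers=2,
    drop_last=True,   # drop the final incomplete batch (here: the group of size 1)
)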

Example 2: a step further

import torch.utils.data as Data
import pandas as pd  # this package is used to read the CSV data
import numpy as np

# Subclass Dataset to define our own dataset class, mydataset
class mydataset(Data.Dataset):
    def __init__(self, csv_file):   # the self parameter is required; other parameters and their form depend on the program, e.g. (self, *inputs)
        # read the data
        frame = pd.DataFrame(pd.read_csv(csv_file))
        spam = frame[frame['58'] == 1]
        ham = frame[frame['58'] == 0]
        SpamNew = spam.drop(axis=1, columns='58', inplace=False)  # drop column '58' (the label); inplace=False leaves the original untouched and returns a new DataFrame
        HamNew = ham.drop(axis=1, columns='58', inplace=False)
        # data
        self.csv_data = np.vstack([np.array(SpamNew), np.array(HamNew)])  # stack the two arrays to form the feature matrix X
        # labels
        self.Label = np.array([1] * len(spam) + [0] * len(ham))  # build the list of label values y
    def __len__(self):
        return len(self.csv_data)
    def __getitem__(self, idx):
        data = self.csv_data[idx]
        label = self.Label[idx]
        return data, label


data = mydataset('spambase.csv')
print(len(data))

loader = Data.DataLoader(
    # draw batch_size samples from the dataset on each iteration
    dataset = data,                # our custom mydataset defined above
    batch_size = 460,              # mini batch size
    shuffle=True,                  # whether to shuffle the data (shuffling is usually a good idea)
    num_workers=2,                 # number of worker processes used to read the data
)

def show_batch():
    for step, (batch_x, batch_y) in enumerate(loader):
        print("step:{}, batch_x:{}, batch_y:{}".format(step, batch_x, batch_y))

show_batch()

Output:

4601
step:0, batch_x:tensor([[0.0000e+00, 2.4600e+00, 0.0000e+00,  ..., 2.1420e+00, 1.0000e+01,
         7.5000e+01],
        [0.0000e+00, 0.0000e+00, 1.6000e+00,  ..., 2.0650e+00, 1.2000e+01,
         9.5000e+01],
        [0.0000e+00, 0.0000e+00, 3.6000e-01,  ..., 3.7220e+00, 2.0000e+01,
         2.6800e+02],
        ...,
        [7.7000e-01, 3.8000e-01, 7.7000e-01,  ..., 1.4619e+01, 5.2500e+02,
         9.2100e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0000e+00, 1.0000e+00,
         5.0000e+00],
        [4.0000e-01, 1.8000e-01, 3.2000e-01,  ..., 3.3050e+00, 1.8100e+02,
         1.6130e+03]], dtype=torch.float64), batch_y:tensor([0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1,
        0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0,
        0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0,
        1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0,
        0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1,
        1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0,
        0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0,
        1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1,
        0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1,
        1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0,
        0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0,
        0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1,
        0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0,
        1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0,
        0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1,
        1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1,
        0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1,
        0, 1, 0, 1])
step:1, batch_x:tensor([[0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0000e+00, 1.0000e+00,
         2.0000e+00],
        [4.9000e-01, 0.0000e+00, 7.4000e-01,  ..., 3.9750e+00, 4.7000e+01,
         4.8500e+02],
        [0.0000e+00, 0.0000e+00, 7.1000e-01,  ..., 4.0220e+00, 9.7000e+01,
         5.4300e+02],
        ...,
        [0.0000e+00, 1.4000e-01, 1.4000e-01,  ..., 5.3310e+00, 8.0000e+01,
         1.0290e+03],
        [0.0000e+00, 0.0000e+00, 3.6000e-01,  ..., 3.1760e+00, 5.1000e+01,
         2.7000e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.1660e+00, 2.0000e+00,
         7.0000e+00]], dtype=torch.float64), batch_y:tensor([0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
        1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0,
        0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0,
        1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0,
        1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0,
        0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0,
        1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0,
        0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0,
        1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1,
        1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1,
        0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0,
        0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1,
        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0,
        0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0,
        0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1,
        1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1,
        1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1,
        0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0,
        0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1,
        1, 0, 0, 0])
step:2, batch_x:tensor([[0.0000e+00, 0.0000e+00, 1.4700e+00,  ..., 3.0000e+00, 3.3000e+01,
         1.7700e+02],
        [2.6000e-01, 4.6000e-01, 9.9000e-01,  ..., 1.3235e+01, 2.7200e+02,
         1.5750e+03],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.0450e+00, 6.0000e+00,
         4.5000e+01],
        ...,
        [4.0000e-01, 0.0000e+00, 0.0000e+00,  ..., 1.1940e+00, 5.0000e+00,
         1.2900e+02],
        [2.6000e-01, 0.0000e+00, 0.0000e+00,  ..., 1.8370e+00, 1.1000e+01,
         1.5800e+02],
        [5.0000e-02, 0.0000e+00, 1.0000e-01,  ..., 3.7150e+00, 1.0700e+02,
         1.3860e+03]], dtype=torch.float64), batch_y:tensor([1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0,
        1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0,
        0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0,
        0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0,
        0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0,
        0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0,
        0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1,
        0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0,
        1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0,
        0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0,
        1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0,
        1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1,
        0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0,
        0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1,
        1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0,
        1, 1, 0, 0])
step:3, batch_x:tensor([[2.6000e-01, 0.0000e+00, 5.3000e-01,  ..., 2.6460e+00, 7.7000e+01,
         1.7200e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.4280e+00, 5.0000e+00,
         1.7000e+01],
        [3.4000e-01, 0.0000e+00, 1.7000e+00,  ..., 6.6700e+02, 1.3330e+03,
         1.3340e+03],
        ...,
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0000e+00, 1.0000e+00,
         7.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.7010e+00, 2.0000e+01,
         1.8100e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 4.0000e+00, 1.1000e+01,
         3.6000e+01]], dtype=torch.float64), batch_y:tensor([0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
        1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1,
        0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
        1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0,
        0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
        0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0,
        1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0,
        1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0,
        0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0,
        0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1,
        0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0,
        0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1,
        0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1,
        1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0,
        1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0,
        1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0,
        1, 0, 0, 1])
step:4, batch_x:tensor([[  0.0000,   0.0000,   0.3100,  ...,   5.7080, 138.0000, 274.0000],
        [  0.0000,   0.0000,   0.3400,  ...,   2.2570,  17.0000, 158.0000],
        [  1.0400,   0.0000,   0.0000,  ...,   1.0000,   1.0000,  17.0000],
        ...,
        [  0.0000,   0.0000,   0.0000,  ...,   4.0000,  12.0000,  28.0000],
        [  0.3300,   0.0000,   0.0000,  ...,   1.7880,   6.0000,  93.0000],
        [  0.0000,  14.2800,   0.0000,  ...,   1.8000,   5.0000,   9.0000]],
       dtype=torch.float64), batch_y:tensor([1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1,
        0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1,
        0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0,
        1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1,
        0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0,
        1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0,
        0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1,
        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0,
        1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1,
        1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0,
        0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0,
        1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1,
        0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
        0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1,
        1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1,
        0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1,
        1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0,
        1, 1, 0, 0])
step:5, batch_x:tensor([[7.0000e-01, 0.0000e+00, 1.0500e+00,  ..., 1.1660e+00, 1.3000e+01,
         1.8900e+02],
        [0.0000e+00, 3.3600e+00, 1.9200e+00,  ..., 6.1370e+00, 1.0700e+02,
         1.7800e+02],
        [5.4000e-01, 0.0000e+00, 1.0800e+00,  ..., 5.4540e+00, 6.8000e+01,
         1.8000e+02],
        ...,
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 3.8330e+00, 9.0000e+00,
         2.3000e+01],
        [6.0000e-02, 6.5000e-01, 7.1000e-01,  ..., 4.7420e+00, 1.1700e+02,
         1.3420e+03],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.6110e+00, 1.2000e+01,
         4.7000e+01]], dtype=torch.float64), batch_y:tensor([1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1,
        1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0,
        0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0,
        0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1,
        0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1,
        0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0,
        0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1,
        1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1,
        0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1,
        1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1,
        0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1,
        0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0,
        0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1,
        0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1,
        0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
        1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0,
        0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 1, 1, 1])
step:6, batch_x:tensor([[0.0000e+00, 1.4280e+01, 0.0000e+00,  ..., 1.8000e+00, 5.0000e+00,
         9.0000e+00],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.9280e+00, 1.5000e+01,
         5.4000e+01],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0692e+01, 6.5000e+01,
         1.3900e+02],
        ...,
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.5000e+00, 5.0000e+00,
         2.4000e+01],
        [7.6000e-01, 1.9000e-01, 3.8000e-01,  ..., 3.7020e+00, 4.5000e+01,
         1.0700e+03],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.0000e+00, 1.2000e+01,
         8.8000e+01]], dtype=torch.float64), batch_y:tensor([0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1,
        0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1,
        0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0,
        1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1,
        1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0,
        0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1,
        0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0,
        0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0,
        0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1,
        0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
        1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1,
        0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0,
        0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1,
        1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1,
        1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1,
        1, 0, 1, 0])
step:7, batch_x:tensor([[0.0000e+00, 2.7000e-01, 0.0000e+00,  ..., 5.8020e+00, 4.3000e+01,
         4.1200e+02],
        [0.0000e+00, 3.5000e-01, 7.0000e-01,  ..., 3.6390e+00, 6.1000e+01,
         3.1300e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.5920e+00, 7.0000e+00,
         1.2900e+02],
        ...,
        [8.0000e-02, 1.6000e-01, 8.0000e-02,  ..., 2.7470e+00, 8.6000e+01,
         1.9950e+03],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.6130e+00, 1.1000e+01,
         7.1000e+01],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.9110e+00, 1.5000e+01,
         6.5000e+01]], dtype=torch.float64), batch_y:tensor([0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0,
        0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0,
        1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1,
        0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1,
        0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1,
        0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0,
        1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1,
        1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0,
        0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1,
        0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,
        0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0,
        0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0,
        1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1,
        0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1,
        0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1,
        1, 0, 0, 0])
step:8, batch_x:tensor([[1.7000e-01, 0.0000e+00, 1.7000e-01,  ..., 1.7960e+00, 1.2000e+01,
         4.5800e+02],
        [3.7000e-01, 0.0000e+00, 6.3000e-01,  ..., 1.1810e+00, 4.0000e+00,
         1.0400e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0000e+00, 1.0000e+00,
         7.0000e+00],
        ...,
        [2.3000e-01, 0.0000e+00, 4.7000e-01,  ..., 2.4200e+00, 1.2000e+01,
         3.3400e+02],
        [0.0000e+00, 0.0000e+00, 1.2900e+00,  ..., 1.3500e+00, 4.0000e+00,
         2.7000e+01],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.3730e+00, 1.1000e+01,
         1.6900e+02]], dtype=torch.float64), batch_y:tensor([1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1,
        0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0,
        1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0,
        0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1,
        1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0,
        0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0,
        0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0,
        0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1,
        0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
        1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1,
        0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0,
        1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
        0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0,
        1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0,
        0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1,
        1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0,
        1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0,
        0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0,
        1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1,
        0, 0, 0, 0])
step:9, batch_x:tensor([[0.0000e+00, 6.3000e-01, 0.0000e+00,  ..., 2.2150e+00, 2.2000e+01,
         1.1300e+02],
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.0000e+00, 1.0000e+00,
         5.0000e+00],
        [0.0000e+00, 0.0000e+00, 2.0000e-01,  ..., 1.1870e+00, 1.1000e+01,
         1.1400e+02],
        ...,
        [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 2.3070e+00, 1.6000e+01,
         3.0000e+01],
        [5.1000e-01, 4.3000e-01, 2.9000e-01,  ..., 6.5900e+00, 7.3900e+02,
         2.3330e+03],
        [6.8000e-01, 6.8000e-01, 6.8000e-01,  ..., 2.4720e+00, 9.0000e+00,
         8.9000e+01]], dtype=torch.float64), batch_y:tensor([0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0,
        0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0,
        0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1,
        1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0,
        0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0,
        0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1,
        1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1,
        0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1,
        0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1,
        1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0,
        1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0,
        0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1,
        1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0,
        0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0,
        1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0,
        1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0,
        1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0,
        1, 1, 1, 1])
step:10, batch_x:tensor([[0.0000e+00, 2.5000e-01, 7.5000e-01, 0.0000e+00, 1.0000e+00, 2.5000e-01,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 2.5000e-01, 2.5000e-01,
         1.2500e+00, 0.0000e+00, 0.0000e+00, 2.5000e-01, 0.0000e+00, 1.2500e+00,
         2.5100e+00, 0.0000e+00, 1.7500e+00, 0.0000e+00, 2.5000e-01, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 2.5000e-01, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
         0.0000e+00, 0.0000e+00, 0.0000e+00, 4.2000e-02, 0.0000e+00, 0.0000e+00,
         1.2040e+00, 7.0000e+00, 1.1800e+02]], dtype=torch.float64), batch_y:tensor([0])

There are 4601 records in total; with batch_size = 460 they are divided into 11 groups: each of the first 10 groups contains 460 records, and the last group contains 1.
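
The number of batches can also be read straight off the loader: for a map-style dataset, len(loader) equals ceil(len(dataset) / batch_size) when drop_last=False, so for the loader and data defined above:

import math

print(len(loader))                  # 11
print(math.ceil(len(data) / 460))   # 11 as well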

Reference links

  1. Usage examples of the torch.Tensor.size() method
  2. PyTorch notes 05: custom data reading with torch.utils.data.Dataset and DataLoader
  3. Creating trainable datasets in PyTorch (torch.utils.data)
  4. First steps with PyTorch: (1) using the Dataset class
  5. torch.utils.data.Dataset and torch.utils.data.DataLoader in PyTorch

Summary

This concludes this introduction to torch.utils.data.Dataset in PyTorch and its hands-on examples. For more content on PyTorch and torch.utils.data.Dataset, please search 腳本之家's earlier articles, and we hope you will continue to support 腳本之家!

Copyright notice: this is an original article by the CSDN blogger 想变厉害的大白菜, released under the CC 4.0 BY-SA license; please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/weixin_44211968/article/details/123744513
