Upsampling and Inverse (Un-) Operations in PyTorch, Explained
import torch.nn.functional as F
import torch.nn as nn
F.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)
r"""Upsamples the input to either the given :attr:`size` or the given
:attr:`scale_factor`.

The algorithm used for upsampling is determined by :attr:`mode`.

Currently temporal, spatial and volumetric upsampling are supported, i.e.
expected inputs are 3-D, 4-D or 5-D in shape.

The input dimensions are interpreted in the form:
`mini-batch x channels x [optional depth] x [optional height] x width`.

The modes available for upsampling are: `nearest`, `linear` (3D-only),
`bilinear` (4D-only), `trilinear` (5D-only)

Args:
    input (Tensor): the input tensor
    size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int]):
        output spatial size.
    scale_factor (int): multiplier for spatial size. Has to be an integer.
    mode (string): algorithm used for upsampling:
        'nearest' | 'linear' | 'bilinear' | 'trilinear'. Default: 'nearest'
    align_corners (bool, optional): if True, the corner pixels of the input
        and output tensors are aligned, and thus preserving the values at
        those pixels. This only has effect when :attr:`mode` is `linear`,
        `bilinear`, or `trilinear`. Default: False

.. warning::
    With ``align_corners = True``, the linearly interpolating modes
    (`linear`, `bilinear`, and `trilinear`) don't proportionally align the
    output and input pixels, and thus the output values can depend on the
    input size. This was the default behavior for these modes up to version
    0.3.1. Since then, the default behavior is ``align_corners = False``.
    See :class:`~torch.nn.Upsample` for concrete examples on how this
    affects the outputs.
"""
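Note that `F.upsample` has since been deprecated in favor of `F.interpolate`, which takes the same arguments. A minimal sketch of both the `nearest` and `bilinear` modes on a 4-D input:

```python
import torch
import torch.nn.functional as F

# A 4-D input: batch of 1, 1 channel, 2x2 spatial map.
x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])

# 'nearest' simply repeats pixels; align_corners does not apply.
up_nearest = F.interpolate(x, scale_factor=2, mode='nearest')

# 'bilinear' interpolates; align_corners=True preserves the corner values.
up_bilinear = F.interpolate(x, scale_factor=2, mode='bilinear',
                            align_corners=True)

print(up_nearest.shape)  # torch.Size([1, 1, 4, 4])
print(up_nearest)
print(up_bilinear)
```

With `align_corners=True` the four corner pixels of the output equal the four input values exactly; with the default `align_corners=False` they generally do not, which is the behavioral difference the warning above describes.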
nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1)
"""
Parameters:
    in_channels (int) – Number of channels in the input image
    out_channels (int) – Number of channels produced by the convolution
    kernel_size (int or tuple) – Size of the convolving kernel
    stride (int or tuple, optional) – Stride of the convolution. Default: 1
    padding (int or tuple, optional) – kernel_size - 1 - padding zero-padding
        will be added to both sides of each dimension in the input. Default: 0
    output_padding (int or tuple, optional) – Additional size added to one
        side of each dimension in the output shape. Default: 0
    groups (int, optional) – Number of blocked connections from input
        channels to output channels. Default: 1
    bias (bool, optional) – If True, adds a learnable bias to the output.
        Default: True
    dilation (int or tuple, optional) – Spacing between kernel elements.
        Default: 1
"""
Output size calculation:

H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1

(and analogously for W_out).
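The formula can be verified directly by running a transposed convolution and checking the resulting shape. A minimal sketch (the channel counts and input size are arbitrary):

```python
import torch
import torch.nn as nn

# H_out = (H_in - 1)*stride - 2*padding
#         + dilation*(kernel_size - 1) + output_padding + 1
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                            kernel_size=3, stride=2,
                            padding=1, output_padding=1)

x = torch.randn(1, 16, 10, 10)
y = deconv(x)

# (10 - 1)*2 - 2*1 + 1*(3 - 1) + 1 + 1 = 20
print(y.shape)  # torch.Size([1, 8, 20, 20])
```

This particular combination (stride=2, padding=1, output_padding=1 with a 3x3 kernel) exactly doubles the spatial size, which is why it is a common choice in decoder networks.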
Definition: nn.MaxUnpool2d(kernel_size, stride=None, padding=0)
Forward call:
def forward(self, input, indices, output_size=None):
    return F.max_unpool2d(input, indices, self.kernel_size, self.stride,
                          self.padding, output_size)
r"""Computes a partial inverse of :class:`MaxPool2d`.

:class:`MaxPool2d` is not fully invertible, since the non-maximal values
are lost. :class:`MaxUnpool2d` takes in as input the output of
:class:`MaxPool2d` including the indices of the maximal values and
computes a partial inverse in which all non-maximal values are set to
zero.

.. note:: `MaxPool2d` can map several input sizes to the same output
    sizes. Hence, the inversion process can get ambiguous. To accommodate
    this, you can provide the needed output size as an additional argument
    `output_size` in the forward call. See the Inputs and Example below.

Args:
    kernel_size (int or tuple): Size of the max pooling window.
    stride (int or tuple): Stride of the max pooling window.
        It is set to ``kernel_size`` by default.
    padding (int or tuple): Padding that was added to the input

Inputs:
    - `input`: the input Tensor to invert
    - `indices`: the indices given out by `MaxPool2d`
    - `output_size` (optional): a `torch.Size` that specifies the targeted
      output size

Shape:
    - Input: :math:`(N, C, H_{in}, W_{in})`
    - Output: :math:`(N, C, H_{out}, W_{out})`, where

      H_out = (H_in - 1) * stride[0] - 2 * padding[0] + kernel_size[0]
      W_out = (W_in - 1) * stride[1] - 2 * padding[1] + kernel_size[1]

Example: see below.
"""
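The pool/unpool round trip can be sketched as follows: pooling must be created with `return_indices=True` so that the positions of the maxima are available to the unpooling layer, and everything that was not a maximum comes back as zero.

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)

x = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                    [ 5.,  6.,  7.,  8.],
                    [ 9., 10., 11., 12.],
                    [13., 14., 15., 16.]]]])

# out holds the maxima (6, 8, 14, 16); indices holds their flat positions.
out, indices = pool(x)

# The maxima return to their original positions; all other entries are 0.
restored = unpool(out, indices)
print(out)
print(restored)
```

This is the "partial inverse" the docstring describes: the restored tensor has the right shape, but the 12 non-maximal values are gone for good.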
F.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)
Usage is the same as the module form above.
def max_unpool2d(input, indices, kernel_size, stride=None, padding=0,
                 output_size=None):
    r"""Computes a partial inverse of :class:`MaxPool2d`.

    See :class:`~torch.nn.MaxUnpool2d` for details.
    """
    pass
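The `output_size` argument matters when pooling was ambiguous. A 5x5 input pooled with a 2x2 window floors down to 2x2 (exactly as a 4x4 input would), so unpooling alone cannot tell the two apart; passing `output_size` resolves it. A minimal sketch with the functional form:

```python
import torch
import torch.nn.functional as F

# 5x5 input: F.max_pool2d with k=2, s=2 floors down to 2x2,
# discarding the last row and column.
x = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)
out, indices = F.max_pool2d(x, kernel_size=2, stride=2,
                            return_indices=True)

# Without output_size the unpooled result would be 4x4; passing the
# original size restores the full 5x5 shape.
restored = F.max_unpool2d(out, indices, kernel_size=2, stride=2,
                          output_size=x.size())
print(restored.shape)  # torch.Size([1, 1, 5, 5])
```

Only the shape is recovered, of course: the restored tensor still contains just the four pooled maxima, with zeros everywhere else.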
That is the full content of this walkthrough of upsampling and the various inverse operations in PyTorch. I hope it serves as a useful reference, and I hope you will continue to support 腳本之家.