PyTorch Deep Learning: addmm() and addmm_() Usage Explained
I. Function Explanation
This function is defined in torch/_C/_VariableFunctions.py, and what it implements is the following formula:

out = beta * mat + alpha * (mat1 @ mat2)

In other words, it takes five parameters: every element of mat is scaled by beta, mat1 and mat2 are matrix-multiplied (rows of the left operand against columns of the right), that product is scaled by alpha, and the two results are added together. If that description feels abstract on its own, the code below will make it clear.
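First, a minimal sanity check (my own addition, not from the original article; the shapes and scalar values are arbitrary illustrations) that torch.addmm computes exactly the formula above:

import torch

M = torch.randn(2, 4)   # mat:  (n, p), the matrix that gets added
A = torch.randn(2, 3)   # mat1: (n, m)
B = torch.randn(3, 4)   # mat2: (m, p)

out = torch.addmm(M, A, B, beta=0.5, alpha=2.0)

# addmm is a fused version of this hand-written expression
reference = 0.5 * M + 2.0 * (A @ B)
print(torch.allclose(out, reference))  # True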
def addmm(self, beta=1, mat, alpha=1, mat1, mat2, out=None):  # real signature unknown; restored from __doc__
    """
    addmm(beta=1, mat, alpha=1, mat1, mat2, out=None) -> Tensor

    Performs a matrix multiplication of the matrices :attr:`mat1` and :attr:`mat2`.
    The matrix :attr:`mat` is added to the final result.

    If :attr:`mat1` is a :math:`(n \times m)` tensor, :attr:`mat2` is a
    :math:`(m \times p)` tensor, then :attr:`mat` must be
    :ref:`broadcastable <broadcasting-semantics>` with a :math:`(n \times p)` tensor
    and :attr:`out` will be a :math:`(n \times p)` tensor.

    :attr:`alpha` and :attr:`beta` are scaling factors on the matrix-matrix product
    between :attr:`mat1` and :attr:`mat2` and the added matrix :attr:`mat` respectively.

    .. math::
        out = \beta\ mat + \alpha\ (mat1 \mathbin{@} mat2)

    For inputs of type `FloatTensor` or `DoubleTensor`, arguments :attr:`beta` and
    :attr:`alpha` must be real numbers, otherwise they should be integers.

    Args:
        beta (Number, optional): multiplier for :attr:`mat` (:math:`\beta`)
        mat (Tensor): matrix to be added
        alpha (Number, optional): multiplier for :math:`mat1 @ mat2` (:math:`\alpha`)
        mat1 (Tensor): the first matrix to be multiplied
        mat2 (Tensor): the second matrix to be multiplied
        out (Tensor, optional): the output tensor

    Example::

        >>> M = torch.randn(2, 3)
        >>> mat1 = torch.randn(2, 3)
        >>> mat2 = torch.randn(3, 3)
        >>> torch.addmm(M, mat1, mat2)
        tensor([[-4.8716,  1.4671, -1.3746],
                [ 0.7573, -3.9555, -2.8681]])
    """
    pass
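One consequence of the broadcasting rule in the docstring is worth a quick illustration (again my addition, not part of the original article): mat only needs to be broadcastable with the (n × p) result, so a 1-D bias vector works as mat. This is exactly the fully connected layer pattern, bias + x @ W.t():

import torch

x = torch.randn(5, 3)   # batch of 5 samples, 3 features
W = torch.randn(4, 3)   # weight matrix of a hypothetical linear layer
bias = torch.randn(4)   # 1-D bias, broadcast across the (5, 4) result

y = torch.addmm(bias, x, W.t())
print(torch.allclose(y, bias + x @ W.t()))  # True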
II. Code Example
1. First, here is the complete code. Feel free to copy, paste, and run it; each part is explained step by step afterwards.
"""
@author:nickhuang1996
"""
import torch
rectangle_height = 3
rectangle_width = 3
inputs = torch.randn(rectangle_height, rectangle_width)
for i in range(rectangle_height):
    inputs[i] = i * torch.ones(rectangle_width)  # fill row i with the value i
'''
inputs and its transpose
--> inputs   = tensor([[0., 0., 0.],
                       [1., 1., 1.],
                       [2., 2., 2.]])
--> inputs_t = tensor([[0., 1., 2.],
                       [0., 1., 2.],
                       [0., 1., 2.]])
'''
print("inputs:\n", inputs)
inputs_t = inputs.t()
print("inputs_t:\n", inputs_t)
'''
inputs_t @ inputs_t   [[0., 1., 2.],     [[0., 1., 2.],     [[0., 3., 6.],
                    =  [0., 1., 2.],  @   [0., 1., 2.],  =   [0., 3., 6.],
                       [0., 1., 2.]]      [0., 1., 2.]]      [0., 3., 6.]]
'''
'''a, b, c and d = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
a = torch.addmm(input=inputs, mat1=inputs_t, mat2=inputs_t)
b = inputs.addmm(mat1=inputs_t, mat2=inputs_t)
c = torch.addmm(input=inputs, beta=1, mat1=inputs_t, mat2=inputs_t, alpha=1)
d = inputs.addmm(beta=1, mat1=inputs_t, mat2=inputs_t, alpha=1)
'''e and f = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
e = torch.addmm(inputs, inputs_t, inputs_t)
f = inputs.addmm(inputs_t, inputs_t)
'''1 * inputs + 1 * (inputs_t @ inputs_t)'''
g = inputs.addmm(1, inputs_t, inputs_t)
'''2 * inputs + 1 * (inputs_t @ inputs_t)'''
g2 = inputs.addmm(2, inputs_t, inputs_t)
'''h = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
h = inputs.addmm(1, 1, inputs_t, inputs_t)
'''h12 = 1 * inputs + 2 * (inputs_t @ inputs_t)'''
h12 = inputs.addmm(1, 2, inputs_t, inputs_t)
'''h21 = 2 * inputs + 1 * (inputs_t @ inputs_t)'''
h21 = inputs.addmm(2, 1, inputs_t, inputs_t)
print("a:\n", a)
print("b:\n", b)
print("c:\n", c)
print("d:\n", d)
print("e:\n", e)
print("f:\n", f)
print("g:\n", g)
print("g2:\n", g2)
print("h:\n", h)
print("h12:\n", h12)
print("h21:\n", h21)
print("inputs:\n", inputs)
'''inputs = 1 * inputs - 2 * (inputs @ inputs_t)'''
'''
inputs @ inputs_t   [[0., 0., 0.],     [[0., 1., 2.],     [[0.,  0.,   0.],
                  =  [1., 1., 1.],  @   [0., 1., 2.],  =   [0.,  3.,   6.],
                     [2., 2., 2.]]      [0., 1., 2.]]      [0.,  6.,  12.]]
'''
inputs.addmm_(1, -2, inputs, inputs_t) # In-place
print("inputs:\n", inputs)2.其中
inputs是一個3×3的矩陣,為
tensor([[0., 0., 0.],
        [1., 1., 1.],
        [2., 2., 2.]])

inputs_t, also a 3×3 matrix, is the transpose of inputs:
tensor([[0., 1., 2.],
        [0., 1., 2.],
        [0., 1., 2.]])

and inputs_t @ inputs_t is:
'''
inputs_t @ inputs_t   [[0., 1., 2.],     [[0., 1., 2.],     [[0., 3., 6.],
                    =  [0., 1., 2.],  @   [0., 1., 2.],  =   [0., 3., 6.],
                       [0., 1., 2.]]      [0., 1., 2.]]      [0., 3., 6.]]
'''

3. Variables a, b, c and d show the fully explicit call, with every argument named. Note that the input positional argument can also move in front of the function, as the tensor the method is called on, i.e.
torch.addmm(inputs, mat1, mat2) == inputs.addmm(mat1, mat2)
The formula computed is:

1 × inputs + 1 × (inputs_t @ inputs_t)
= [[0., 0., 0.],     [[0., 3., 6.],     [[0., 3., 6.],
   [1., 1., 1.],  +   [0., 3., 6.],  =   [1., 4., 7.],
   [2., 2., 2.]]      [0., 3., 6.]]      [2., 5., 8.]]

'''a, b, c and d = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
a = torch.addmm(input=inputs, mat1=inputs_t, mat2=inputs_t)
b = inputs.addmm(mat1=inputs_t, mat2=inputs_t)
c = torch.addmm(input=inputs, beta=1, mat1=inputs_t, mat2=inputs_t, alpha=1)
d = inputs.addmm(beta=1, mat1=inputs_t, mat2=inputs_t, alpha=1)
a:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
b:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
c:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
d:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])

4. The next example shows the positional flexibility of the input argument even more clearly; beta and alpha are both left at their defaults:
The formula computed is:

1 × inputs + 1 × (inputs_t @ inputs_t)

'''e and f = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
e = torch.addmm(inputs, inputs_t, inputs_t)
f = inputs.addmm(inputs_t, inputs_t)
e:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
f:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])

5. Adding one more positional argument actually supplies the beta parameter:
The formulas computed are:

g  = 1 × inputs + 1 × (inputs_t @ inputs_t)
g2 = 2 × inputs + 1 × (inputs_t @ inputs_t)

'''1 * inputs + 1 * (inputs_t @ inputs_t)'''
g = inputs.addmm(1, inputs_t, inputs_t)
'''2 * inputs + 1 * (inputs_t @ inputs_t)'''
g2 = inputs.addmm(2, inputs_t, inputs_t)
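A compatibility note (mine, not the original author's): passing beta as a bare positional argument like this has been deprecated for several PyTorch releases and is rejected in recent versions. If the lines above raise an error on your install, the keyword form is equivalent:

g = inputs.addmm(inputs_t, inputs_t, beta=1)   # 1 * inputs + 1 * (inputs_t @ inputs_t)
g2 = inputs.addmm(inputs_t, inputs_t, beta=2)  # 2 * inputs + 1 * (inputs_t @ inputs_t)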
g:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
g2:
tensor([[ 0.,  3.,  6.],
        [ 2.,  5.,  8.],
        [ 4.,  7., 10.]])

6. Adding yet another positional argument supplies alpha as well:
The formulas computed are:

h   = 1 × inputs + 1 × (inputs_t @ inputs_t)
h12 = 1 × inputs + 2 × (inputs_t @ inputs_t)
h21 = 2 × inputs + 1 × (inputs_t @ inputs_t)

'''h = 1 * inputs + 1 * (inputs_t @ inputs_t)'''
h = inputs.addmm(1, 1, inputs_t, inputs_t)
'''h12 = 1 * inputs + 2 * (inputs_t @ inputs_t)'''
h12 = inputs.addmm(1, 2, inputs_t, inputs_t)
'''h21 = 2 * inputs + 1 * (inputs_t @ inputs_t)'''
h21 = inputs.addmm(2, 1, inputs_t, inputs_t)
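The same compatibility hedge as above applies to this two-scalar positional form, whose order is (beta, alpha) and easy to get backwards. On recent PyTorch releases, use keywords instead:

h = inputs.addmm(inputs_t, inputs_t, beta=1, alpha=1)
h12 = inputs.addmm(inputs_t, inputs_t, beta=1, alpha=2)
h21 = inputs.addmm(inputs_t, inputs_t, beta=2, alpha=1)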
h:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
h12:
tensor([[ 0.,  6., 12.],
        [ 1.,  7., 13.],
        [ 2.,  8., 14.]])
h21:
tensor([[ 0.,  3.,  6.],
        [ 2.,  5.,  8.],
        [ 4.,  7., 10.]])

7. Of course, addmm() is an out-of-place operation, so none of the steps above changed inputs itself; it is still
inputs:
tensor([[0., 0., 0.],
        [1., 1., 1.],
        [2., 2., 2.]])

8. addmm_() computes the same thing as addmm(); the difference is that addmm_() is an in-place operation. It modifies the original tensor directly, as if the changed result were assigned back to the original variable. For example:
After the call, inputs already holds the new value; there is no need to write some_variable = inputs.addmm_(...), because inputs is the modified tensor!

Here, inputs @ inputs_t is:
'''
inputs @ inputs_t   [[0., 0., 0.],     [[0., 1., 2.],     [[0.,  0.,   0.],
                  =  [1., 1., 1.],  @   [0., 1., 2.],  =   [0.,  3.,   6.],
                     [2., 2., 2.]]      [0., 1., 2.]]      [0.,  6.,  12.]]
'''

The formula computed is:

inputs = 1 × inputs - 2 × (inputs @ inputs_t)
'''inputs = 1 * inputs - 2 * (inputs @ inputs_t)'''
inputs.addmm_(1, -2, inputs, inputs_t)  # In-place
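A sketch of the modern keyword form (same compatibility hedge about positional scalars as earlier; note also that addmm_ returns the mutated tensor itself):

ret = inputs.addmm_(inputs, inputs_t, beta=1, alpha=-2)  # In-place
print(ret is inputs)  # True: addmm_ modifies inputs and returns the same object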
inputs:
tensor([[  0.,   0.,   0.],
        [  1.,  -5., -11.],
        [  2., -10., -22.]])

III. Full Program Output
inputs:
tensor([[0., 0., 0.],
        [1., 1., 1.],
        [2., 2., 2.]])
inputs_t:
tensor([[0., 1., 2.],
        [0., 1., 2.],
        [0., 1., 2.]])
a:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
b:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
c:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
d:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
e:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
f:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
g:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
g2:
tensor([[ 0.,  3.,  6.],
        [ 2.,  5.,  8.],
        [ 4.,  7., 10.]])
h:
tensor([[0., 3., 6.],
        [1., 4., 7.],
        [2., 5., 8.]])
h12:
tensor([[ 0.,  6., 12.],
        [ 1.,  7., 13.],
        [ 2.,  8., 14.]])
h21:
tensor([[ 0.,  3.,  6.],
        [ 2.,  5.,  8.],
        [ 4.,  7., 10.]])
inputs:
tensor([[0., 0., 0.],
        [1., 1., 1.],
        [2., 2., 2.]])
inputs:
tensor([[  0.,   0.,   0.],
        [  1.,  -5., -11.],
        [  2., -10., -22.]])

This concludes the detailed walkthrough of addmm() and addmm_() usage in PyTorch.