How to replace the PANet layers in YOLOv5 with BiFPN
This article uses YOLOv5 v6.1 as the example.
I. Add
1. Append the following code to common.py:
# BiFPN with learnable weights for the contribution of each branch
# Two-branch add
import torch  # both imports are already present at the top of common.py
import torch.nn as nn

class BiFPN_Add2(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add2, self).__init__()
        # nn.Parameter turns a plain (non-trainable) Tensor into a trainable parameter
        # and registers it on the host module, so model.parameters() includes it
        # and it is updated automatically during optimization
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)  # normalize the weights
        # Fast normalized fusion
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1]))
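As a quick standalone sanity check of the fast normalized fusion above (the class body is repeated inside the snippet so it runs on its own; the 64-channel, 32x32 shapes are arbitrary): with the initial weights [1, 1], each branch contributes roughly half of the fused map.

```python
import torch
import torch.nn as nn

class BiFPN_Add2(nn.Module):
    # same definition as in common.py above, repeated so this snippet is self-contained
    def __init__(self, c1, c2):
        super(BiFPN_Add2, self).__init__()
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1]))

m = BiFPN_Add2(64, 64)
a = torch.randn(1, 64, 32, 32)
b = torch.randn(1, 64, 32, 32)
out = m([a, b])
print(out.shape)  # spatial size and channels are preserved: (1, 64, 32, 32)

# with the default w = [1, 1], the normalized weights are ~[0.5, 0.5]
weight = m.w / (m.w.sum() + m.epsilon)
print(weight)
```

Because the weights are normalized by their sum, the fused map stays on the same scale as the inputs regardless of how the raw `w` values drift during training.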
# Three-branch add
class BiFPN_Add3(nn.Module):
    def __init__(self, c1, c2):
        super(BiFPN_Add3, self).__init__()
        self.w = nn.Parameter(torch.ones(3, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001
        self.conv = nn.Conv2d(c1, c2, kernel_size=1, stride=1, padding=0)
        self.silu = nn.SiLU()

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)  # normalize the weights
        # Fast normalized fusion
        return self.conv(self.silu(weight[0] * x[0] + weight[1] * x[1] + weight[2] * x[2]))

2. Modify yolov5s.yaml:
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, BiFPN_Add2, [256, 256]],  # cat backbone P4
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, BiFPN_Add2, [128, 128]],  # cat backbone P3
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [512, 3, 2]],  # adjust the channel count so the BiFPN add is valid
   [[-1, 13, 6], 1, BiFPN_Add3, [256, 256]],  # cat P4 <--- BiFPN change; note: v5s channels are half the default
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, BiFPN_Add2, [256, 256]],  # cat head P5
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
3. In yolo.py, find the `elif m is Concat:` branch of the parse_model function and add the BiFPN_Add cases after it:

# handle the BiFPN_Add layers
elif m in [BiFPN_Add2, BiFPN_Add3]:
    c2 = max([ch[x] for x in f])

4. Modify train.py to add the BiFPN weight parameters to the optimizer.
Add the `w` parameter defined in BiFPN_Add2 and BiFPN_Add3 to the `g1` parameter group:

# BiFPN_Add fusion weights
elif isinstance(v, BiFPN_Add2) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
    g1.append(v.w)
elif isinstance(v, BiFPN_Add3) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
    g1.append(v.w)
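What this loop achieves can be sketched with a minimal stand-in module (`FusionStub` below is hypothetical and only mimics the `w` attribute of the BiFPN classes): every fusion-weight tensor is collected into the `g1` group and handed to the optimizer, so the branch weights are trained like any other parameter.

```python
import torch
import torch.nn as nn

# hypothetical stand-in for BiFPN_Add2/BiFPN_Add3: only the learnable
# fusion weights matter for this sketch
class FusionStub(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n, dtype=torch.float32), requires_grad=True)

model = nn.Sequential(FusionStub(2), FusionStub(3))

g1 = []  # parameter group for the fusion weights, mirroring train.py
for v in model.modules():
    if isinstance(v, FusionStub) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
        g1.append(v.w)

# the collected weights can now be passed to the optimizer
optimizer = torch.optim.SGD(g1, lr=0.01)
print(len(g1))  # both w tensors were collected
```

If the `w` tensors were skipped here (and not covered by another parameter group), the fusion weights would stay frozen at their initial values for the whole training run.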
Then import the two classes in train.py (e.g. `from models.common import BiFPN_Add2, BiFPN_Add3`).

II. Concat
1. Append the following code to common.py:
# BiFPN with learnable weights for the contribution of each branch
# Two-branch concat
class BiFPN_Concat2(nn.Module):
    def __init__(self, dimension=1):
        super(BiFPN_Concat2, self).__init__()
        self.d = dimension
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)  # normalize the weights
        # Fast normalized fusion
        x = [weight[0] * x[0], weight[1] * x[1]]
        return torch.cat(x, self.d)
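A minimal standalone check of the weighted concat (the class body is repeated so the snippet runs on its own; the shapes are arbitrary): unlike the Add variant, the inputs may have different channel counts, and the output channel count is their sum, which is why parse_model computes `c2` with `sum` for the Concat classes.

```python
import torch
import torch.nn as nn

class BiFPN_Concat2(nn.Module):
    # same definition as in common.py above, repeated so this snippet is self-contained
    def __init__(self, dimension=1):
        super(BiFPN_Concat2, self).__init__()
        self.d = dimension
        self.w = nn.Parameter(torch.ones(2, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001

    def forward(self, x):
        weight = self.w / (torch.sum(self.w, dim=0) + self.epsilon)
        return torch.cat([weight[0] * x[0], weight[1] * x[1]], self.d)

m = BiFPN_Concat2()
a = torch.randn(1, 128, 40, 40)  # e.g. an upsampled head feature
b = torch.randn(1, 256, 40, 40)  # e.g. a backbone feature
out = m([a, b])
print(out.shape)  # channels add up: (1, 384, 40, 40)
```

Because each branch is only scaled before concatenation, no 1x1 conv is needed inside the module; the following C3 block consumes the concatenated channels.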
# Three-branch concat
class BiFPN_Concat3(nn.Module):
    def __init__(self, dimension=1):
        super(BiFPN_Concat3, self).__init__()
        self.d = dimension
        # nn.Parameter turns a plain (non-trainable) Tensor into a trainable parameter
        # and registers it on the host module, so model.parameters() includes it
        # and it is updated automatically during optimization
        self.w = nn.Parameter(torch.ones(3, dtype=torch.float32), requires_grad=True)
        self.epsilon = 0.0001

    def forward(self, x):
        w = self.w
        weight = w / (torch.sum(w, dim=0) + self.epsilon)  # normalize the weights
        # Fast normalized fusion
        x = [weight[0] * x[0], weight[1] * x[1], weight[2] * x[2]]
        return torch.cat(x, self.d)

2. Modify yolov5s.yaml:
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, C3, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 6, C3, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, C3, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 3, C3, [1024]],
   [-1, 1, SPPF, [1024, 5]],  # 9
  ]

# YOLOv5 v6.0 BiFPN head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, BiFPN_Concat2, [1]],  # cat backbone P4 <--- BiFPN change
   [-1, 3, C3, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, BiFPN_Concat2, [1]],  # cat backbone P3 <--- BiFPN change
   [-1, 3, C3, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14, 6], 1, BiFPN_Concat3, [1]],  # cat P4 <--- BiFPN change
   [-1, 3, C3, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, BiFPN_Concat2, [1]],  # cat head P5 <--- BiFPN change
   [-1, 3, C3, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
3. In yolo.py, find the `elif m is Concat:` branch of the parse_model function and extend it with the BiFPN_Concat classes:

# handle the BiFPN_Concat layers
elif m in [Concat, BiFPN_Concat2, BiFPN_Concat3]:
    c2 = sum(ch[x] for x in f)

4. Modify train.py to add the BiFPN weight parameters to the optimizer.
The approach is the same as for the Add case above:
# BiFPN_Concat fusion weights
elif isinstance(v, BiFPN_Concat2) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
    g1.append(v.w)
elif isinstance(v, BiFPN_Concat3) and hasattr(v, 'w') and isinstance(v.w, nn.Parameter):
    g1.append(v.w)

That's it, all done!
Summary
This concludes the walkthrough of replacing the PANet layers in YOLOv5 with BiFPN.