
Keras: Implementing multi-class semantic segmentation with a U-Net

Updated: 2020-06-11 10:39:44   Author: Robin Long 2018
This article uses the U-Net architecture to implement multi-class semantic segmentation and shows some of the test results. I hope you find it useful!

1 Introduction

U-Net was originally designed for semantic segmentation of medical images, and it has since been applied to other domains as well. In most cases, though, it is used for binary segmentation, i.e., splitting the original image into two gray levels or colors in order to locate the target of interest in the image.

This article instead uses the U-Net architecture for multi-class semantic segmentation and demonstrates it on some test images.
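
A note before the code (my addition, not from the original article): the script below regresses the color-coded label masks directly with a 3-channel output, which is the author's simpler shortcut. The more common multi-class setup converts each color-coded VOC mask into per-class one-hot channels and trains a softmax output. A minimal sketch of that conversion, with a hypothetical three-color palette:

import numpy as np

# Hypothetical palette: maps a label color (BGR, as read by cv2) to a class index.
PALETTE = {(0, 0, 0): 0, (0, 0, 128): 1, (0, 128, 0): 2}

def mask_to_onehot(mask, palette=PALETTE):
    # mask: (H, W, 3) color-coded label image -> (H, W, n_classes) one-hot array
    h, w, _ = mask.shape
    onehot = np.zeros((h, w, len(palette)), dtype=np.float32)
    for color, idx in palette.items():
        onehot[..., idx] = np.all(mask == color, axis=-1)
    return onehot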

2 Source Code

(1) Training the model

from __future__ import print_function
import os
import datetime
import numpy as np
import cv2
from keras.models import Model
from keras.layers import (Input, concatenate, Conv2D, AveragePooling2D,
                          Dropout, BatchNormalization, UpSampling2D)
from keras.optimizers import Adam
 
PIXEL = 512  # set your image size
BATCH_SIZE = 5
lr = 0.001
EPOCH = 100
X_CHANNEL = 3  # training image channels
Y_CHANNEL = 3  # label image channels (color-coded masks, matching the 3-channel model output)
X_NUM = 422  # number of training samples

pathX = 'I:\\Pascal VOC Dataset\\train1\\images\\'  # change to your file path
pathY = 'I:\\Pascal VOC Dataset\\train1\\SegmentationObject\\'  # change to your file path
 
# data generator: yields random batches of (image, label) pairs
def generator(pathX, pathY, BATCH_SIZE):
    # sort both listings so image i pairs with label i
    X_train_files = sorted(os.listdir(pathX))
    Y_train_files = sorted(os.listdir(pathY))
    while True:
        a = np.arange(X_NUM)  # sample over all X_NUM files (indices 0..X_NUM-1)
        X = []
        Y = []
        for i in range(BATCH_SIZE):
            index = np.random.choice(a)
            img = cv2.imread(pathX + X_train_files[index], 1)
            img = np.array(img).reshape(PIXEL, PIXEL, X_CHANNEL)
            X.append(img)
            img1 = cv2.imread(pathY + Y_train_files[index], 1)  # color-coded label mask
            img1 = np.array(img1).reshape(PIXEL, PIXEL, Y_CHANNEL)
            Y.append(img1)

        X = np.array(X)
        Y = np.array(Y)
        yield X, Y
 
# create the U-Net network (encoder); size comments assume PIXEL = 512
inputs = Input((PIXEL, PIXEL, 3))
conv1 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
pool1 = AveragePooling2D(pool_size=(2, 2))(conv1)  # 256

conv2 = BatchNormalization(momentum=0.99)(pool1)
conv2 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = BatchNormalization(momentum=0.99)(conv2)
conv2 = Conv2D(64, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = Dropout(0.02)(conv2)
pool2 = AveragePooling2D(pool_size=(2, 2))(conv2)  # 128

conv3 = BatchNormalization(momentum=0.99)(pool2)
conv3 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = BatchNormalization(momentum=0.99)(conv3)
conv3 = Conv2D(128, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = Dropout(0.02)(conv3)
pool3 = AveragePooling2D(pool_size=(2, 2))(conv3)  # 64

conv4 = BatchNormalization(momentum=0.99)(pool3)
conv4 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = BatchNormalization(momentum=0.99)(conv4)
conv4 = Conv2D(256, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = Dropout(0.02)(conv4)
pool4 = AveragePooling2D(pool_size=(2, 2))(conv4)  # 32

conv5 = BatchNormalization(momentum=0.99)(pool4)
conv5 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = BatchNormalization(momentum=0.99)(conv5)
conv5 = Conv2D(512, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = Dropout(0.02)(conv5)

# NOTE: in the original code pool4 is reassigned here, so the conv4/conv5 branch
# above never reaches the decoder; the skip connections use pool2/pool3/pool4 only.
pool4 = AveragePooling2D(pool_size=(2, 2))(pool3)  # 32
pool5 = AveragePooling2D(pool_size=(2, 2))(pool4)  # 16
 
conv6 = BatchNormalization(momentum=0.99)(pool5)
conv6 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)

conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
up7 = UpSampling2D(size=(2, 2))(conv7)  # 32
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
merge7 = concatenate([pool4, conv7], axis=3)

conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
up8 = UpSampling2D(size=(2, 2))(conv8)  # 64
conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
merge8 = concatenate([pool3, conv8], axis=3)

conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
up9 = UpSampling2D(size=(2, 2))(conv9)  # 128
conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
merge9 = concatenate([pool2, conv9], axis=3)

conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
up10 = UpSampling2D(size=(2, 2))(conv10)  # 256
conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up10)

conv11 = Conv2D(16, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv10)
up11 = UpSampling2D(size=(2, 2))(conv11)  # 512
conv11 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up11)
 
# 1x1 convolution down to the 3 output channels (color-coded mask)
conv12 = Conv2D(3, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv11)
 
model = Model(inputs=inputs, outputs=conv12)  # Keras 2 argument names
print(model.summary())
# note: MSE regresses the color-coded mask directly; a softmax output with
# categorical_crossentropy is the more common choice for multi-class labels
model.compile(optimizer=Adam(lr=lr), loss='mse', metrics=['accuracy'])

history = model.fit_generator(generator(pathX, pathY, BATCH_SIZE),
                              steps_per_epoch=600, epochs=EPOCH)
end_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
 
# save the trained model
model.save(r'V1_828.h5')

# save the training loss history
mse = np.array(history.history['loss'])
np.save(r'V1_828.npy', mse)
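
Before moving on to testing, a quick sanity check of the generator can save a failed training run (my addition; it assumes pathX and pathY exist and contain 512x512 images):

gen = generator(pathX, pathY, BATCH_SIZE)
X_batch, Y_batch = next(gen)
print(X_batch.shape, Y_batch.shape)  # expected: (5, 512, 512, 3) (5, 512, 512, 3)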

(2) Testing the model

from keras.models import load_model
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
 
model = load_model('V1_828.h5')
test_images_path = 'I:\\Pascal VOC Dataset\\test\\test_images\\'
test_gt_path = 'I:\\Pascal VOC Dataset\\test\\SegmentationObject\\'
pre_path = 'I:\\Pascal VOC Dataset\\test\\pre\\'
 
X = []
for info in sorted(os.listdir(test_images_path)):  # sorted so Y[i] pairs with file i
    A = cv2.imread(test_images_path + info)
    X.append(A)
X = np.array(X)
print(X.shape)
Y = model.predict(X)
 
groundtruth = []
for info in sorted(os.listdir(test_gt_path)):
    A = cv2.imread(test_gt_path + info)
    groundtruth.append(A)
groundtruth = np.array(groundtruth)
 
i = 0
for info in sorted(os.listdir(test_images_path)):
    cv2.imwrite(pre_path + info, Y[i])
    i += 1
 
# pick a random sample from the first ten and save prediction / ground truth
n = np.random.choice(range(10))
cv2.imwrite('prediction.png', Y[n])
cv2.imwrite('groundtruth.png', groundtruth[n])
fig, axs = plt.subplots(1, 3)
axs[0].imshow(np.abs(X[n]))  # original image
axs[0].axis('off')
axs[1].imshow(np.abs(Y[n]))  # prediction (float values may need scaling to [0, 1] for display)
axs[1].axis('off')
axs[2].imshow(np.abs(groundtruth[n]))  # ground-truth annotation
axs[2].axis('off')
fig.savefig("imagestest.png")
plt.close()
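
To put a number on the visual comparison, a per-image pixel accuracy can be computed between the prediction and the ground truth. This is a minimal sketch (my addition; pixel_accuracy is a hypothetical helper, not part of the original script):

def pixel_accuracy(pred, gt):
    # round the float prediction back to uint8 before comparing color codes
    pred = np.clip(pred, 0, 255).astype(np.uint8)
    return np.mean(np.all(pred == gt, axis=-1))

print('pixel accuracy:', pixel_accuracy(Y[n], groundtruth[n]))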

3 Results

說(shuō)明:從左到右依次是預(yù)測(cè)圖像,真實(shí)圖像,標(biāo)注圖像??梢钥闯觯瑢?duì)于部分?jǐn)?shù)據(jù)的分割效果還有待改進(jìn),主要原因還是數(shù)據(jù)集相對(duì)復(fù)雜,模型難于找到其中的規(guī)律。

That's all for this article on implementing multi-class semantic segmentation with a U-Net in Keras. I hope it gives you a useful reference, and thanks for supporting 脚本之家.

