A Detailed Look at Implementing Linear Regression with TensorFlow2

Updated: 2021-09-03 09:51:37   Author: 我是小白呀
This article gives a detailed walkthrough of implementing linear regression with TensorFlow2. The explanation is thorough and should serve as a useful reference for study or work; readers who need it are welcome to follow along.

Overview

Linear Regression uses regression analysis to determine the quantitative dependency between two or more variables.

If you are not yet very familiar with linear regression, you may want to read this article first:

python深度總結線性回歸 (an in-depth Python summary of linear regression)

MSE

Mean Square Error (MSE) is a way to measure the error between continuous predictions and targets. Formula:

MSE = (1/N) · Σᵢ (y_predictᵢ − y_realᵢ)²

y_predict: the predicted value

y_real: the ground-truth value
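
For example, a minimal sketch (NumPy-based, not part of the original article) that evaluates this formula on a few hand-made points:

import numpy as np

y_real = np.array([1.0, 2.0, 3.0])      # ground-truth values
y_predict = np.array([1.1, 1.9, 3.2])   # predicted values

MSE = np.mean((y_predict - y_real) ** 2)  # (1/N) * sum of squared differences
print(MSE)  # about 0.02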

Linear regression

Formula

y = w · x + b

w: weight, the weight coefficient

b: bias, the bias term

x: the input feature value

y: the predicted value

Gradient descent

Gradient Descent is an optimization algorithm: the parameters are moved in the direction opposite to the gradient so as to minimize the loss function.

Update formulas:

w' = w − lr · dloss/dw

w: weight, the current weight

w': the updated weight

lr: learning rate

dloss/dw: the derivative of the loss function with respect to w

b' = b − lr · dloss/db

b: bias, the current bias

b': the updated bias

lr: learning rate

dloss/db: the derivative of the loss function with respect to b
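
Both gradients come from differentiating the MSE loss with respect to w and b, and they are exactly the per-sample terms accumulated in the code below:

loss = (1/N) · Σᵢ (w·xᵢ + b − yᵢ)²

dloss/dw = (2/N) · Σᵢ xᵢ · (w·xᵢ + b − yᵢ)

dloss/db = (2/N) · Σᵢ (w·xᵢ + b − yᵢ)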

Implementing linear regression

Computing the MSE

def calculate_MSE(w, b, points):
    """
    Compute the MSE (Mean Square Error)
    :param w: weight
    :param b: bias
    :param points: the data
    :return: the MSE (Mean Square Error)
    """

    total_error = 0  # accumulated squared error, initialized to 0

    # Iterate over the data
    for i in range(len(points)):
        # Take out x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Accumulate the squared error
        total_error += (y - (w * x + b)) ** 2

    # Average the total error over the number of points
    MSE = total_error / len(points)

    # Return the MSE
    return MSE
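
To try the function on its own, here is a minimal sketch; the two-column DataFrame is a hypothetical stand-in for the article's data.csv, with x in the first column and y in the second:

import pandas as pd

points = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [2.0, 4.0, 6.0]})

print(calculate_MSE(w=2, b=0, points=points))  # 0.0, since y = 2x fits these points exactly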

Gradient descent step

def step_gradient(index, w_current, b_current, points, learning_rate=0.0001):
    """
    Perform one gradient-descent step and update the parameters
    :param index: current iteration number
    :param w_current: current weight
    :param b_current: current bias
    :param points: the data
    :param learning_rate: lr, the learning rate (default: 0.0001)
    :return: list with the updated parameters
    """

    b_gradient = 0  # gradient w.r.t. b, initialized to 0
    w_gradient = 0  # gradient w.r.t. w, initialized to 0
    N = len(points)  # number of data points

    # Iterate over the data
    for i in range(len(points)):
        # Take out x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Gradient w.r.t. w: dloss/dw = (2/N) * x * (wx + b - y)
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)

        # Gradient w.r.t. b: dloss/db = (2/N) * (wx + b - y)
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)

    # Update w and b: step against the gradient, scaled by the learning rate
    w_new = w_current - (learning_rate * w_gradient)
    b_new = b_current - (learning_rate * b_gradient)

    # Print a debug line every 10 iterations
    if index % 10 == 0:
        print("This is the {}th iterations w = {}, b = {}, error = {}"
              .format(index, w_new, b_new,
                      calculate_MSE(w_new, b_new, points)))

    # Return the updated weight and bias
    return [w_new, b_new]
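
Reusing the same hypothetical DataFrame from above, a single update step can be checked in isolation; with index=10 the debug line is printed as well:

points = pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [2.0, 4.0, 6.0]})
w1, b1 = step_gradient(10, 0.0, 0.0, points, learning_rate=0.01)
# w1 and b1 have each moved a small step from 0 toward the exact fit w = 2, b = 0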

Iterative training

def runner(w_start, b_start, points, learning_rate, num_iterations):
    """
    Iterative training
    :param w_start: initial weight
    :param b_start: initial bias
    :param points: the data
    :param learning_rate: learning rate
    :param num_iterations: number of iterations
    :return: the trained weight and bias
    """

    # w_end and b_end hold the parameters that will be returned
    w_end = w_start
    b_end = b_start

    # Update the parameters, one gradient step per iteration
    for i in range(1, num_iterations + 1):
        w_end, b_end = step_gradient(i, w_end, b_end, points, learning_rate)

    # Return the trained w and b
    return [w_end, b_end]

Main function

def run():
    """
    Main function
    :return: None
    """

    # Read the data
    data = pd.read_csv("data.csv")

    # Define the hyperparameters
    learning_rate = 0.00001  # learning rate
    w_initial = 0  # initial weight
    b_initial = 0  # initial bias
    num_iterations = 200  # number of iterations

    # Print the initial error
    print("Starting gradient descent at w = {}, b = {}, error = {}"
          .format(w_initial, b_initial, calculate_MSE(w_initial, b_initial, data)))
    print("Running...")

    # Get the trained values
    w_end, b_end = runner(w_initial, b_initial, data, learning_rate, num_iterations)

    # Print the error after training
    print("\nAfter {} iterations w = {}, b = {}, error = {}"
          .format(num_iterations, w_end, b_end, calculate_MSE(w_end, b_end, data)))
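
data.csv is not included here, so if you just want to run the code end to end, a sketch like the following writes a compatible two-column file (x first, y second) with synthetic points around y ≈ 1.5x. Note that the output shown further below was produced with the author's own data file, so your numbers will differ:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.uniform(20, 80, size=100)            # hypothetical feature values
y = 1.5 * x + rng.normal(0, 10, size=100)    # roughly linear targets with noise

pd.DataFrame({"x": x, "y": y}).to_csv("data.csv", index=False)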

Complete code

import pandas as pd
import tensorflow as tf


def run():
    """
    Main function
    :return: None
    """

    # Read the data
    data = pd.read_csv("data.csv")

    # Define the hyperparameters
    learning_rate = 0.00001  # learning rate
    w_initial = 0  # initial weight
    b_initial = 0  # initial bias
    num_iterations = 200  # number of iterations

    # Print the initial error
    print("Starting gradient descent at w = {}, b = {}, error = {}"
          .format(w_initial, b_initial, calculate_MSE(w_initial, b_initial, data)))
    print("Running...")

    # Get the trained values
    w_end, b_end = runner(w_initial, b_initial, data, learning_rate, num_iterations)

    # Print the error after training
    print("\nAfter {} iterations w = {}, b = {}, error = {}"
          .format(num_iterations, w_end, b_end, calculate_MSE(w_end, b_end, data)))


def calculate_MSE(w, b, points):
    """
    Compute the MSE (Mean Square Error)
    :param w: weight
    :param b: bias
    :param points: the data
    :return: the MSE (Mean Square Error)
    """

    total_error = 0  # accumulated squared error, initialized to 0

    # Iterate over the data
    for i in range(len(points)):
        # Take out x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Accumulate the squared error
        total_error += (y - (w * x + b)) ** 2

    # Average the total error over the number of points
    MSE = total_error / len(points)

    # Return the MSE
    return MSE


def step_gradient(index, w_current, b_current, points, learning_rate=0.0001):
    """
    Perform one gradient-descent step and update the parameters
    :param index: current iteration number
    :param w_current: current weight
    :param b_current: current bias
    :param points: the data
    :param learning_rate: lr, the learning rate (default: 0.0001)
    :return: list with the updated parameters
    """

    b_gradient = 0  # gradient w.r.t. b, initialized to 0
    w_gradient = 0  # gradient w.r.t. w, initialized to 0
    N = len(points)  # number of data points

    # Iterate over the data
    for i in range(len(points)):
        # Take out x and y
        x = points.iloc[i, 0]  # first column
        y = points.iloc[i, 1]  # second column

        # Gradient w.r.t. w: dloss/dw = (2/N) * x * (wx + b - y)
        w_gradient += (2 / N) * x * ((w_current * x + b_current) - y)

        # Gradient w.r.t. b: dloss/db = (2/N) * (wx + b - y)
        b_gradient += (2 / N) * ((w_current * x + b_current) - y)

    # Update w and b: step against the gradient, scaled by the learning rate
    w_new = w_current - (learning_rate * w_gradient)
    b_new = b_current - (learning_rate * b_gradient)

    # Print a debug line every 10 iterations
    if index % 10 == 0:
        print("This is the {}th iterations w = {}, b = {}, error = {}"
              .format(index, w_new, b_new,
                      calculate_MSE(w_new, b_new, points)))

    # Return the updated weight and bias
    return [w_new, b_new]


def runner(w_start, b_start, points, learning_rate, num_iterations):
    """
    Iterative training
    :param w_start: initial weight
    :param b_start: initial bias
    :param points: the data
    :param learning_rate: learning rate
    :param num_iterations: number of iterations
    :return: the trained weight and bias
    """

    # w_end and b_end hold the parameters that will be returned
    w_end = w_start
    b_end = b_start

    # Update the parameters, one gradient step per iteration
    for i in range(1, num_iterations + 1):
        w_end, b_end = step_gradient(i, w_end, b_end, points, learning_rate)

    # Return the trained w and b
    return [w_end, b_end]


if __name__ == "__main__":  # run only when executed directly
    # Execute the main function
    run()

Output:

Starting gradient descent at w = 0, b = 0, error = 5611.166153823905
Running...
This is the 10th iterations w = 0.5954939346814911, b = 0.011748797759247776, error = 2077.4540105037636
This is the 20th iterations w = 0.9515563561471605, b = 0.018802975867006404, error = 814.0851271130122
This is the 30th iterations w = 1.1644557718428263, b = 0.023050105300353223, error = 362.4068500146176
This is the 40th iterations w = 1.291753898278705, b = 0.02561881917471017, error = 200.92329896151622
This is the 50th iterations w = 1.3678685455519075, b = 0.027183959773995233, error = 143.18984477036037
This is the 60th iterations w = 1.4133791147591803, b = 0.02814903475888354, error = 122.54901023376003
This is the 70th iterations w = 1.4405906232245687, b = 0.028755312994862656, error = 115.16948797045545
This is the 80th iterations w = 1.4568605956220553, b = 0.029147056093611835, error = 112.53113537539161
This is the 90th iterations w = 1.4665883081088924, b = 0.029410522232548166, error = 111.58784050644537
This is the 100th iterations w = 1.4724042147529013, b = 0.029597287663210802, error = 111.25056079777497
This is the 110th iterations w = 1.475881139890538, b = 0.029738191313600983, error = 111.12994295811941
This is the 120th iterations w = 1.477959520545057, b = 0.02985167266801462, error = 111.08678583026905
This is the 130th iterations w = 1.479201671130221, b = 0.029948757225817496, error = 111.07132237076124
This is the 140th iterations w = 1.4799438156483897, b = 0.03003603745100295, error = 111.06575992136905
This is the 150th iterations w = 1.480386992125614, b = 0.030117455167888288, error = 111.06373727064113
This is the 160th iterations w = 1.4806514069946144, b = 0.030195367306897165, error = 111.0629801653088
This is the 170th iterations w = 1.4808089351476725, b = 0.030271183144693698, error = 111.06267551686379
This is the 180th iterations w = 1.4809025526554018, b = 0.030345745328433527, error = 111.0625326308038
This is the 190th iterations w = 1.4809579561496398, b = 0.030419557701150367, error = 111.0624475783524
This is the 200th iterations w = 1.480990510387525, b = 0.030492921525124016, error = 111.06238320300855
This is the 210th iterations w = 1.4810094024003952, b = 0.030566016933760057, error = 111.06232622062124
This is the 220th iterations w = 1.4810201253791957, b = 0.030638951634017437, error = 111.0622718818556
This is the 230th iterations w = 1.4810259638611891, b = 0.030711790026994222, error = 111.06221848873447
This is the 240th iterations w = 1.481028881765914, b = 0.030784570619965538, error = 111.06216543419914
This is the 250th iterations w = 1.4810300533774932, b = 0.030857316437543122, error = 111.06211250121454
This is the 260th iterations w = 1.4810301808342632, b = 0.03093004124680784, error = 111.06205961218657
This is the 270th iterations w = 1.4810296839649824, b = 0.031002753279495907, error = 111.06200673937376
This is the 280th iterations w = 1.4810288137973704, b = 0.031075457457601333, error = 111.06195387285815
This is the 290th iterations w = 1.48102772042814, b = 0.031148156724127858, error = 111.06190100909376
This is the 300th iterations w = 1.4810264936044433, b = 0.03122085283878386, error = 111.06184814681296
This is the 310th iterations w = 1.4810251869886903, b = 0.0312935468537513, error = 111.06179528556238
This is the 320th iterations w = 1.4810238326671836, b = 0.031366239398161695, error = 111.0617424251801
This is the 330th iterations w = 1.4810224498252484, b = 0.031438930848192506, error = 111.06168956560795
This is the 340th iterations w = 1.481021049934344, b = 0.03151162142877266, error = 111.06163670682551
This is the 350th iterations w = 1.4810196398535866, b = 0.03158431127439525, error = 111.06158384882504
This is the 360th iterations w = 1.4810182236842395, b = 0.03165700046547913, error = 111.0615309916041
This is the 370th iterations w = 1.4810168038785667, b = 0.031729689050110664, error = 111.06147813516172
This is the 380th iterations w = 1.4810153819028469, b = 0.03180237705704362, error = 111.06142527949757
This is the 390th iterations w = 1.48101395863381, b = 0.03187506450347233, error = 111.06137242461139
This is the 400th iterations w = 1.48101253459568, b = 0.03194775139967933, error = 111.06131957050317
This is the 410th iterations w = 1.4810111101019028, b = 0.03202043775181446, error = 111.06126671717288
This is the 420th iterations w = 1.4810096853398989, b = 0.032093123563556446, error = 111.06121386462064
This is the 430th iterations w = 1.4810082604217312, b = 0.032165808837106485, error = 111.06116101284626
This is the 440th iterations w = 1.481006835414406, b = 0.03223849357378233, error = 111.06110816184975
This is the 450th iterations w = 1.4810054103579875, b = 0.03231117777437349, error = 111.06105531163115
This is the 460th iterations w = 1.4810039852764323, b = 0.0323838614393536, error = 111.06100246219052
This is the 470th iterations w = 1.4810025601840635, b = 0.032456544569007456, error = 111.0609496135277
This is the 480th iterations w = 1.4810011350894463, b = 0.03252922716350693, error = 111.06089676564281
This is the 490th iterations w = 1.4809997099977015, b = 0.032601909222956374, error = 111.06084391853577
This is the 500th iterations w = 1.4809982849118903, b = 0.032674590747419754, error = 111.0607910722065

After 500 iterations w = 1.4809982849118903, b = 0.032674590747419754, error = 111.0607910722065
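
Note that the complete code above imports tensorflow but performs the whole fit with plain Python loops. As a hedged sketch (not the article's code), the same update rule can also be written with TensorFlow 2's GradientTape, which computes dloss/dw and dloss/db automatically:

import pandas as pd
import tensorflow as tf

data = pd.read_csv("data.csv")
x = tf.constant(data.iloc[:, 0].values, dtype=tf.float32)  # first column: features
y = tf.constant(data.iloc[:, 1].values, dtype=tf.float32)  # second column: targets

w = tf.Variable(0.0)  # weight
b = tf.Variable(0.0)  # bias
learning_rate = 0.00001

for step in range(1, 201):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x + b - y))  # MSE
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(learning_rate * dw)  # w' = w - lr * dloss/dw
    b.assign_sub(learning_rate * db)  # b' = b - lr * dloss/db

print(w.numpy(), b.numpy(), loss.numpy())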

That concludes this detailed look at implementing linear regression with TensorFlow2. For more on TensorFlow2 and linear regression, please search 腳本之家's earlier articles or browse its other related articles, and we hope you will continue to support 腳本之家!
