
An Analysis of Keras Loss Functions

Updated: 2020-07-06 16:38:03   Author: 姚賢賢
This article analyzes Keras loss functions. It should make a useful reference; I hope it helps. Let's take a look together.

Without further ado, let's go straight to the code.

'''
Created on 2018-4-16
'''
def compile(
    self,
    optimizer,               # the optimizer
    loss,                    # the loss: the name of a predefined loss function, or a loss function you wrote yourself
    metrics=None,            # list of metrics to evaluate during training and testing, e.g. ['accuracy']
    sample_weight_mode=None, # set to "temporal" to weight samples per timestep (a 2D weight matrix);
                             # the default None weights per sample (1D weights), used together with
                             # the sample_weight argument of fit()
    weighted_metrics=None,
    target_tensors=None,
    **kwargs                 # extra arguments passed through to the backend
)

What this actually invokes is def compile of class Model in Keras\engine\training.py.
Typical usage: model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
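
A hand-written function is interchangeable with a built-in name here. A minimal sketch (the tiny model below is made up purely for illustration):

from tensorflow import keras
import tensorflow.keras.backend as K

# A custom loss only needs the (y_true, y_pred) signature.
def my_mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

model = keras.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(20,))
])
model.compile(optimizer='sgd', loss=my_mse, metrics=['accuracy'])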

# All of keras's predefined loss functions live in:
# keras\losses.py
# Some losses have aliases:
# mse = MSE = mean_squared_error
# mae = MAE = mean_absolute_error
# mape = MAPE = mean_absolute_percentage_error
# msle = MSLE = mean_squared_logarithmic_error
# kld = KLD = kullback_leibler_divergence
# cosine = cosine_proximity
# Math helpers used below:
# mean: arithmetic mean
# sum: summation
# square: element-wise square
# abs: absolute value
# clip: clip values into a range (see https://blog.csdn.net/qq1483661204/article/details)
# epsilon: the fuzz factor, 1e-7
# log: natural logarithm (base e)
# maximum(x, y): element-wise maximum of x and y
# reduce_sum(x, axis): sum along the given axis
# l2_normalize: L2 normalization
# softplus: the softplus function
# 
# import cntk as C   (formulas 9 and 10 below are quoted from the CNTK backend)
# 1.mean_squared_error:
#  return K.mean(K.square(y_pred - y_true), axis=-1) 
# 2.mean_absolute_error:
#  return K.mean(K.abs(y_pred - y_true), axis=-1)
# 3.mean_absolute_percentage_error:
#  diff = K.abs((y_true - y_pred) / K.clip(K.abs(y_true),K.epsilon(),None))
#  return 100. * K.mean(diff, axis=-1)
# 4.mean_squared_logarithmic_error:
#  first_log = K.log(K.clip(y_pred, K.epsilon(), None) + 1.)
#  second_log = K.log(K.clip(y_true, K.epsilon(), None) + 1.)
#  return K.mean(K.square(first_log - second_log), axis=-1)
# 5.squared_hinge:
#  return K.mean(K.square(K.maximum(1. - y_true * y_pred, 0.)), axis=-1)
# 6.hinge (the SVM loss):
#  return K.mean(K.maximum(1. - y_true * y_pred, 0.), axis=-1)
# 7.categorical_hinge:
#  pos = K.sum(y_true * y_pred, axis=-1)
#  neg = K.max((1. - y_true) * y_pred, axis=-1)
#  return K.maximum(0., neg - pos + 1.)
# 8.logcosh:
#  def _logcosh(x):
#   return x + K.softplus(-2. * x) - K.log(2.)
#  return K.mean(_logcosh(y_pred - y_true), axis=-1)
# 9.categorical_crossentropy:
#  output /= C.reduce_sum(output, axis=-1)
#  output = C.clip(output, epsilon(), 1.0 - epsilon())
#  return -sum(target * C.log(output), axis=-1)
# 10.sparse_categorical_crossentropy:
#  target = C.one_hot(target, output.shape[-1])
#  target = C.reshape(target, output.shape)
#  return categorical_crossentropy(target, output, from_logits)
# 11.binary_crossentropy:
#  return K.mean(K.binary_crossentropy(y_true, y_pred), axis=-1)
# 12.kullback_leibler_divergence:
#  y_true = K.clip(y_true, K.epsilon(), 1)
#  y_pred = K.clip(y_pred, K.epsilon(), 1)
#  return K.sum(y_true * K.log(y_true / y_pred), axis=-1)
# 13.poisson:
#  return K.mean(y_pred - y_true * K.log(y_pred + K.epsilon()), axis=-1)
# 14.cosine_proximity:
#  y_true = K.l2_normalize(y_true, axis=-1)
#  y_pred = K.l2_normalize(y_pred, axis=-1)
#  return -K.sum(y_true * y_pred, axis=-1)
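
To check the listing against the library, you can evaluate one of the formulas by hand and compare it with the built-in. A quick sanity check, assuming TF 2.x eager execution (the toy tensors are made up):

import tensorflow as tf
from tensorflow import keras

y_true = tf.constant([[0., 1., 0.], [0., 0., 1.]])
y_pred = tf.constant([[0.1, 0.8, 0.1], [0.2, 0.3, 0.5]])

# Formula 1 (mean_squared_error), written out with raw ops:
manual = tf.reduce_mean(tf.square(y_pred - y_true), axis=-1)
builtin = keras.losses.mean_squared_error(y_true, y_pred)
print(manual.numpy())   # [0.02       0.12666667]
print(builtin.numpy())  # the same per-sample values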

Supplementary knowledge: Keras loss functions and metrics functions in one article

Loss functions

Definition:

keras.losses.mean_squared_error(y_true, y_pred)

Usage is simple: it computes the mean squared error. For example:

import tensorflow as tf
from tensorflow import keras

loss_fn = keras.losses.mean_squared_error
a1 = tf.constant([1, 1, 1, 1])
a2 = tf.constant([2, 2, 2, 2])
loss_fn(a1, a2)
<tf.Tensor: id=718367, shape=(), dtype=int32, numpy=1>
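
The int32 result above is a side effect of passing integer tensors; with float inputs you get the usual float32 loss. A small sketch:

b1 = tf.constant([1., 1., 1., 1.])
b2 = tf.constant([2., 2., 2., 2.])
loss_fn(b1, b2)  # dtype=float32, numpy=1.0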

Metrics functions

Metrics functions also measure error, but they do more than loss functions.

Definition

tf.keras.metrics.Mean(
  name='mean', dtype=None
)

The signature alone says little, so an example helps. First create an instance, then call it repeatedly:

mean_loss = tf.keras.metrics.Mean()
mean_loss([1, 3, 5, 7])
mean_loss([1, 3, 5, 7])
mean_loss([1, 1, 1, 1])
mean_loss([2, 2])

The final output is

<tf.Tensor: id=718929, shape=(), dtype=float32, numpy=2.857143>

This result is equivalent to

np.mean([1, 3, 5, 7, 1, 3, 5, 7, 1, 1, 1, 1, 2, 2])

這是因?yàn)镸etrics函數(shù)是狀態(tài)函數(shù),在神經(jīng)網(wǎng)絡(luò)訓(xùn)練過程中會(huì)持續(xù)不斷地更新狀態(tài),是有記憶的。因?yàn)镸etrics函數(shù)還帶有下面幾個(gè)Methods

reset_states()
Resets all of the metric state variables.
This function is called between epochs/steps, when a metric is evaluated during training.

result()
Computes and returns the metric value tensor.
Result computation is an idempotent operation that simply calculates the metric value using the state variables.

update_state(
  values, sample_weight=None
)
Accumulates statistics for computing the reduction metric.
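
A short sketch of that cycle, assuming TF 2.x eager mode:

import tensorflow as tf

mean_loss = tf.keras.metrics.Mean()
mean_loss.update_state([1, 3, 5, 7])  # what mean_loss([1, 3, 5, 7]) calls under the hood
print(mean_loss.result().numpy())     # 4.0
mean_loss.reset_states()              # clear the accumulated state, e.g. between epochs
print(mean_loss.result().numpy())     # 0.0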

Also note the calling conventions of loss functions and metrics functions:

loss_fn = keras.losses.mean_squared_error
mean_loss = keras.metrics.Mean()

mean_loss(1) is equivalent to keras.metrics.Mean()(1), not to keras.metrics.Mean(1); you can see this from the definition of keras.metrics.Mean above, whose constructor only takes name and dtype.

But you must first create an instance, mean_loss = keras.metrics.Mean(), and reuse it, rather than calling keras.metrics.Mean() itself directly each time.
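
The difference matters because of the statefulness described above: a reused instance remembers earlier calls, while a fresh instance does not. A quick illustration:

import tensorflow as tf

m = tf.keras.metrics.Mean()
m([1., 3.])                        # running mean: 2.0
m([5., 7.])                        # running mean: 4.0 (over all four values)

tf.keras.metrics.Mean()([5., 7.])  # 6.0: a fresh instance has no memory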

That is everything I have to share in this analysis of Keras loss functions. I hope it gives you a useful reference, and I hope you will keep supporting 腳本之家.
