
TensorFlow Study Notes (2): Multiple Linear Regression


Abstract: This article uses tensorflow to train a multiple linear regression model and compares it with scikit-learn. In this example, one variable is the living area and the other is the number of rooms, so their magnitudes differ greatly; without normalization, area dominates the objective function and the gradient, making convergence extremely slow.

Preface

This article uses tensorflow to train a multiple linear regression model and compares it with scikit-learn. The dataset comes from Andrew Ng's online open course Deep Learning.

Code
#!/usr/bin/env python
# -*- coding=utf-8 -*-
# @author: 陳水平
# @date: 2016-12-30
# @description: compare multiple linear regression of tensorflow to scikit-learn, based on data from the Deep Learning course of Andrew Ng
# @ref: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=DeepLearning&doc=exercises/ex3/ex3.html
#

import numpy as np
import tensorflow as tf
from sklearn import linear_model
from sklearn import preprocessing

# Read x and y
x_data = np.loadtxt("ex3x.dat").astype(np.float32)
y_data = np.loadtxt("ex3y.dat").astype(np.float32)


# Fit x and y with sklearn first, to get a reference for the coefficients.
reg = linear_model.LinearRegression()
reg.fit(x_data, y_data)
print "Coefficients of sklearn: K=%s, b=%f" % (reg.coef_, reg.intercept_)


# Now we use tensorflow to get similar results.

# Before feeding x_data into tensorflow, we need to standardize it
# in order to achieve better performance in gradient descent;
# if not standardized, convergence would be intolerably slow.
# Reason: if a feature has a variance that is orders of magnitude larger
# than the others, it can dominate the objective function and prevent the
# estimator from learning from the other features correctly.
scaler = preprocessing.StandardScaler().fit(x_data)
print(scaler.mean_, scaler.scale_)
x_data_standard = scaler.transform(x_data)


# Linear model on the standardized inputs: y = x * W + b,
# with weights W and bias b both initialized to zero.
W = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x_data_standard, W) + b

# Halved mean squared error, matching the course's cost function.
loss = tf.reduce_mean(tf.square(y - y_data.reshape(-1, 1))) / 2
optimizer = tf.train.GradientDescentOptimizer(0.3)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()


sess = tf.Session()
sess.run(init)
# Run 100 steps of gradient descent, logging every 10th step.
for step in range(100):
    sess.run(train)
    if step % 10 == 0:
        print(step, sess.run(W).flatten(), sess.run(b).flatten())

print "Coefficients of tensorflow (input should be standardized): K=%s, b=%s" % (sess.run(W).flatten(), sess.run(b).flatten())
print "Coefficients of tensorflow (raw input): K=%s, b=%s" % (sess.run(W).flatten() / scaler.scale_, sess.run(b).flatten() - np.dot(scaler.mean_ / scaler.scale_, sess.run(W)))

The output is as follows:

Coefficients of sklearn: K=[  139.21066284 -8738.02148438], b=89597.927966
[ 2000.6809082      3.17021275] [  7.86202576e+02   7.52842903e-01]
0 [ 31729.23632812  16412.6484375 ] [ 102123.7890625]
10 [ 97174.78125      5595.25585938] [ 333681.59375]
20 [ 106480.5703125    -3611.31201172] [ 340222.53125]
30 [ 108727.5390625    -5858.10302734] [ 340407.28125]
40 [ 109272.953125     -6403.52148438] [ 340412.5]
50 [ 109405.3515625    -6535.91503906] [ 340412.625]
60 [ 109437.4921875    -6568.05371094] [ 340412.625]
70 [ 109445.296875     -6575.85644531] [ 340412.625]
80 [ 109447.1875       -6577.75097656] [ 340412.625]
90 [ 109447.640625     -6578.20654297] [ 340412.625]
Coefficients of tensorflow (input should be standardized): K=[ 109447.7421875    -6578.31152344], b=[ 340412.625]
Coefficients of tensorflow (raw input): K=[  139.21061707 -8737.9609375 ], b=[ 89597.78125]
Thoughts

For gradient descent, whether the variables are standardized matters a great deal. In this example, one variable is the living area and the other is the number of rooms, so their magnitudes differ greatly; without normalization, area dominates the objective function and the gradient, and convergence becomes extremely slow.
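To make the scale mismatch concrete, compare the per-feature standard deviations; this is a minimal sketch reusing x_data and x_data_standard from the script above (the scaler printed roughly [786.2, 0.75] for this dataset):

# Per-feature spread: area varies roughly a thousand times more than room count.
print(x_data.std(axis=0))  # approximately [786.2, 0.75]
# Manual standardization, equivalent to StandardScaler (biased std, ddof=0):
x_manual = (x_data - x_data.mean(axis=0)) / x_data.std(axis=0)
assert np.allclose(x_manual, x_data_standard, atol=1e-4)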
