【Machine Learning Quick Start Tutorial 6】Neural Networks with Keras

Chapter 6: Neural Networks with Keras

Introduction to Keras

In this section we introduce Keras, a high-level machine learning library. Keras is an open-source framework built on top of TensorFlow; it implements many machine learning methods, especially for deep neural networks, with well-optimized computation. For the purposes of this quick-start series we will not dig into the internal details of the Keras framework. TensorFlow is a fairly low-level library, so Keras wraps it and provides a set of convenient functions for the reusable pieces of a machine learning workflow, which greatly reduces the amount of code we have to write to build a neural network. Just as importantly, because the heavy computation (including backpropagation) is delegated to the TensorFlow backend, our networks can be trained on a GPU without any extra effort on our part.
To get acquainted with Keras, we will redo the example from the previous chapter, fitting a neural network to the Iris dataset, but this time implemented in Keras. First, import the relevant libraries:

import os
os.environ['KERAS_BACKEND'] = 'tensorflow'

import matplotlib.pyplot as plt
import numpy as np
import random

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout

Load the Iris dataset:

from sklearn.datasets import load_iris

iris = load_iris()
# use the first three columns as inputs and the fourth column (petal width) as the target
data, labels = iris.data[:,0:3], iris.data[:,3]

In the previous chapter we manually configured and trained a neural network to predict the petal width of an iris flower. This time we will do it with the Keras library (with the number of input variables increased to 3). First we need to shuffle the data (to remove any ordering that may exist in the original dataset) and preprocess it; for this dataset, preprocessing mainly means normalizing the values and converting them to the right format.

num_samples = len(labels)  # size of our dataset
shuffle_order = np.random.permutation(num_samples)
data = data[shuffle_order, :]
labels = labels[shuffle_order]

# normalize data and labels to between 0 and 1 and make sure it's float32
data = data / np.amax(data, axis=0)
data = data.astype('float32')
labels = labels / np.amax(labels, axis=0)
labels = labels.astype('float32')

# print out the data
print("shape of X", data.shape)
print("first 5 rows of X\n", data[0:5, :])
print("first 5 labels\n", labels[0:5])
shape of X (150, 3)
first 5 rows of X
 [[0.79746836 0.77272725 0.8115942 ]
 [0.6962025  0.5681818  0.5797101 ]
 [0.96202534 0.6818182  0.95652175]
 [0.8101266  0.70454544 0.79710144]
 [0.5949367  0.72727275 0.23188406]]
first 5 labels
 [0.96 0.52 0.84 0.72 0.08]

Overfitting and testing

In the previous chapters we always evaluated the network's performance on the training set itself. That is not a sound way to do it, because the network can overfit the training set (effectively memorizing the samples) and achieve a deceptively high score, while generalizing poorly to samples it has never seen.
In machine learning this is called "overfitting", and there are several ways to guard against it. The first is to split the dataset into a "training set", on which we run gradient descent to train the network, and a "test set", on which we do the final evaluation to get an honest estimate of how the network performs on unseen samples.
Let's split the data, using the first 30% as the test set and the rest as the training set:

# let's rename the data and labels to X, y
X, y = data, labels

test_split = 0.3  # percent split

n_test = int(test_split * num_samples)

x_train, x_test = X[n_test:, :], X[:n_test, :] 
y_train, y_test = y[n_test:], y[:n_test] 

print('%d training samples, %d test samples' % (x_train.shape[0], x_test.shape[0]))
105 training samples, 45 test samples

In Keras, a neural network model is instantiated with the Sequential class, which represents a simple model in which data propagates straight from the input layer to the output layer.

model = Sequential()

We now have an empty neural network model, model. Let's add the first layer to it, which will serve as our input layer. We can create it with Keras's Dense class.
A Dense layer is a fully connected layer: every neuron in it is connected to every neuron in the previous layer, hence the name "Dense". This distinction may seem puzzling right now, since we have not yet seen a layer that is not fully connected, but don't worry! We will get to those when we cover convolutional neural networks in a later chapter.
To create a Dense layer we specify two things: the number of neurons and the activation function. For the input layer we must also specify the dimensionality of the input data.

model.add(Dense(8, activation='sigmoid', input_dim=3))

We can print the current state of the network with the model.summary() method:

model.summary()

The output shows that our network has just one layer so far, with 32 parameters in total: the 3 input neurons times the 8 neurons of the hidden layer (3x8=24), plus 8 bias terms (24+8=32).
Next we add the output layer, which is also a fully connected layer but with only one neuron, holding our final output. This time the activation function is not a sigmoid but the "linear" activation:

model.add(Dense(1, activation='linear'))
model.summary()

So we have added 9 more parameters: the 8x1 weights from the hidden layer to the output layer, plus 1 bias for the output neuron, for a total of 41 parameters.
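As a quick sanity check (a small illustrative sketch, not part of the original walkthrough), we can recompute these counts by hand and compare with the total that Keras reports:

# recompute the parameter counts described above
hidden_params = 3 * 8 + 8    # weights from 3 inputs to 8 hidden neurons, plus 8 biases = 32
output_params = 8 * 1 + 1    # weights from 8 hidden neurons to 1 output, plus 1 bias = 9
print(hidden_params + output_params)   # 41

# Keras can report the same total directly
print(model.count_params())            # 41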
We have now laid out the basic structure of the model. Next we need to specify a loss function and an optimizer, and then compile the model.
First, the loss function. The standard loss functions for a regression problem are the sum of squared errors (SSE) and the mean squared error (MSE). SSE and MSE are essentially the same thing; they differ only by a scaling factor (the number of samples). Keras favors MSE, so that is what we will use.
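To make that relationship concrete, here is a tiny numpy check (an illustrative sketch, not part of the original code) showing that SSE is just MSE scaled by the number of samples:

# toy targets and predictions, just to compare the two loss definitions
y_true = np.array([0.2, 0.5, 0.9])
y_hat = np.array([0.1, 0.6, 0.7])

sse = np.sum((y_true - y_hat) ** 2)    # sum of squared errors
mse = np.mean((y_true - y_hat) ** 2)   # mean squared error
print("SSE = %.4f, MSE = %.4f" % (sse, mse))
print(np.isclose(sse, len(y_true) * mse))   # True: SSE = n * MSE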
The optimizer is the flavor of gradient descent we choose. The most basic optimizer is "stochastic gradient descent" (SGD). What we have used most so far is batch gradient descent, which computes the gradient over the entire dataset (these distinctions will become clearer as you dig deeper into machine learning algorithms). Computing the gradient on a subset of the training set at a time is called mini-batch gradient descent.
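The loop below is a minimal sketch of the idea (an illustration, not what Keras does internally): mini-batch gradient descent walks through the training set in chunks of batch_size samples, and each chunk produces one gradient update.

batch_size = 4
n_updates = 0

# one epoch of mini-batch iteration over the training set
for start in range(0, len(x_train), batch_size):
    x_batch = x_train[start:start + batch_size]
    y_batch = y_train[start:start + batch_size]
    # here the optimizer would compute the gradient on (x_batch, y_batch)
    # and take one SGD step; model.fit does this for us automatically
    n_updates += 1

print("%d gradient updates per epoch" % n_updates)   # ceil(105 / 4) = 27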
Once the loss function and optimizer are specified, the model can be compiled. Compiling means that Keras allocates memory for the computational graph of the model we have built, optimizing it for the computations that follow.

model.compile(loss='mean_squared_error', optimizer='sgd')

Finally, we are ready to train the model. Calling fit starts the training process. fit takes several important arguments: first the training data (x_train and y_train) and the validation data (x_test and y_test). We must also specify batch_size, the number of samples drawn from the training set at a time to compute each gradient update (using SGD), and epochs, the number of passes to make over the training data. Generally speaking, more epochs gives the network more opportunity to learn.
Because our dataset is small (105 training samples), we should use a small batch_size and a fairly large number of epochs:

history = model.fit(x_train, y_train,
                    batch_size=4,
                    epochs=200,
                    verbose=1,
                    validation_data=(x_test, y_test))
Train on 105 samples, validate on 45 samples
Epoch 1/200
105/105 [==============================] - 0s 2ms/step - loss: 0.1224 - val_loss: 0.1100
Epoch 2/200
105/105 [==============================] - 0s 310us/step - loss: 0.0996 - val_loss: 0.1084
Epoch 3/200
105/105 [==============================] - 0s 406us/step - loss: 0.0991 - val_loss: 0.1059
Epoch 4/200
105/105 [==============================] - 0s 307us/step - loss: 0.0966 - val_loss: 0.1050
Epoch 5/200
105/105 [==============================] - 0s 305us/step - loss: 0.0956 - val_loss: 0.1038
Epoch 6/200
105/105 [==============================] - 0s 323us/step - loss: 0.0945 - val_loss: 0.1023
Epoch 7/200
105/105 [==============================] - 0s 307us/step - loss: 0.0938 - val_loss: 0.1010
Epoch 8/200
105/105 [==============================] - 0s 314us/step - loss: 0.0922 - val_loss: 0.1000
Epoch 9/200
105/105 [==============================] - 0s 324us/step - loss: 0.0908 - val_loss: 0.0990
Epoch 10/200
105/105 [==============================] - 0s 310us/step - loss: 0.0900 - val_loss: 0.0975
Epoch 11/200
105/105 [==============================] - 0s 314us/step - loss: 0.0888 - val_loss: 0.0966
Epoch 12/200
105/105 [==============================] - 0s 416us/step - loss: 0.0880 - val_loss: 0.0957
Epoch 13/200
105/105 [==============================] - 0s 416us/step - loss: 0.0869 - val_loss: 0.0942
Epoch 14/200
105/105 [==============================] - 0s 351us/step - loss: 0.0857 - val_loss: 0.0930
Epoch 15/200
105/105 [==============================] - 0s 327us/step - loss: 0.0850 - val_loss: 0.0919
Epoch 16/200
105/105 [==============================] - 0s 327us/step - loss: 0.0841 - val_loss: 0.0916
Epoch 17/200
105/105 [==============================] - 0s 336us/step - loss: 0.0832 - val_loss: 0.0898
Epoch 18/200
105/105 [==============================] - 0s 335us/step - loss: 0.0818 - val_loss: 0.0891
Epoch 19/200
105/105 [==============================] - 0s 324us/step - loss: 0.0813 - val_loss: 0.0876
Epoch 20/200
105/105 [==============================] - 0s 332us/step - loss: 0.0797 - val_loss: 0.0874
Epoch 21/200
105/105 [==============================] - 0s 334us/step - loss: 0.0796 - val_loss: 0.0863
Epoch 22/200
105/105 [==============================] - 0s 364us/step - loss: 0.0783 - val_loss: 0.0854
Epoch 23/200
105/105 [==============================] - 0s 339us/step - loss: 0.0776 - val_loss: 0.0835
Epoch 24/200
105/105 [==============================] - 0s 360us/step - loss: 0.0761 - val_loss: 0.0825
Epoch 25/200
105/105 [==============================] - 0s 359us/step - loss: 0.0753 - val_loss: 0.0816
Epoch 26/200
105/105 [==============================] - 0s 340us/step - loss: 0.0741 - val_loss: 0.0810
Epoch 27/200
105/105 [==============================] - 0s 322us/step - loss: 0.0734 - val_loss: 0.0796
Epoch 28/200
105/105 [==============================] - 0s 364us/step - loss: 0.0725 - val_loss: 0.0787
Epoch 29/200
105/105 [==============================] - 0s 330us/step - loss: 0.0715 - val_loss: 0.0778
Epoch 30/200
105/105 [==============================] - 0s 339us/step - loss: 0.0712 - val_loss: 0.0768
Epoch 31/200
105/105 [==============================] - 0s 355us/step - loss: 0.0698 - val_loss: 0.0759
Epoch 32/200
105/105 [==============================] - 0s 333us/step - loss: 0.0693 - val_loss: 0.0752
Epoch 33/200
105/105 [==============================] - 0s 341us/step - loss: 0.0683 - val_loss: 0.0743
Epoch 34/200
105/105 [==============================] - 0s 349us/step - loss: 0.0674 - val_loss: 0.0731
Epoch 35/200
105/105 [==============================] - 0s 334us/step - loss: 0.0665 - val_loss: 0.0722
Epoch 36/200
105/105 [==============================] - 0s 350us/step - loss: 0.0655 - val_loss: 0.0714
Epoch 37/200
105/105 [==============================] - 0s 339us/step - loss: 0.0650 - val_loss: 0.0712
Epoch 38/200
105/105 [==============================] - 0s 362us/step - loss: 0.0641 - val_loss: 0.0698
Epoch 39/200
105/105 [==============================] - 0s 381us/step - loss: 0.0631 - val_loss: 0.0688
Epoch 40/200
105/105 [==============================] - 0s 414us/step - loss: 0.0627 - val_loss: 0.0679
Epoch 41/200
105/105 [==============================] - 0s 332us/step - loss: 0.0616 - val_loss: 0.0671
Epoch 42/200
105/105 [==============================] - 0s 336us/step - loss: 0.0611 - val_loss: 0.0665
Epoch 43/200
105/105 [==============================] - 0s 350us/step - loss: 0.0601 - val_loss: 0.0654
Epoch 44/200
105/105 [==============================] - 0s 397us/step - loss: 0.0596 - val_loss: 0.0646
Epoch 45/200
105/105 [==============================] - 0s 404us/step - loss: 0.0586 - val_loss: 0.0638
Epoch 46/200
105/105 [==============================] - 0s 375us/step - loss: 0.0582 - val_loss: 0.0635
Epoch 47/200
105/105 [==============================] - 0s 348us/step - loss: 0.0577 - val_loss: 0.0621
Epoch 48/200
105/105 [==============================] - 0s 333us/step - loss: 0.0568 - val_loss: 0.0614
Epoch 49/200
105/105 [==============================] - 0s 347us/step - loss: 0.0556 - val_loss: 0.0610
Epoch 50/200
105/105 [==============================] - 0s 337us/step - loss: 0.0547 - val_loss: 0.0603
Epoch 51/200
105/105 [==============================] - 0s 373us/step - loss: 0.0545 - val_loss: 0.0592
Epoch 52/200
105/105 [==============================] - 0s 350us/step - loss: 0.0536 - val_loss: 0.0581
Epoch 53/200
105/105 [==============================] - 0s 338us/step - loss: 0.0529 - val_loss: 0.0574
Epoch 54/200
105/105 [==============================] - 0s 345us/step - loss: 0.0520 - val_loss: 0.0574
Epoch 55/200
105/105 [==============================] - 0s 334us/step - loss: 0.0518 - val_loss: 0.0560
Epoch 56/200
105/105 [==============================] - 0s 330us/step - loss: 0.0508 - val_loss: 0.0551
Epoch 57/200
105/105 [==============================] - 0s 340us/step - loss: 0.0499 - val_loss: 0.0547
Epoch 58/200
105/105 [==============================] - 0s 351us/step - loss: 0.0495 - val_loss: 0.0538
Epoch 59/200
105/105 [==============================] - 0s 341us/step - loss: 0.0487 - val_loss: 0.0529
Epoch 60/200
105/105 [==============================] - 0s 335us/step - loss: 0.0475 - val_loss: 0.0527
Epoch 61/200
105/105 [==============================] - 0s 346us/step - loss: 0.0475 - val_loss: 0.0518
Epoch 62/200
105/105 [==============================] - 0s 317us/step - loss: 0.0467 - val_loss: 0.0508
Epoch 63/200
105/105 [==============================] - 0s 323us/step - loss: 0.0460 - val_loss: 0.0509
Epoch 64/200
105/105 [==============================] - 0s 312us/step - loss: 0.0458 - val_loss: 0.0494
Epoch 65/200
105/105 [==============================] - 0s 316us/step - loss: 0.0447 - val_loss: 0.0487
Epoch 66/200
105/105 [==============================] - 0s 310us/step - loss: 0.0442 - val_loss: 0.0482
Epoch 67/200
105/105 [==============================] - 0s 339us/step - loss: 0.0435 - val_loss: 0.0478
Epoch 68/200
105/105 [==============================] - 0s 376us/step - loss: 0.0435 - val_loss: 0.0468
Epoch 69/200
105/105 [==============================] - 0s 329us/step - loss: 0.0424 - val_loss: 0.0462
Epoch 70/200
105/105 [==============================] - 0s 321us/step - loss: 0.0417 - val_loss: 0.0454
Epoch 71/200
105/105 [==============================] - 0s 413us/step - loss: 0.0410 - val_loss: 0.0451
Epoch 72/200
105/105 [==============================] - 0s 308us/step - loss: 0.0406 - val_loss: 0.0441
Epoch 73/200
105/105 [==============================] - 0s 343us/step - loss: 0.0400 - val_loss: 0.0435
Epoch 74/200
105/105 [==============================] - 0s 327us/step - loss: 0.0390 - val_loss: 0.0429
Epoch 75/200
105/105 [==============================] - 0s 354us/step - loss: 0.0386 - val_loss: 0.0422
Epoch 76/200
105/105 [==============================] - 0s 329us/step - loss: 0.0382 - val_loss: 0.0417
Epoch 77/200
105/105 [==============================] - 0s 325us/step - loss: 0.0375 - val_loss: 0.0410
Epoch 78/200
105/105 [==============================] - 0s 321us/step - loss: 0.0370 - val_loss: 0.0404
Epoch 79/200
105/105 [==============================] - 0s 338us/step - loss: 0.0362 - val_loss: 0.0398
Epoch 80/200
105/105 [==============================] - 0s 322us/step - loss: 0.0358 - val_loss: 0.0394
Epoch 81/200
105/105 [==============================] - 0s 327us/step - loss: 0.0354 - val_loss: 0.0386
Epoch 82/200
105/105 [==============================] - 0s 336us/step - loss: 0.0348 - val_loss: 0.0380
Epoch 83/200
105/105 [==============================] - 0s 341us/step - loss: 0.0343 - val_loss: 0.0378
Epoch 84/200
105/105 [==============================] - 0s 325us/step - loss: 0.0338 - val_loss: 0.0369
Epoch 85/200
105/105 [==============================] - 0s 344us/step - loss: 0.0334 - val_loss: 0.0364
Epoch 86/200
105/105 [==============================] - 0s 331us/step - loss: 0.0328 - val_loss: 0.0361
Epoch 87/200
105/105 [==============================] - 0s 347us/step - loss: 0.0323 - val_loss: 0.0356
Epoch 88/200
105/105 [==============================] - 0s 361us/step - loss: 0.0319 - val_loss: 0.0348
Epoch 89/200
105/105 [==============================] - 0s 335us/step - loss: 0.0315 - val_loss: 0.0342
Epoch 90/200
105/105 [==============================] - 0s 353us/step - loss: 0.0309 - val_loss: 0.0337
Epoch 91/200
105/105 [==============================] - 0s 329us/step - loss: 0.0305 - val_loss: 0.0332
Epoch 92/200
105/105 [==============================] - 0s 351us/step - loss: 0.0304 - val_loss: 0.0326
Epoch 93/200
105/105 [==============================] - 0s 313us/step - loss: 0.0294 - val_loss: 0.0322
Epoch 94/200
105/105 [==============================] - 0s 318us/step - loss: 0.0290 - val_loss: 0.0317
Epoch 95/200
105/105 [==============================] - 0s 350us/step - loss: 0.0287 - val_loss: 0.0312
Epoch 96/200
105/105 [==============================] - 0s 445us/step - loss: 0.0282 - val_loss: 0.0307
Epoch 97/200
105/105 [==============================] - 0s 337us/step - loss: 0.0277 - val_loss: 0.0303
Epoch 98/200
105/105 [==============================] - 0s 338us/step - loss: 0.0272 - val_loss: 0.0299
Epoch 99/200
105/105 [==============================] - 0s 334us/step - loss: 0.0270 - val_loss: 0.0293
Epoch 100/200
105/105 [==============================] - 0s 309us/step - loss: 0.0261 - val_loss: 0.0291
Epoch 101/200
105/105 [==============================] - 0s 336us/step - loss: 0.0260 - val_loss: 0.0287
Epoch 102/200
105/105 [==============================] - 0s 339us/step - loss: 0.0258 - val_loss: 0.0280
Epoch 103/200
105/105 [==============================] - 0s 324us/step - loss: 0.0253 - val_loss: 0.0277
Epoch 104/200
105/105 [==============================] - 0s 335us/step - loss: 0.0250 - val_loss: 0.0279
Epoch 105/200
105/105 [==============================] - 0s 387us/step - loss: 0.0247 - val_loss: 0.0268
Epoch 106/200
105/105 [==============================] - 0s 330us/step - loss: 0.0241 - val_loss: 0.0264
Epoch 107/200
105/105 [==============================] - 0s 311us/step - loss: 0.0237 - val_loss: 0.0260
Epoch 108/200
105/105 [==============================] - 0s 299us/step - loss: 0.0235 - val_loss: 0.0256
Epoch 109/200
105/105 [==============================] - 0s 349us/step - loss: 0.0230 - val_loss: 0.0252
Epoch 110/200
105/105 [==============================] - 0s 340us/step - loss: 0.0228 - val_loss: 0.0248
Epoch 111/200
105/105 [==============================] - 0s 326us/step - loss: 0.0224 - val_loss: 0.0244
Epoch 112/200
105/105 [==============================] - 0s 378us/step - loss: 0.0221 - val_loss: 0.0242
Epoch 113/200
105/105 [==============================] - 0s 316us/step - loss: 0.0218 - val_loss: 0.0242
Epoch 114/200
105/105 [==============================] - 0s 316us/step - loss: 0.0214 - val_loss: 0.0235
Epoch 115/200
105/105 [==============================] - 0s 312us/step - loss: 0.0212 - val_loss: 0.0229
Epoch 116/200
105/105 [==============================] - 0s 313us/step - loss: 0.0207 - val_loss: 0.0229
Epoch 117/200
105/105 [==============================] - 0s 304us/step - loss: 0.0204 - val_loss: 0.0222
Epoch 118/200
105/105 [==============================] - 0s 333us/step - loss: 0.0202 - val_loss: 0.0219
Epoch 119/200
105/105 [==============================] - 0s 406us/step - loss: 0.0197 - val_loss: 0.0219
Epoch 120/200
105/105 [==============================] - 0s 416us/step - loss: 0.0197 - val_loss: 0.0213
Epoch 121/200
105/105 [==============================] - 0s 374us/step - loss: 0.0192 - val_loss: 0.0209
Epoch 122/200
105/105 [==============================] - 0s 362us/step - loss: 0.0191 - val_loss: 0.0207
Epoch 123/200
105/105 [==============================] - 0s 338us/step - loss: 0.0189 - val_loss: 0.0203
Epoch 124/200
105/105 [==============================] - 0s 345us/step - loss: 0.0185 - val_loss: 0.0200
Epoch 125/200
105/105 [==============================] - 0s 352us/step - loss: 0.0183 - val_loss: 0.0198
Epoch 126/200
105/105 [==============================] - 0s 360us/step - loss: 0.0178 - val_loss: 0.0194
Epoch 127/200
105/105 [==============================] - 0s 339us/step - loss: 0.0177 - val_loss: 0.0192
Epoch 128/200
105/105 [==============================] - 0s 330us/step - loss: 0.0174 - val_loss: 0.0190
Epoch 129/200
105/105 [==============================] - 0s 333us/step - loss: 0.0171 - val_loss: 0.0186
Epoch 130/200
105/105 [==============================] - 0s 337us/step - loss: 0.0170 - val_loss: 0.0184
Epoch 131/200
105/105 [==============================] - 0s 353us/step - loss: 0.0166 - val_loss: 0.0181
Epoch 132/200
105/105 [==============================] - 0s 349us/step - loss: 0.0165 - val_loss: 0.0178
Epoch 133/200
105/105 [==============================] - 0s 360us/step - loss: 0.0161 - val_loss: 0.0176
Epoch 134/200
105/105 [==============================] - 0s 332us/step - loss: 0.0160 - val_loss: 0.0175
Epoch 135/200
105/105 [==============================] - 0s 307us/step - loss: 0.0158 - val_loss: 0.0171
Epoch 136/200
105/105 [==============================] - 0s 328us/step - loss: 0.0154 - val_loss: 0.0171
Epoch 137/200
105/105 [==============================] - 0s 325us/step - loss: 0.0152 - val_loss: 0.0166
Epoch 138/200
105/105 [==============================] - 0s 357us/step - loss: 0.0151 - val_loss: 0.0165
Epoch 139/200
105/105 [==============================] - 0s 363us/step - loss: 0.0149 - val_loss: 0.0163
Epoch 140/200
105/105 [==============================] - 0s 325us/step - loss: 0.0147 - val_loss: 0.0166
Epoch 141/200
105/105 [==============================] - 0s 336us/step - loss: 0.0146 - val_loss: 0.0168
Epoch 142/200
105/105 [==============================] - 0s 328us/step - loss: 0.0147 - val_loss: 0.0160
Epoch 143/200
105/105 [==============================] - 0s 336us/step - loss: 0.0144 - val_loss: 0.0154
Epoch 144/200
105/105 [==============================] - 0s 339us/step - loss: 0.0140 - val_loss: 0.0152
Epoch 145/200
105/105 [==============================] - 0s 326us/step - loss: 0.0138 - val_loss: 0.0151
Epoch 146/200
105/105 [==============================] - 0s 316us/step - loss: 0.0137 - val_loss: 0.0154
Epoch 147/200
105/105 [==============================] - 0s 318us/step - loss: 0.0136 - val_loss: 0.0148
Epoch 148/200
105/105 [==============================] - 0s 309us/step - loss: 0.0133 - val_loss: 0.0152
Epoch 149/200
105/105 [==============================] - 0s 305us/step - loss: 0.0132 - val_loss: 0.0145
Epoch 150/200
105/105 [==============================] - 0s 304us/step - loss: 0.0130 - val_loss: 0.0145
Epoch 151/200
105/105 [==============================] - 0s 323us/step - loss: 0.0128 - val_loss: 0.0143
Epoch 152/200
105/105 [==============================] - 0s 352us/step - loss: 0.0128 - val_loss: 0.0142
Epoch 153/200
105/105 [==============================] - 0s 307us/step - loss: 0.0125 - val_loss: 0.0136
Epoch 154/200
105/105 [==============================] - 0s 312us/step - loss: 0.0124 - val_loss: 0.0134
Epoch 155/200
105/105 [==============================] - 0s 300us/step - loss: 0.0123 - val_loss: 0.0133
Epoch 156/200
105/105 [==============================] - 0s 314us/step - loss: 0.0122 - val_loss: 0.0133
Epoch 157/200
105/105 [==============================] - 0s 315us/step - loss: 0.0120 - val_loss: 0.0129
Epoch 158/200
105/105 [==============================] - 0s 303us/step - loss: 0.0119 - val_loss: 0.0132
Epoch 159/200
105/105 [==============================] - 0s 313us/step - loss: 0.0118 - val_loss: 0.0127
Epoch 160/200
105/105 [==============================] - 0s 317us/step - loss: 0.0117 - val_loss: 0.0126
Epoch 161/200
105/105 [==============================] - 0s 321us/step - loss: 0.0116 - val_loss: 0.0131
Epoch 162/200
105/105 [==============================] - 0s 302us/step - loss: 0.0115 - val_loss: 0.0127
Epoch 163/200
105/105 [==============================] - 0s 307us/step - loss: 0.0113 - val_loss: 0.0122
Epoch 164/200
105/105 [==============================] - 0s 319us/step - loss: 0.0112 - val_loss: 0.0120
Epoch 165/200
105/105 [==============================] - 0s 311us/step - loss: 0.0111 - val_loss: 0.0120
Epoch 166/200
105/105 [==============================] - 0s 304us/step - loss: 0.0110 - val_loss: 0.0118
Epoch 167/200
105/105 [==============================] - 0s 329us/step - loss: 0.0108 - val_loss: 0.0116
Epoch 168/200
105/105 [==============================] - 0s 305us/step - loss: 0.0108 - val_loss: 0.0116
Epoch 169/200
105/105 [==============================] - 0s 310us/step - loss: 0.0107 - val_loss: 0.0118
Epoch 170/200
105/105 [==============================] - 0s 324us/step - loss: 0.0107 - val_loss: 0.0114
Epoch 171/200
105/105 [==============================] - 0s 308us/step - loss: 0.0106 - val_loss: 0.0112
Epoch 172/200
105/105 [==============================] - 0s 308us/step - loss: 0.0105 - val_loss: 0.0111
Epoch 173/200
105/105 [==============================] - 0s 314us/step - loss: 0.0104 - val_loss: 0.0111
Epoch 174/200
105/105 [==============================] - 0s 309us/step - loss: 0.0103 - val_loss: 0.0111
Epoch 175/200
105/105 [==============================] - 0s 314us/step - loss: 0.0102 - val_loss: 0.0110
Epoch 176/200
105/105 [==============================] - 0s 309us/step - loss: 0.0102 - val_loss: 0.0109
Epoch 177/200
105/105 [==============================] - 0s 313us/step - loss: 0.0101 - val_loss: 0.0108
Epoch 178/200
105/105 [==============================] - 0s 314us/step - loss: 0.0100 - val_loss: 0.0112
Epoch 179/200
105/105 [==============================] - 0s 302us/step - loss: 0.0100 - val_loss: 0.0107
Epoch 180/200
105/105 [==============================] - 0s 316us/step - loss: 0.0098 - val_loss: 0.0104
Epoch 181/200
105/105 [==============================] - 0s 315us/step - loss: 0.0098 - val_loss: 0.0107
Epoch 182/200
105/105 [==============================] - 0s 310us/step - loss: 0.0097 - val_loss: 0.0103
Epoch 183/200
105/105 [==============================] - 0s 317us/step - loss: 0.0096 - val_loss: 0.0104
Epoch 184/200
105/105 [==============================] - 0s 331us/step - loss: 0.0095 - val_loss: 0.0101
Epoch 185/200
105/105 [==============================] - 0s 299us/step - loss: 0.0094 - val_loss: 0.0104
Epoch 186/200
105/105 [==============================] - 0s 301us/step - loss: 0.0094 - val_loss: 0.0100
Epoch 187/200
105/105 [==============================] - 0s 328us/step - loss: 0.0094 - val_loss: 0.0102
Epoch 188/200
105/105 [==============================] - 0s 306us/step - loss: 0.0093 - val_loss: 0.0100
Epoch 189/200
105/105 [==============================] - 0s 302us/step - loss: 0.0093 - val_loss: 0.0099
Epoch 190/200
105/105 [==============================] - 0s 322us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 191/200
105/105 [==============================] - 0s 315us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 192/200
105/105 [==============================] - 0s 303us/step - loss: 0.0092 - val_loss: 0.0097
Epoch 193/200
105/105 [==============================] - 0s 307us/step - loss: 0.0091 - val_loss: 0.0098
Epoch 194/200
105/105 [==============================] - 0s 352us/step - loss: 0.0090 - val_loss: 0.0096
Epoch 195/200
105/105 [==============================] - 0s 313us/step - loss: 0.0089 - val_loss: 0.0100
Epoch 196/200
105/105 [==============================] - 0s 359us/step - loss: 0.0090 - val_loss: 0.0103
Epoch 197/200
105/105 [==============================] - 0s 341us/step - loss: 0.0089 - val_loss: 0.0096
Epoch 198/200
105/105 [==============================] - 0s 322us/step - loss: 0.0088 - val_loss: 0.0094
Epoch 199/200
105/105 [==============================] - 0s 318us/step - loss: 0.0088 - val_loss: 0.0093
Epoch 200/200
105/105 [==============================] - 0s 295us/step - loss: 0.0088 - val_loss: 0.0092

From the output above we can see that we have trained the network down to a test-set MSE below 0.01. Note that the log reports the error on both the training samples (loss) and the test samples (val_loss). It is normal for the training error to be somewhat lower than the test error, since the model is fit on the training samples; but if the training error is much smaller than the test error, that is a sign of overfitting.
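The history object returned by fit records the loss after every epoch, so we can also plot the training and validation curves (a small sketch using the matplotlib import from the top of the chapter) to inspect this visually:

# plot training vs. validation loss per epoch; a large gap would indicate overfitting
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE')
plt.legend()
plt.show()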
We can use evaluate to measure the model's loss on the test set:

score = model.evaluate(x_test, y_test)
print('Test loss:', score)
45/45 [==============================] - 0s 97us/step
Test loss: 0.00922720053543647

To get the raw predictions:

y_pred = model.predict(x_test)

for yp, ya in list(zip(y_pred, y_test))[0:10]:
    print("predicted %0.2f, actual %0.2f" % (yp, ya))
predicted 0.72, actual 0.96
predicted 0.53, actual 0.52
predicted 0.87, actual 0.84
predicted 0.72, actual 0.72
predicted 0.16, actual 0.08
predicted 0.13, actual 0.08
predicted 0.13, actual 0.08
predicted 0.15, actual 0.08
predicted 0.62, actual 0.60
predicted 0.54, actual 0.52

We can also compute the MSE by hand:

def MSE(y_pred, y_test):
    return (1.0/len(y_test)) * np.sum([((y1[0]-y2)**2) for y1, y2 in list(zip(y_pred, y_test))])

print("MSE is %0.4f" % MSE(y_pred, y_test))
MSE is 0.0092

We can also predict the output for a single sample, like this:

x_sample = x_test[0].reshape(1, 3)   # shape must be (num_samples, 3), even if num_samples = 1
y_prob = model.predict(x_sample)

print("predicted %0.3f, actual %0.3f" % (y_prob[0][0], y_test[0]))
predicted 0.723, actual 0.960

So far we have only shown how to use Keras for a regression problem. The real power of the library will become apparent when we get to classification, convolutional neural networks, and the many other refinements covered in the chapters ahead. Stay tuned!
