神經(jīng)結(jié)構(gòu)進(jìn)步、GPU深度學(xué)習(xí)訓(xùn)練效率突破。RNN,時(shí)間序列數(shù)據(jù)有效,每個(gè)神經(jīng)元通過內(nèi)部組件保存輸入信息。
卷積神經(jīng)網(wǎng)絡(luò),圖像分類,無法對(duì)視頻每幀圖像發(fā)生事情關(guān)聯(lián)分析,無法利用前幀圖像信息。RNN最大特點(diǎn),神經(jīng)元某些輸出作為輸入再次傳輸?shù)缴窠?jīng)元,可以利用之前信息。
Here x_t is the RNN input, A is the RNN node, and h_t is the output. Given input x_t, the network computes the output h_t, and part of the information (the state) is passed back to the network's input. The output h_t is compared with the label to obtain an error, and the network is trained with Gradient Descent and Back-Propagation Through Time (BPTT). BPTT uses back-propagation to compute the gradients and update the network weights. Real-Time Recurrent Learning (RTRL) computes the gradients in the forward direction, but its computational complexity is high. Hybrid methods between BPTT and RTRL can mitigate the vanishing-gradient problem caused by overly long time intervals.
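For reference, one common formulation of the basic RNN step described above (a generic sketch, not the exact cell used in the LSTM code later) is:
$$h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b_h), \qquad \hat{y}_t = \mathrm{softmax}(W_{hy} h_t + b_y)$$
BPTT simply applies the chain rule to this recurrence, unrolled over the time steps.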
RNN循環(huán)展開串聯(lián)結(jié)構(gòu),類似系列輸入x和系列輸出串聯(lián)普通神經(jīng)網(wǎng)絡(luò),上層神經(jīng)網(wǎng)絡(luò)傳遞信息給下層。適合時(shí)間序列數(shù)據(jù)處理分析。展開每層級(jí)神經(jīng)網(wǎng)絡(luò),參數(shù)相同,只需要訓(xùn)練一層RNN參數(shù)。共享參數(shù)思想與卷積神經(jīng)網(wǎng)絡(luò)權(quán)值共享類似。
RNN處理整個(gè)時(shí)間序列信息,記憶最深是最后輸入信號(hào)。前信號(hào)強(qiáng)度越來越低。Long Sort Term Memory(LSTM)突破,語音識(shí)別、文本分類、語言模型、自動(dòng)對(duì)話、機(jī)器翻譯、圖像標(biāo)注領(lǐng)域。
長程依賴(Long-term Dependencies),傳統(tǒng)RNN關(guān)鍵缺陷。LSTM,Schmidhuber教授1997年提出,解決長程依賴,不需要特別復(fù)雜調(diào)試超參數(shù),默認(rèn)記住長期信息。
LSTM內(nèi)部結(jié)構(gòu),4層神經(jīng)網(wǎng)絡(luò),小圓圈是point-wise操作(向量加法、點(diǎn)乘等),小矩形是一層可學(xué)習(xí)參數(shù)神經(jīng)網(wǎng)絡(luò)。LSTM單元上直線代表LSTM狀態(tài)state,貫穿所有串聯(lián)LSTM單元,從第一個(gè)流向最后一個(gè),只有少量線性干預(yù)和改變。狀態(tài)state傳遞,LSTM單兇添加或刪減信息,LSTM Gates控制信息流修改操作。Gates包含Sigmoid層和向量點(diǎn)乘操作。Sigmoid層輸出0到1間值,直接控制信息傳遞比例。0不允許信息傳遞,1讓信息全部通過。每個(gè)LSTM單元3個(gè)Gates,維護(hù)控制單元狀態(tài)信息。狀態(tài)信息儲(chǔ)存、修改,LSTM單元實(shí)現(xiàn)長程記憶。
RNN變種,LSTM,Gated Recurrent Unit(GRU)。GRU結(jié)構(gòu),比LSTM少一個(gè)Gate。計(jì)算效率更高(每個(gè)單元計(jì)算節(jié)約幾個(gè)矩陣運(yùn)算),占用內(nèi)存少。GRU收斂所需迭代更少,訓(xùn)練速度更快。
循環(huán)神經(jīng)網(wǎng)絡(luò),自然語言處理,語言模型。語言模型,預(yù)測語句概率模型,給定上下文語境,歷史出現(xiàn)單詞,預(yù)測下一個(gè)單詞出現(xiàn)概率,NLP、語音識(shí)別、機(jī)器翻譯、圖片標(biāo)注任務(wù)基礎(chǔ)關(guān)鍵。Penn Tree Bank(PTB)常用數(shù)據(jù)集,質(zhì)量高,不大,訓(xùn)練快。《Recurrent Neural Network Regularization》。
下載PTB數(shù)據(jù)集,解壓。確保解壓文件路徑和Python執(zhí)行路徑一致。1萬個(gè)不同單詞,有句尾標(biāo)記,罕見詞匯統(tǒng)一處理為特殊字符。wget http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examplex.tgz 。tar xvf simple-examples.tgz 。
下載TensorFlow Models庫(git clone https://github.com/tensorflow/models.git),進(jìn)入目錄models/tutorials/rnn/ptb(cd)。載入常用庫,TensorFlow Models PTB reader,讀取數(shù)據(jù)內(nèi)容。單詞轉(zhuǎn)唯一數(shù)字編碼。
定義語言模型處理輸入數(shù)據(jù)class,PTBInput。初始化方法init(),讀取參數(shù)config的batch_size、num_steps到本地變量。num_steps,LSTM展開步數(shù)(unrolled steps of LSTM)。計(jì)算epoth size ,epoch內(nèi)訓(xùn)練迭代輪數(shù),數(shù)據(jù)長度整除batch_size、num_steps。reader.ptb_producer獲取特征數(shù)據(jù)input_data、label數(shù)據(jù)targets。每次執(zhí)行獲取一個(gè)batch數(shù)據(jù)。
定義語言模型class,PTBModel。初始化函數(shù)init(),參數(shù),訓(xùn)練標(biāo)記is_training、配置參數(shù)config、PTBInput類實(shí)例input_。讀取input_的batch_size、num_steps,讀取config的hidden_size(LSTM節(jié)點(diǎn)數(shù))、vocab_size(詞匯表大小)到本地變量。
tf.contrib.rnn.BasicLSTMCell設(shè)置默認(rèn)LSTM單元,隱含節(jié)點(diǎn)數(shù)hidden_size、gorget_bias(forget gate bias) 0,state_is_tuple True,接受返回state是2-tuple形式。訓(xùn)練狀態(tài)且Dropout keep_prob小于1,1stm_cell接Dropout層,tf.contrib.rnn.DropoutWrapper函數(shù)。RNN堆疊函數(shù) tf.contrib.rnn.MultiRNNCell 1stm_cell多層堆疊到cell,堆疊次數(shù) config num_layers,state_is_truple設(shè)True,cell.zero_state設(shè)LSTM單元初始化狀態(tài)0。LSTM單元讀放單詞,結(jié)合儲(chǔ)存狀態(tài)state計(jì)算下一單詞出現(xiàn)概率分布,每次讀取單詞,狀態(tài)state更新。
創(chuàng)建網(wǎng)絡(luò)詞嵌入embedding,將one-hot編碼格式單詞轉(zhuǎn)向量表達(dá)形式。with tf.device("/cpu:0") 計(jì)算限定CPU進(jìn)行。初始化embedding矩陣,行數(shù)設(shè)詞匯表數(shù)vocab_size,列數(shù)(單詞向量表達(dá)維數(shù))hidden_size,和LST單元陷含節(jié)點(diǎn)數(shù)一致。訓(xùn)練過程,embedding參數(shù)優(yōu)化更新。tf.nn.embedding_lookup查詢單對(duì)應(yīng)向量表達(dá)獲得inputs。訓(xùn)練狀態(tài)加一層Dropout。
定義輸出outputs,tf.variable_scope設(shè)名RNN。控制訓(xùn)練過程,限制梯度反向傳播展開步數(shù)固定值,num_steps.設(shè)置循環(huán)長度 num-steps,控制梯度傳播。從第2次循環(huán),tf.get_varible_scope.reuse_variables設(shè)置復(fù)用變量。每次循環(huán),傳入inputs、state到堆疊LSTM單元(cell)。inputs 3維度,第1維 batch第幾個(gè)樣本,第2維 樣本第幾個(gè)單詞,第3維 單詞向量表達(dá)維度。inputs[:,time_step,:] 所有樣本第time_step個(gè)單詞。輸出cell_output和更新state。 結(jié)果cell_output添加輸出列表outputs。
tf.concat串接output內(nèi)容,tf.reshape轉(zhuǎn)長一維向量。Softmax層,定義權(quán)重softmax_w、偏置softmax_b。tf.matmul 輸出output乘權(quán)重加偏置得網(wǎng)絡(luò)最后輸出logits。定久損失loss,tf.contrib.legacy_seq2seq.sequence_loss_by_example計(jì)算輸出logits和targets偏差。sequence_loss,target words average negative log probability,定義loss=1/N add i=1toN ln Ptargeti。tf.reduce_sum匯總batch誤差,計(jì)算平均樣本誤差cost。保留最終狀態(tài)final_state。不是訓(xùn)練狀態(tài)直接返回。
定義學(xué)習(xí)速率變量lr,設(shè)不可訓(xùn)練。tf.trainable_variables獲取全部可訓(xùn)練參數(shù)tvars。針對(duì)cost,計(jì)算tvars梯度,tf.clip_by_global_norm設(shè)梯度最大范數(shù),起正則化效果。Gradient Clipping防止Gradient Explosion梯度爆炸問題。不限制梯度,迭代梯度過大,訓(xùn)練難收斂。定義優(yōu)化器Gradient Descent。創(chuàng)建訓(xùn)練操作_train_op,optimizer.apply_gradients,clip過梯度用到所有可訓(xùn)練參數(shù)tvars,tf.contrib.framework.get_or_create_global_step生成全局統(tǒng)一訓(xùn)練步數(shù)。
設(shè)置_new_lr(new learning rate) placeholder控制學(xué)習(xí)速率。定義操作_lr_update,tf.assign 賦_new_lr值給當(dāng)前學(xué)習(xí)速率_lr。定義assign_lr函數(shù),外部控制模型學(xué)習(xí)速率,學(xué)習(xí)速率值傳入_new_lr placeholder,執(zhí)行_update_lr操作修改學(xué)習(xí)速率。
定義PTBModel class property。Python @property裝飾器,返回變量設(shè)只讀,防止修改變量引發(fā)問題。input、initial_state、cost、final_state、lr、train_op。
定義模型設(shè)置。init_scale,網(wǎng)絡(luò)權(quán)重初始scale。learning_rate,學(xué)習(xí)速率初始值。max_grad_norm,梯度最大范數(shù)。num_lyers,LSTM堆疊層數(shù)。num_steps,LSTM梯度反向傳播展開步數(shù)。hidden_size,LSTM內(nèi)隱含節(jié)點(diǎn)數(shù)。max_epoch,初始學(xué)習(xí)速率可訓(xùn)練epoch數(shù),需要調(diào)整學(xué)習(xí)速率。max_max_epoch,總共可訓(xùn)練epoch數(shù)。keep_prob,dropout層保留節(jié)點(diǎn)比例。lr_decay學(xué)習(xí)速率衰減速度。batch_size,每個(gè)batch樣本數(shù)量。
MediumConfig中型模型,減小init_scale,希望權(quán)重初值不要過大,小有利溫和訓(xùn)練。學(xué)習(xí)速率、最大梯度范數(shù)不變,LSTM層數(shù)不變。梯度反向傳播展開步數(shù)num_steps從20增大到35。hidden_size、max_max_epoch增大3倍。設(shè)置dropout keep_prob 0.5。學(xué)習(xí)迭代次數(shù)增大,學(xué)習(xí)速率衰減速率lr_decay減小。batch_size、詞匯表vocab_size不變。
LargeConfig大型模型,進(jìn)一步縮小init_scale。放寬最大梯度范數(shù)max_grad_norm到10。hidden_size提升到1500。max_epoch、max_max_epoch增大。keep_prob因模型復(fù)雜度上升繼續(xù)下降。學(xué)習(xí)速率衰減速率lr_decay進(jìn)一步減小。
TestConfig測試用。參數(shù)盡量最小值。
定義訓(xùn)練epoch數(shù)據(jù)函數(shù)run_epoch。記錄當(dāng)前時(shí)間,初始化損失costs、迭代數(shù)據(jù)iters,執(zhí)行model.initial_state初始化狀態(tài),獲得初始狀態(tài)。創(chuàng)建輸出結(jié)果字典表fetches,包括cost、final_state。如果有評(píng)測操作,也加入fetches。訓(xùn)練循環(huán),次數(shù)epoch_size。循環(huán),生成訓(xùn)練feed_dict,全部LSTM單元state加入feed_dict,傳入feed_dict,執(zhí)行fetches訓(xùn)練網(wǎng)絡(luò),拿到cost、state。累加cost到costs,累加num_steps到iters。每完成10%epoch,展示結(jié)果,當(dāng)前epoch進(jìn)度,perplexity(平均cost自然常數(shù)指數(shù),語言模型比較性能重要指標(biāo),越低模型輸出概率分布在預(yù)測樣本越好),訓(xùn)練速度(單詞數(shù)每秒)。返回perplexity函數(shù)結(jié)果。
reader.ptb_raw_data reads the unpacked data and returns the training, validation, and test data. Use SmallConfig as the training configuration. The evaluation configuration eval_config must match the training configuration, except that its batch_size and num_steps are set to 1.
Create the default Graph and use tf.random_uniform_initializer to set the parameter initializer, drawing parameters from the range [-init_scale, init_scale]. Use PTBInput and PTBModel to create the training model m, the validation model mvalid, and the test model mtest. The training and validation models use config; the test model uses eval_config.
tf.train.Supervisor() creates the training supervisor sv, and sv.managed_session creates the default session in which the loop over the training epochs runs. In each epoch, compute the accumulated learning-rate decay: only the epochs beyond max_epoch count, so lr_decay is raised to the power of the number of excess epochs. The initial learning rate is multiplied by this accumulated decay to update the learning rate. Inside the loop, run one epoch of training and one of validation, printing the current learning rate and the perplexities on the training and validation sets. After all training completes, compute and print the model's perplexity on the test set.
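In other words, the learning rate used in epoch i (counting from 1) is
$$\text{lr}_i = \text{learning\_rate} \times \text{lr\_decay}^{\max(i - \text{max\_epoch},\, 0)}$$
so with SmallConfig (learning_rate = 1.0, lr_decay = 0.5, max_epoch = 4), epochs 1 to 4 train at 1.0, epoch 5 at 0.5, epoch 6 at 0.25, and so on.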
With SmallConfig, on an i7 6900K with a GTX 1080, training runs at about 21,000 words per second; at the last epoch the perplexity is 36.9 on the training set, 122.3 on the validation set, and 116.7 on the test set.
The medium model reaches 48.45 on the training set, 86.16 on the validation set, and 82.07 on the test set. The large model reaches 37.87 on the training set, 82.62 on the validation set, and 78.29 on the test set.
The LSTM stores state and relies on that state to process, analyze, and predict from the current input. RNNs and LSTMs give neural networks the ability to remember and store past information, imitating simple human memory and reasoning. Attention mechanisms are a research hotspot in RNNs and NLP, letting machines model the workings of the brain more closely. In image caption generation, an attention-based RNN analyzes regions of an image and generates the corresponding textual description; see《Show, Attend and Tell: Neural Image Caption Generation with Visual Attention》.
import time
import numpy as np
import tensorflow as tf
import reader
#flags = tf.flags
#logging = tf.logging
#flags.DEFINE_string("save_path", None,
# "Model output directory.")
#flags.DEFINE_bool("use_fp16", False,
# "Train using 16-bit floats instead of 32bit floats")
#FLAGS = flags.FLAGS
#def data_type():
# return tf.float16 if FLAGS.use_fp16 else tf.float32
class PTBInput(object):
"""The input data."""
def __init__(self, config, data, name=None):
self.batch_size = batch_size = config.batch_size
self.num_steps = num_steps = config.num_steps
self.epoch_size = ((len(data) // batch_size) - 1) // num_steps
self.input_data, self.targets = reader.ptb_producer(
data, batch_size, num_steps, name=name)
class PTBModel(object):
"""The PTB model."""
def __init__(self, is_training, config, input_):
self._input = input_
batch_size = input_.batch_size
num_steps = input_.num_steps
size = config.hidden_size
vocab_size = config.vocab_size
# Slightly better results can be obtained with forget gate biases
# initialized to 1 but the hyperparameters of the model would need to be
# different than reported in the paper.
def lstm_cell():
return tf.contrib.rnn.BasicLSTMCell(
size, forget_bias=0.0, state_is_tuple=True)
attn_cell = lstm_cell
if is_training and config.keep_prob < 1:
def attn_cell():
return tf.contrib.rnn.DropoutWrapper(
lstm_cell(), output_keep_prob=config.keep_prob)
cell = tf.contrib.rnn.MultiRNNCell(
[attn_cell() for _ in range(config.num_layers)], state_is_tuple=True)
self._initial_state = cell.zero_state(batch_size, tf.float32)
with tf.device("/cpu:0"):
embedding = tf.get_variable(
"embedding", [vocab_size, size], dtype=tf.float32)
inputs = tf.nn.embedding_lookup(embedding, input_.input_data)
if is_training and config.keep_prob < 1:
inputs = tf.nn.dropout(inputs, config.keep_prob)
# Simplified version of models/tutorials/rnn/rnn.py's rnn().
# This builds an unrolled LSTM for tutorial purposes only.
# In general, use the rnn() or state_saving_rnn() from rnn.py.
#
# The alternative version of the code below is:
#
# inputs = tf.unstack(inputs, num=num_steps, axis=1)
# outputs, state = tf.nn.rnn(cell, inputs,
# initial_state=self._initial_state)
outputs = []
state = self._initial_state
with tf.variable_scope("RNN"):
for time_step in range(num_steps):
if time_step > 0: tf.get_variable_scope().reuse_variables()
(cell_output, state) = cell(inputs[:, time_step, :], state)
outputs.append(cell_output)
output = tf.reshape(tf.concat(outputs, 1), [-1, size])
softmax_w = tf.get_variable(
"softmax_w", [size, vocab_size], dtype=tf.float32)
softmax_b = tf.get_variable("softmax_b", [vocab_size], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
[logits],
[tf.reshape(input_.targets, [-1])],
[tf.ones([batch_size * num_steps], dtype=tf.float32)])
self._cost = cost = tf.reduce_sum(loss) / batch_size
self._final_state = state
if not is_training:
return
self._lr = tf.Variable(0.0, trainable=False)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
config.max_grad_norm)
optimizer = tf.train.GradientDescentOptimizer(self._lr)
self._train_op = optimizer.apply_gradients(
zip(grads, tvars),
global_step=tf.contrib.framework.get_or_create_global_step())
self._new_lr = tf.placeholder(
tf.float32, shape=[], name="new_learning_rate")
self._lr_update = tf.assign(self._lr, self._new_lr)
def assign_lr(self, session, lr_value):
session.run(self._lr_update, feed_dict={self._new_lr: lr_value})
@property
def input(self):
return self._input
@property
def initial_state(self):
return self._initial_state
@property
def cost(self):
return self._cost
@property
def final_state(self):
return self._final_state
@property
def lr(self):
return self._lr
@property
def train_op(self):
return self._train_op
class SmallConfig(object):
"""Small config."""
init_scale = 0.1
learning_rate = 1.0
max_grad_norm = 5
num_layers = 2
num_steps = 20
hidden_size = 200
max_epoch = 4
max_max_epoch = 13
keep_prob = 1.0
lr_decay = 0.5
batch_size = 20
vocab_size = 10000
class MediumConfig(object):
"""Medium config."""
init_scale = 0.05
learning_rate = 1.0
max_grad_norm = 5
num_layers = 2
num_steps = 35
hidden_size = 650
max_epoch = 6
max_max_epoch = 39
keep_prob = 0.5
lr_decay = 0.8
batch_size = 20
vocab_size = 10000
class LargeConfig(object):
"""Large config."""
init_scale = 0.04
learning_rate = 1.0
max_grad_norm = 10
num_layers = 2
num_steps = 35
hidden_size = 1500
max_epoch = 14
max_max_epoch = 55
keep_prob = 0.35
lr_decay = 1 / 1.15
batch_size = 20
vocab_size = 10000
class TestConfig(object):
"""Tiny config, for testing."""
init_scale = 0.1
learning_rate = 1.0
max_grad_norm = 1
num_layers = 1
num_steps = 2
hidden_size = 2
max_epoch = 1
max_max_epoch = 1
keep_prob = 1.0
lr_decay = 0.5
batch_size = 20
vocab_size = 10000
def run_epoch(session, model, eval_op=None, verbose=False):
"""Runs the model on the given data."""
start_time = time.time()
costs = 0.0
iters = 0
state = session.run(model.initial_state)
fetches = {
"cost": model.cost,
"final_state": model.final_state,
}
if eval_op is not None:
fetches["eval_op"] = eval_op
for step in range(model.input.epoch_size):
feed_dict = {}
for i, (c, h) in enumerate(model.initial_state):
feed_dict[c] = state[i].c
feed_dict[h] = state[i].h
vals = session.run(fetches, feed_dict)
cost = vals["cost"]
state = vals["final_state"]
costs += cost
iters += model.input.num_steps
if verbose and step % (model.input.epoch_size // 10) == 10:
print("%.3f perplexity: %.3f speed: %.0f wps" %
(step * 1.0 / model.input.epoch_size, np.exp(costs / iters),
iters * model.input.batch_size / (time.time() - start_time)))
return np.exp(costs / iters)
raw_data = reader.ptb_raw_data('simple-examples/data/')
train_data, valid_data, test_data, _ = raw_data
config = SmallConfig()
eval_config = SmallConfig()
eval_config.batch_size = 1
eval_config.num_steps = 1
with tf.Graph().as_default():
initializer = tf.random_uniform_initializer(-config.init_scale,
config.init_scale)
with tf.name_scope("Train"):
train_input = PTBInput(config=config, data=train_data, name="TrainInput")
with tf.variable_scope("Model", reuse=None, initializer=initializer):
m = PTBModel(is_training=True, config=config, input_=train_input)
#tf.scalar_summary("Training Loss", m.cost)
#tf.scalar_summary("Learning Rate", m.lr)
with tf.name_scope("Valid"):
valid_input = PTBInput(config=config, data=valid_data, name="ValidInput")
with tf.variable_scope("Model", reuse=True, initializer=initializer):
mvalid = PTBModel(is_training=False, config=config, input_=valid_input)
#tf.scalar_summary("Validation Loss", mvalid.cost)
with tf.name_scope("Test"):
test_input = PTBInput(config=eval_config, data=test_data, name="TestInput")
with tf.variable_scope("Model", reuse=True, initializer=initializer):
mtest = PTBModel(is_training=False, config=eval_config,
input_=test_input)
sv = tf.train.Supervisor()
with sv.managed_session() as session:
for i in range(config.max_max_epoch):
lr_decay = config.lr_decay ** max(i + 1 - config.max_epoch, 0.0)
m.assign_lr(session, config.learning_rate * lr_decay)
print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(m.lr)))
train_perplexity = run_epoch(session, m, eval_op=m.train_op,
verbose=True)
print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
valid_perplexity = run_epoch(session, mvalid)
print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
test_perplexity = run_epoch(session, mtest)
print("Test Perplexity: %.3f" % test_perplexity)
# if FLAGS.save_path:
# print("Saving model to %s." % FLAGS.save_path)
# sv.saver.save(session, FLAGS.save_path, global_step=sv.global_step)
#if __name__ == "__main__":
# tf.app.run()
參考資料:
《TensorFlow實(shí)戰(zhàn)》
歡迎付費(fèi)咨詢(150元每小時(shí)),我的微信:qingxingfengzi