Predict LeBron James’s Game Results with RNN



The Modeling Results

To start with, I build an ANN with densely connected layers as a baseline model against which to compare the other models.

from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop

# Baseline: flatten the lookback window and feed it through dense layers
model_ann = Sequential()
model_ann.add(layers.Flatten(input_shape=(lookback, data_u.shape[-1])))
model_ann.add(layers.Dense(32, activation='relu'))
model_ann.add(layers.Dropout(0.3))
model_ann.add(layers.Dense(1, activation='sigmoid'))
model_ann.summary()
ANN model structure

Then, I compile the model and record the training history.

model_ann.compile(optimizer=RMSprop(lr=1e-2),
                  loss='binary_crossentropy',
                  metrics=['acc'])
history = model_ann.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps)
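
The fit call above relies on the sliding-window generators (train_generator, val_generator) and constants (lookback, steps_per_epc, val_steps) built in the data-preparation part of this article. For readers jumping in here, below is a minimal sketch of the kind of generator involved; the function name and window settings are illustrative, not the article's original code.

import numpy as np

def make_generator(data, targets, lookback, batch_size, min_index, max_index):
    # Yield (samples, labels): each sample is the `lookback` games
    # preceding the game whose result is being predicted.
    i = min_index + lookback
    while True:
        if i + batch_size > max_index:
            i = min_index + lookback  # wrap around for endless iteration
        rows = np.arange(i, min(i + batch_size, max_index))
        i += len(rows)
        samples = np.zeros((len(rows), lookback, data.shape[-1]))
        labels = np.zeros((len(rows),))
        for j, row in enumerate(rows):
            samples[j] = data[row - lookback:row]
            labels[j] = targets[row]
        yield samples, labels

# Illustrative usage (index boundaries are placeholders):
# train_generator = make_generator(data_u, results, lookback, 16, 0, 800)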

To check the performance on the validation dataset, I plot the loss curve.

import matplotlib.pyplot as plt

history_dic = history.history
loss_ = history_dic['loss']
val_loss_ = history_dic['val_loss']
epochs = range(1, 21)
plt.plot(epochs, loss_, 'bo', label='training loss')
plt.plot(epochs, val_loss_, 'r', label='validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
training and validation ANN loss

As expected, the model starts overfitting after several epochs. To evaluate the model objectively, I apply it to the test set and get an accuracy of 60%.

scores = model_ann.evaluate_generator(test_generator, test_steps)
print("Accuracy = ", scores[1], " Loss = ", scores[0])
ANN test set performance

Next, I implement an RNN using one LSTM layer followed by two densely connected layers.

model_rnn = Sequential()
model_rnn.add(layers.LSTM(32,
                          dropout=0.2,
                          recurrent_dropout=0.2,
                          input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
One layer of LSTM structure

The model training is similar to that of the ANN above.

model_rnn.compile(optimizer=RMSprop(lr=1e-2),
                  loss='binary_crossentropy',
                  metrics=['acc'])
history = model_rnn.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps)

The training and validation set performance is as below.

1 layer LSTM loss

The overfitting is not as severe as that of the ANN. I also evaluate the model on the test data, which yields an accuracy of 62.5%. Even though this beats the densely connected ANN, the improvement is tiny.
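The evaluation call mirrors the ANN case above, assuming the same test_generator and test_steps from the data-preparation section:

scores = model_rnn.evaluate_generator(test_generator, test_steps)
print("Accuracy = ", scores[1], " Loss = ", scores[0])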

To gain better performance, I try to increase the complexity of the model by adding one more recurrent layer. However, to reduce the computational cost, I replace the LSTM layers with Gated Recurrent Unit (GRU) layers. The model is shown below.

model_rnn = Sequential()
model_rnn.add(layers.GRU(32,
                         dropout=0.2,
                         recurrent_dropout=0.2,
                         return_sequences=True,
                         input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.GRU(64, activation='relu', dropout=0.2, recurrent_dropout=0.2))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
2 layers of GRU structure
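
The compile and fit steps are presumably unchanged from the earlier models; repeating them here for completeness:

model_rnn.compile(optimizer=RMSprop(lr=1e-2),
                  loss='binary_crossentropy',
                  metrics=['acc'])
history = model_rnn.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps)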

The training and validation set performance is as below.

2 layers of GRU loss

No serious overfitting is visible in the plot. Even though the accuracy on the test data has increased to 64%, the improvement is still small. I begin to doubt whether an RNN can do the job.

However, I give it one last try by further increasing the complexity of the model. Specifically, I make the recurrent layers bidirectional.

model_rnn = Sequential()
model_rnn.add(layers.Bidirectional(layers.GRU(32,
                                              dropout=0.2,
                                              recurrent_dropout=0.2,
                                              return_sequences=True),
                                   input_shape=(None, data_u.shape[-1])))
model_rnn.add(layers.Bidirectional(layers.GRU(64, activation='relu', dropout=0.2, recurrent_dropout=0.2)))
model_rnn.add(layers.Dense(32, activation='relu'))
model_rnn.add(layers.Dropout(0.3))
model_rnn.add(layers.Dense(1, activation='sigmoid'))
model_rnn.summary()
Bidirectional RNN structure

This time, the training and validation set performance is as below.

bidirectional loss

Before the model starts overfitting, its validation loss differs little from the previous model's. The accuracy on the test set is 64% as well.
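Since every model here eventually overfits, one natural refinement (not part of the original experiment) is to halt training once the validation loss stops improving. A minimal sketch, assuming a Keras version recent enough to support restore_best_weights; the patience value is illustrative:

from keras.callbacks import EarlyStopping

# Stop when validation loss has not improved for 3 epochs,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=3,
                           restore_best_weights=True)
history = model_rnn.fit_generator(train_generator,
                                  steps_per_epoch=steps_per_epc,
                                  epochs=20,
                                  validation_data=val_generator,
                                  validation_steps=val_steps,
                                  callbacks=[early_stop])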

Having explored all the models above, I come to realize that an RNN may not be a good fit for the NBA game result prediction problem. There are indeed dozens of hyperparameters that could still be tuned; the difference between the ANN and the RNNs, however, is too small to be encouraging.

