[AI] 05 Multivariate Time Series Regression (Final)
So far, our modeling efforts have been limited to individual time series. RNNs are a natural fit for multivariate time series and offer a nonlinear alternative to the vector autoregressive (VAR) models introduced in the chapter on time series models.

Import the required packages
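The original import cell is not shown; a minimal set covering the steps below might look like this (the exact library choices are assumptions based on the Keras/sklearn workflow described):

```python
import numpy as np
import pandas as pd

# Scaling and evaluation
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error

# Model building and training
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
```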
Load the data

For comparison, we use the same dataset as in the VAR example: monthly data on consumer sentiment and industrial production from the Federal Reserve's FRED service.
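A sketch of this step; the FRED series codes (UMCSENT, INDPRO) and the use of pandas_datareader are assumptions, since the original code is not reproduced here, and a synthetic stand-in with the same two-column monthly shape is generated so the snippet runs offline:

```python
import numpy as np
import pandas as pd

# With pandas_datareader, the download might look like (codes are assumptions):
#   import pandas_datareader.data as web
#   df = web.DataReader(['UMCSENT', 'INDPRO'], 'fred', start='1980').dropna()
# Synthetic stand-in: 480 months of a sentiment-like and a production-like series.
rng = np.random.default_rng(0)
idx = pd.date_range('1980-01', periods=480, freq='MS')
df = pd.DataFrame({'sentiment': 85 + rng.normal(0, 1, 480).cumsum(),
                   'ip': np.exp(np.linspace(3.9, 4.6, 480))},
                  index=idx)
print(df.shape)  # (480, 2)
```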
Prepare the data

Stationarity: we apply a log transformation up front to satisfy the stationarity requirement used in Chapter 8, Time Series Models:
Scaling: we then scale the transformed data to the [0, 1] interval:
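The two preparation steps can be sketched as follows; taking the log of the trending production series and 12-month differences of both series is an assumption about the exact recipe (consistent with the chapter on time series models), while MinMaxScaler handles the [0, 1] scaling:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Stand-in for the raw FRED data (see the loading step).
rng = np.random.default_rng(0)
idx = pd.date_range('1980-01', periods=480, freq='MS')
df = pd.DataFrame({'sentiment': 85 + rng.normal(0, 1, 480).cumsum(),
                   'ip': np.exp(np.linspace(3.9, 4.6, 480))}, index=idx)

# Stationarity: log-transform production, then take 12-month differences.
transformed = pd.DataFrame({'ip': np.log(df.ip).diff(12),
                            'sentiment': df.sentiment.diff(12)}).dropna()

# Scaling: map each column onto the [0, 1] interval.
scaler = MinMaxScaler()
scaled = pd.DataFrame(scaler.fit_transform(transformed),
                      index=transformed.index,
                      columns=transformed.columns)
print(scaled.shape)  # (468, 2)
```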
Plot the original and transformed series
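A plotting sketch using matplotlib with a non-interactive backend; the variable names (`df` for the raw series, `transformed` for the differenced ones) follow the preparation step and stand-in data is used so the snippet is self-contained:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted use
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Stand-ins for the raw and transformed series.
idx = pd.date_range('1980-01', periods=480, freq='MS')
df = pd.DataFrame({'sentiment': np.random.default_rng(0).normal(85, 5, 480),
                   'ip': np.linspace(50, 100, 480)}, index=idx)
transformed = df.diff(12).dropna()

# One row of panels for the raw series, one for the transformed series.
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 6))
for ax, (name, s) in zip(axes[0], df.items()):
    s.plot(ax=ax, title=f'{name} (raw)')
for ax, (name, s) in zip(axes[1], transformed.items()):
    s.plot(ax=ax, title=f'{name} (transformed)')
fig.tight_layout()
fig.savefig('series.png')
```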
The original and transformed series are shown in the figure below.

Reshape the data into RNN format

We could reshape the data directly into non-overlapping sequences, i.e. treat each year of data as one sample (this only works when the number of samples is divisible by the window size):
However, we want rolling rather than non-overlapping lagged values. The create_multivariate_rnn_data function converts a dataset of several time series into the shape required by Keras RNN layers, namely n_samples x window_size x n_series.
We use a window_size of 18 months to obtain the inputs required by our RNN model.
((450, 18, 2), (450, 2))

Finally, we split the data into a training set and a test set, holding out the last 24 months for out-of-sample validation.
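The create_multivariate_rnn_data helper and the train/test split might look like the following sketch, which reconstructs the shapes quoted above from a stand-in for the scaled data:

```python
import numpy as np

def create_multivariate_rnn_data(data, window_size):
    """Stack rolling windows of a (n, n_series) array into
    (n - window_size, window_size, n_series) inputs, with aligned
    one-step-ahead targets of shape (n - window_size, n_series)."""
    data = np.asarray(data)
    y = data[window_size:]
    X = np.stack([data[i - window_size:i]
                  for i in range(window_size, len(data))])
    return X, y

window_size = 18
data = np.random.default_rng(0).random((468, 2))  # stand-in for the scaled series
X, y = create_multivariate_rnn_data(data, window_size)
print(X.shape, y.shape)             # (450, 18, 2) (450, 2)

# Hold out the last 24 months for out-of-sample testing.
test_size = 24
X_train, y_train = X[:-test_size], y[:-test_size]
X_test, y_test = X[-test_size:], y[-test_size:]
print(X_train.shape, X_test.shape)  # (426, 18, 2) (24, 18, 2)
```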
((426, 18, 2), (24, 18, 2))

Define the model architecture

We again use stacked LSTMs: two stacked LSTM layers with 12 and 6 units, respectively, followed by a fully connected layer with 10 units. The output layer has two units, one per time series. We compile the model with mean absolute error loss and the recommended RMSProp optimizer.
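A sketch of the architecture just described; with an 18-month window and 2 series it reproduces the quoted parameter count of 1,268:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense

window_size, n_series = 18, 2
model = Sequential([
    Input(shape=(window_size, n_series)),
    LSTM(12, return_sequences=True),  # first stacked LSTM layer
    LSTM(6),                          # second LSTM layer
    Dense(10),                        # fully connected layer
    Dense(2),                         # one output unit per series
])
model.compile(loss='mae', optimizer='rmsprop')
print(model.count_params())  # 1268
```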
The model has 1,268 parameters.
Train the model

We train for up to 100 epochs with a batch_size of 20:
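Given the checkpointing messages and early stopping reported in the log, the fit call might be set up as follows; the patience value and checkpoint path are assumptions, and the demo runs only a few epochs on random stand-in data so it finishes quickly:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Random stand-ins with the shapes from the split step.
rng = np.random.default_rng(0)
X_train = rng.random((426, 18, 2)).astype('float32')
y_train = rng.random((426, 2)).astype('float32')
X_test = rng.random((24, 18, 2)).astype('float32')
y_test = rng.random((24, 2)).astype('float32')

model = Sequential([Input(shape=(18, 2)),
                    LSTM(12, return_sequences=True), LSTM(6),
                    Dense(10), Dense(2)])
model.compile(loss='mae', optimizer='rmsprop')

# Save the best weights and stop when val_loss stops improving.
checkpoint = ModelCheckpoint('lstm.weights.h5', monitor='val_loss',
                             save_best_only=True, save_weights_only=True)
early_stop = EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(X_train, y_train,
                    epochs=3,            # 100 in the original run
                    batch_size=20,
                    validation_data=(X_test, y_test),
                    callbacks=[checkpoint, early_stop],
                    verbose=0)
print(len(history.history['val_loss']))
```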
Epoch 1/100  - loss: 0.2536 - val_loss: 0.0429 (val_loss improved from inf to 0.04285, saving model to results/multivariate_time_series/lstm.h5)
Epoch 2/100  - loss: 0.0991 - val_loss: 0.0391 (val_loss improved to 0.03912)
...
Epoch 15/100 - loss: 0.0837 - val_loss: 0.0353 (val_loss improved to 0.03534)
...
Epoch 25/100 - loss: 0.0810 - val_loss: 0.0471 (val_loss did not improve from 0.03534)

Evaluate the results

Early stopping ended training after 25 epochs; the best validation loss (0.0353) was reached at epoch 15. The test-set MAE is 1.71, compared with 1.91 for the VAR model, so the RNN comes out ahead. However, the two results are not fully comparable, because the RNN model produces 24 one-step-ahead forecasts, whereas the VAR model uses its own predictions as inputs for its out-of-sample forecasts. We would need to adjust the VAR setup to obtain comparable forecasts and then compare their performance:
Test-set MAE: 0.03533523602534612
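Computing the test-set MAE reduces to comparing the hold-out targets with the model's predictions; a sketch with stand-ins in place of the trained model's output:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Stand-ins for y_test and the model's predictions on the scaled data;
# in the real workflow: y_pred = model.predict(X_test)
rng = np.random.default_rng(0)
y_test = rng.random((24, 2))
y_pred = y_test + rng.normal(0, 0.04, (24, 2))

test_mae = mean_absolute_error(y_test, y_pred)
print(round(test_mae, 4))
```

Note that this MAE is on the [0, 1]-scaled data; undoing the MinMaxScaler (scaler.inverse_transform) before computing the error yields values on the original scale, which is presumably how the MAE of 1.71 quoted above relates to the scaled MAE of 0.0353.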
The out-of-sample data fluctuates noticeably more than the predictions, but the overall trend is consistent.