Forecasting Time Series Data with Machine Learning, Generative AI, and Deep Learning

Time series forecasting is a critical task across industries, from finance and healthcare to marketing and logistics. The ability to predict future values from historical data can significantly improve decision-making and operational efficiency. With advances in machine learning, generative AI, and deep learning, increasingly sophisticated methods are now available for tackling time series forecasting problems. This post surveys the main approaches and models.

Understanding Time Series Data

Time series data is a sequence of data points collected or recorded at specific time intervals. Examples include stock prices, weather measurements, sales figures, and sensor readings. The goal of time series forecasting is to use past observations to predict future values, which can be challenging because of the complex patterns inherent in the data.
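
All the examples below assume a `time_series_data.csv` file with `Date` and `Value` columns. As a quick way to follow along, this minimal sketch fabricates such a file with a trend, weekly seasonality, and noise (the file name and column names are this post's conventions, not anything standard):

    import numpy as np
    import pandas as pd

    # Synthetic daily series: linear trend + weekly cycle + Gaussian noise
    dates = pd.date_range('2020-01-01', periods=365, freq='D')
    values = (0.1 * np.arange(365)
              + 10 * np.sin(2 * np.pi * np.arange(365) / 7)
              + np.random.normal(0, 1, 365))
    pd.DataFrame({'Date': dates, 'Value': values}).to_csv('time_series_data.csv', index=False)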

1. Machine Learning Approaches

1.1 ARIMA (AutoRegressive Integrated Moving Average)

  • ARIMA is a classical statistical method for time series forecasting. It combines an autoregressive (AR) model, differencing (the "integrated" part, used to make the series stationary), and a moving average (MA) model.

Example usage:

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Fit the ARIMA model
    model = ARIMA(time_series_data['Value'], order=(5, 1, 0))  # (p, d, q)
    model_fit = model.fit()

    # Forecast the next 10 steps
    predictions = model_fit.forecast(steps=10)
    print(predictions)
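
Since the `d` term exists to make the series stationary, it is worth checking stationarity before picking the order. A minimal sketch using the Augmented Dickey-Fuller test from statsmodels (the 0.05 cutoff is a common convention, not a hard rule):

    from statsmodels.tsa.stattools import adfuller

    # ADF test: the null hypothesis is that the series has a unit root (non-stationary)
    result = adfuller(time_series_data['Value'].dropna())
    print(f'ADF statistic: {result[0]:.3f}, p-value: {result[1]:.3f}')
    if result[1] > 0.05:
        # Difference once and re-test; the number of differences needed guides d
        diff = time_series_data['Value'].diff().dropna()
        print(f'p-value after differencing: {adfuller(diff)[1]:.3f}')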

1.2 SARIMA (Seasonal ARIMA)

  • SARIMA extends ARIMA by accounting for seasonal effects. It is useful for data with recurring seasonal patterns, such as monthly sales figures.

Example usage:

    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Fit the SARIMA model
    model = SARIMAX(time_series_data['Value'], order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))  # (p, d, q) x (P, D, Q, s)
    model_fit = model.fit(disp=False)

    # Forecast the next 10 steps
    predictions = model_fit.forecast(steps=10)
    print(predictions)
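
To confirm that a seasonal term is warranted, and to pick `s`, a classical decomposition is a quick visual check. A sketch with statsmodels, assuming a 12-period (e.g., monthly) cycle as in the model above:

    from statsmodels.tsa.seasonal import seasonal_decompose

    # Split the series into trend, seasonal, and residual components
    decomposition = seasonal_decompose(time_series_data['Value'], model='additive', period=12)
    print(decomposition.seasonal.head(12))  # one full seasonal cycle
    decomposition.plot()  # visual check of trend / seasonality / residuals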

1.3 Prophet

  • Developed at Facebook, Prophet is a robust tool designed for forecasting time series data. It handles missing data and outliers and provides reliable uncertainty intervals.

Example usage:

    from prophet import Prophet  # the package was renamed from fbprophet in v1.0
    import pandas as pd

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.rename(columns={'Date': 'ds', 'Value': 'y'}, inplace=True)  # Prophet expects ds/y columns

    # Fit the Prophet model
    model = Prophet()
    model.fit(time_series_data)

    # Build a future dataframe and forecast
    future = model.make_future_dataframe(periods=10)
    forecast = model.predict(future)
    print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']])
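
Prophet also ships diagnostics for backtesting. A sketch assuming at least a few years of daily history (the window sizes here are illustrative):

    from prophet.diagnostics import cross_validation, performance_metrics

    # Rolling-origin backtest: train on an initial window, forecast a fixed
    # horizon, then slide forward and repeat
    df_cv = cross_validation(model, initial='365 days', period='90 days', horizon='30 days')
    print(performance_metrics(df_cv)[['horizon', 'rmse', 'mape']].head())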

1.4 XGBoost

  • XGBoost is a gradient boosting framework that can be applied to time series forecasting by recasting the problem as a supervised learning task, treating previous time steps as features.

Example usage:

    import pandas as pd
    import numpy as np
    from xgboost import XGBRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Recast the series as a supervised learning problem using lag features
    def create_lag_features(data, lag=1):
        df = data.copy()
        for i in range(1, lag + 1):
            df[f'lag_{i}'] = df['Value'].shift(i)
        return df.dropna()

    lag = 5
    data_with_lags = create_lag_features(time_series_data, lag=lag)
    X = data_with_lags.drop('Value', axis=1)
    y = data_with_lags['Value']

    # Split chronologically (shuffle=False preserves temporal order)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

    # Fit the XGBoost model
    model = XGBRegressor(objective='reg:squarederror', n_estimators=1000)
    model.fit(X_train, y_train)

    # Evaluate on the held-out set
    y_pred = model.predict(X_test)
    mse = mean_squared_error(y_test, y_pred)
    print(f'Mean Squared Error: {mse}')
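
The model above only scores a held-out set. To forecast beyond the end of the data, one common approach is recursive prediction: feed each forecast back in as a lag feature for the next step. A minimal sketch under that assumption:

    # Recursive multi-step forecast: each prediction becomes lag_1 for the next step
    last_known = data_with_lags['Value'].iloc[-lag:].tolist()
    future_preds = []
    for _ in range(10):
        # The most recent value is lag_1, the one before it lag_2, and so on
        features = pd.DataFrame([last_known[-lag:][::-1]], columns=X.columns)
        next_val = float(model.predict(features)[0])
        future_preds.append(next_val)
        last_known.append(next_val)
    print(future_preds)

Note that errors compound as predictions are fed back in, so recursive forecasts degrade with horizon length.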

2. Generative AI Approaches

2.1 GANs (Generative Adversarial Networks)

  • A GAN consists of a generator and a discriminator. For time series forecasting, GANs can generate plausible future sequences by learning the underlying data distribution.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Sequential, Model
    from tensorflow.keras.layers import Input, Dense, LSTM, LeakyReLU, Reshape
    from tensorflow.keras.optimizers import Adam

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for the GAN
    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    X_train, y_train = create_dataset(scaled_data, time_step)
    X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)

    # GAN components
    def build_generator():
        model = Sequential()
        model.add(Dense(100, input_dim=time_step))
        model.add(LeakyReLU(alpha=0.2))
        model.add(Dense(time_step, activation='tanh'))
        model.add(Reshape((time_step, 1)))
        return model

    def build_discriminator():
        model = Sequential()
        model.add(LSTM(50, input_shape=(time_step, 1)))
        model.add(Dense(1, activation='sigmoid'))
        return model

    # Build and compile the discriminator
    discriminator = build_discriminator()
    discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5), metrics=['accuracy'])

    # Build the generator
    generator = build_generator()

    # The generator takes noise as input and produces synthetic sequences
    z = Input(shape=(time_step,))
    generated_data = generator(z)

    # For the combined model, only the generator is trained
    discriminator.trainable = False

    # The discriminator scores the generated sequences
    validity = discriminator(generated_data)

    # Combined model (generator stacked with the frozen discriminator)
    combined = Model(z, validity)
    combined.compile(loss='binary_crossentropy', optimizer=Adam(0.0002, 0.5))

    # Train the GAN
    epochs = 10000
    batch_size = 32
    valid = np.ones((batch_size, 1))
    fake = np.zeros((batch_size, 1))

    for epoch in range(epochs):
        # ---------------------
        #  Train the discriminator
        # ---------------------

        # Select a random batch of real sequences
        idx = np.random.randint(0, X_train.shape[0], batch_size)
        real_data = X_train[idx]

        # Generate a batch of fake sequences
        noise = np.random.normal(0, 1, (batch_size, time_step))
        gen_data = generator.predict(noise)

        # Train the discriminator on real vs. fake
        d_loss_real = discriminator.train_on_batch(real_data, valid)
        d_loss_fake = discriminator.train_on_batch(gen_data, fake)
        d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

        # ---------------------
        #  Train the generator
        # ---------------------

        noise = np.random.normal(0, 1, (batch_size, time_step))

        # Train the generator so its output is classified as valid
        g_loss = combined.train_on_batch(noise, valid)

        # Print progress
        if epoch % 1000 == 0:
            print(f"{epoch} [D loss: {d_loss[0]} | D accuracy: {100*d_loss[1]}] [G loss: {g_loss}]")

    # Generate a prediction
    noise = np.random.normal(0, 1, (1, time_step))
    generated_prediction = generator.predict(noise)
    # Flatten to 2D before inverting the scaling
    generated_prediction = scaler.inverse_transform(generated_prediction.reshape(-1, 1))
    print(generated_prediction)

2.2 WaveNet

  • Developed at DeepMind, WaveNet is a deep generative model originally designed for audio generation; it has since been adapted for time series forecasting, particularly in audio and speech applications.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Conv1D, Add, Activation, Multiply, Dense, Flatten
    from tensorflow.keras.optimizers import Adam

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for WaveNet
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    X, y = create_dataset(scaled_data, time_step)
    X = X.reshape(X.shape[0], X.shape[1], 1)

    # Define the WaveNet model
    def residual_block(x, dilation_rate):
        # Gated activation unit: a tanh branch modulated by a sigmoid gate
        tanh_out = Conv1D(32, kernel_size=2, dilation_rate=dilation_rate, padding='causal', activation='tanh')(x)
        sigm_out = Conv1D(32, kernel_size=2, dilation_rate=dilation_rate, padding='causal', activation='sigmoid')(x)
        out = Multiply()([tanh_out, sigm_out])
        out = Conv1D(32, kernel_size=1, padding='same')(out)
        out = Add()([out, x])  # residual connection
        return out

    input_layer = Input(shape=(time_step, 1))
    out = Conv1D(32, kernel_size=2, padding='causal', activation='tanh')(input_layer)
    skip_connections = []
    for i in range(10):
        out = residual_block(out, 2**i)  # dilation rates 1, 2, 4, ..., 512
        skip_connections.append(out)

    out = Add()(skip_connections)
    out = Activation('relu')(out)
    out = Conv1D(1, kernel_size=1, activation='relu')(out)
    out = Flatten()(out)
    out = Dense(1)(out)

    model = Model(input_layer, out)
    model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')

    # Train the model
    model.fit(X, y, epochs=10, batch_size=16)

    # Generate predictions
    predictions = model.predict(X)
    predictions = scaler.inverse_transform(predictions)
    print(predictions)
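
The reach of those stacked dilated convolutions can be worked out directly: a kernel-size-2 causal convolution with dilation d adds d steps of context, so the stack above can see far more history than the time_step=10 window here actually provides. A quick sanity check:

    # The initial conv (dilation 1, kernel 2) sees 2 steps; each residual block
    # with dilation 2**i adds 2**i more
    receptive_field = 2 + sum(2**i for i in range(10))
    print(receptive_field)  # 1025 steps of history per output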

3. Deep Learning Approaches

3.1 LSTM (Long Short-Term Memory)

LSTM networks are a type of recurrent neural network (RNN) capable of learning long-term dependencies. They are widely used for time series forecasting because of their ability to capture temporal patterns.

Example usage:

    import numpy as np
    import pandas as pd
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense
    from sklearn.preprocessing import MinMaxScaler

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for the LSTM
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    train_size = int(len(scaled_data) * 0.8)
    train_data = scaled_data[:train_size]
    test_data = scaled_data[train_size:]

    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    X_train, y_train = create_dataset(train_data, time_step)
    X_test, y_test = create_dataset(test_data, time_step)

    X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
    X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

    # Build the LSTM model
    model = Sequential()
    model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
    model.add(LSTM(50, return_sequences=False))
    model.add(Dense(25))
    model.add(Dense(1))

    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(X_train, y_train, batch_size=1, epochs=1)  # kept tiny for demonstration

    # Generate predictions
    train_predict = model.predict(X_train)
    test_predict = model.predict(X_test)

    # Invert the scaling back to the original units
    train_predict = scaler.inverse_transform(train_predict)
    test_predict = scaler.inverse_transform(test_predict)
    print(test_predict)
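
To quantify the fit rather than eyeballing the printout, compare the inverse-transformed predictions against the true values, for example:

    import numpy as np
    from sklearn.metrics import mean_squared_error

    # Bring the targets back to the original scale before scoring
    y_test_actual = scaler.inverse_transform(y_test.reshape(-1, 1))
    rmse = np.sqrt(mean_squared_error(y_test_actual, test_predict))
    print(f'Test RMSE: {rmse:.3f}')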

3.2 GRU (Gated Recurrent Unit)

GRUs are a simpler variant of LSTMs that often perform comparably on time series tasks. They are used to model sequences and capture temporal dependencies.

Example usage:

    import numpy as np
    import pandas as pd
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import GRU, Dense
    from sklearn.preprocessing import MinMaxScaler

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for the GRU
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    train_size = int(len(scaled_data) * 0.8)
    train_data = scaled_data[:train_size]
    test_data = scaled_data[train_size:]

    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    X_train, y_train = create_dataset(train_data, time_step)
    X_test, y_test = create_dataset(test_data, time_step)

    X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
    X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

    # Build the GRU model
    model = Sequential()
    model.add(GRU(50, return_sequences=True, input_shape=(time_step, 1)))
    model.add(GRU(50, return_sequences=False))
    model.add(Dense(25))
    model.add(Dense(1))

    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(X_train, y_train, batch_size=1, epochs=1)  # kept tiny for demonstration

    # Generate predictions
    train_predict = model.predict(X_train)
    test_predict = model.predict(X_test)

    # Invert the scaling back to the original units
    train_predict = scaler.inverse_transform(train_predict)
    test_predict = scaler.inverse_transform(test_predict)
    print(test_predict)

3.3 Transformer Models

Transformer models, known for their success in NLP tasks, have been adapted for time series forecasting. Models such as the Temporal Fusion Transformer (TFT) use attention mechanisms to handle temporal data effectively.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Dense, MultiHeadAttention, LayerNormalization, Dropout, Add, Flatten

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    train_size = int(len(scaled_data) * 0.8)
    train_data = scaled_data[:train_size]
    test_data = scaled_data[train_size:]

    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    X_train, y_train = create_dataset(train_data, time_step)
    X_test, y_test = create_dataset(test_data, time_step)

    X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
    X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

    # Build a minimal attention-based model. MultiHeadAttention takes separate
    # query and value inputs, so the functional API is used rather than Sequential.
    inputs = Input(shape=(time_step, 1))
    attn_out = MultiHeadAttention(num_heads=4, key_dim=4)(inputs, inputs)  # self-attention
    x = LayerNormalization()(Add()([inputs, attn_out]))  # residual connection + norm
    x = Flatten()(x)
    x = Dense(50, activation='relu')(x)
    x = Dropout(0.1)(x)
    outputs = Dense(1)(x)

    model = Model(inputs, outputs)
    model.compile(optimizer='adam', loss='mean_squared_error')
    model.fit(X_train, y_train, batch_size=1, epochs=1)  # kept tiny for demonstration

    # Generate predictions
    train_predict = model.predict(X_train)
    test_predict = model.predict(X_test)

    train_predict = scaler.inverse_transform(train_predict)
    test_predict = scaler.inverse_transform(test_predict)
    print(test_predict)
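
One caveat about the sketch above: attention on its own is order-invariant, so production Transformer forecasters inject positional information. A minimal sinusoidal positional encoding in the style of the original Transformer paper (the d_model=16 width is illustrative; the univariate input would first need a Dense projection to that width before the encoding is added):

    import numpy as np

    def positional_encoding(length, d_model):
        # Even dimensions get sine, odd dimensions get cosine,
        # at geometrically spaced frequencies
        positions = np.arange(length)[:, np.newaxis]
        dims = np.arange(d_model)[np.newaxis, :]
        angles = positions / np.power(10000.0, (2 * (dims // 2)) / d_model)
        encoding = np.zeros((length, d_model))
        encoding[:, 0::2] = np.sin(angles[:, 0::2])
        encoding[:, 1::2] = np.cos(angles[:, 1::2])
        return encoding.astype(np.float32)

    # Added to the projected inputs before the attention block
    print(positional_encoding(10, 16).shape)  # (10, 16)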

3.4 Seq2Seq (Sequence-to-Sequence)

Seq2Seq models predict sequences of data. Originally developed for language translation, they work well for time series forecasting by learning a mapping from input sequences to output sequences.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, LSTM, Dense

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for Seq2Seq
    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    X, y = create_dataset(scaled_data, time_step)
    X = X.reshape(X.shape[0], X.shape[1], 1)

    # Define the Seq2Seq model: the encoder compresses the input window into its
    # final states, which then initialize the decoder
    encoder_inputs = Input(shape=(time_step, 1))
    encoder = LSTM(50, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)

    decoder_inputs = Input(shape=(time_step, 1))
    # return_sequences=False so the decoder emits a single next-step value,
    # matching the scalar targets produced by create_dataset
    decoder_lstm = LSTM(50, return_sequences=False, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[state_h, state_c])
    decoder_dense = Dense(1)
    decoder_outputs = decoder_dense(decoder_outputs)

    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    model.compile(optimizer='adam', loss='mean_squared_error')

    # Train the model (the input window doubles as the decoder input here)
    model.fit([X, X], y, epochs=10, batch_size=16)

    # Generate predictions
    predictions = model.predict([X, X])
    predictions = scaler.inverse_transform(predictions)
    print(predictions)

3.5 TCN (Temporal Convolutional Networks)

TCNs use dilated convolutions to capture long-range dependencies in time series data. They offer a robust alternative to RNNs for modeling sequential data.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, Dense, Flatten

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for the TCN
    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    X, y = create_dataset(scaled_data, time_step)
    X = X.reshape(X.shape[0], X.shape[1], 1)

    # Define the TCN model: stacked dilated convolutions with causal padding,
    # so each output depends only on current and past inputs
    model = Sequential()
    model.add(Conv1D(filters=64, kernel_size=2, dilation_rate=1, padding='causal', activation='relu', input_shape=(time_step, 1)))
    model.add(Conv1D(filters=64, kernel_size=2, dilation_rate=2, padding='causal', activation='relu'))
    model.add(Conv1D(filters=64, kernel_size=2, dilation_rate=4, padding='causal', activation='relu'))
    model.add(Flatten())
    model.add(Dense(1))

    model.compile(optimizer='adam', loss='mean_squared_error')

    # Train the model
    model.fit(X, y, epochs=10, batch_size=16)

    # Generate predictions
    predictions = model.predict(X)
    predictions = scaler.inverse_transform(predictions)
    print(predictions)

3.6 DeepAR

Developed at Amazon, DeepAR is an autoregressive recurrent network designed for time series forecasting. It can handle many related series at once and capture complex patterns.

Example usage:

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    # Load your time series data
    time_series_data = pd.read_csv('time_series_data.csv')
    time_series_data['Date'] = pd.to_datetime(time_series_data['Date'])
    time_series_data.set_index('Date', inplace=True)

    # Prepare the data for a DeepAR-like model
    def create_dataset(dataset, time_step=1):
        X, Y = [], []
        for i in range(len(dataset)-time_step-1):
            a = dataset[i:(i+time_step), 0]
            X.append(a)
            Y.append(dataset[i + time_step, 0])
        return np.array(X), np.array(Y)

    time_step = 10
    scaler = MinMaxScaler(feature_range=(0, 1))
    scaled_data = scaler.fit_transform(time_series_data['Value'].values.reshape(-1, 1))

    X, y = create_dataset(scaled_data, time_step)
    X = X.reshape(X.shape[0], X.shape[1], 1)

    # Define a DeepAR-like model (a simplified point-forecast stand-in)
    model = Sequential()
    model.add(LSTM(50, return_sequences=True, input_shape=(time_step, 1)))
    model.add(LSTM(50))
    model.add(Dense(1))

    model.compile(optimizer='adam', loss='mean_squared_error')

    # Train the model
    model.fit(X, y, epochs=10, batch_size=16)

    # Generate predictions
    predictions = model.predict(X)
    predictions = scaler.inverse_transform(predictions)
    print(predictions)
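
What separates the real DeepAR from this point-forecast stand-in is its probabilistic output: the network emits distribution parameters and is trained by maximum likelihood, and sampling from the predicted distribution yields forecast intervals. A sketch of that idea with a Gaussian head (an illustration of the principle, not Amazon's implementation):

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate

    # Two heads: the predicted mean, and a softplus-positive standard deviation
    inputs = Input(shape=(time_step, 1))
    h = LSTM(50)(inputs)
    mu = Dense(1)(h)
    sigma = Dense(1, activation='softplus')(h)
    outputs = Concatenate()([mu, sigma])

    def gaussian_nll(y_true, y_pred):
        # Negative log-likelihood of y_true under N(mu, sigma^2)
        y_true = tf.reshape(y_true, (-1, 1))
        mu, sigma = y_pred[:, 0:1], y_pred[:, 1:2] + 1e-6
        return tf.reduce_mean(0.5 * tf.math.log(2.0 * np.pi)
                              + tf.math.log(sigma)
                              + 0.5 * tf.square((y_true - mu) / sigma))

    prob_model = Model(inputs, outputs)
    prob_model.compile(optimizer='adam', loss=gaussian_nll)
    prob_model.fit(X, y, epochs=10, batch_size=16)  # X, y as prepared above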

Time series forecasting is a complex but fascinating field that has benefited greatly from advances in machine learning, generative AI, and deep learning. By applying models such as ARIMA, Prophet, LSTMs, and Transformers, practitioners can uncover hidden patterns in their data and produce accurate forecasts. As these technologies continue to mature, the tools and methods for time series forecasting will only become more capable, opening new opportunities for innovation and improvement across domains.
