Tuesday, February 25, 2025

ConvLSTM (2)

Let's try a different approach.

Until now I had used only the elongation data as the dependent variable, with rainfall and temperature as the independent variables. Now I try using the second derivative of the elongation instead.
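For reference, here is a minimal sketch of how the derivative series can be computed from the raw data; the column name 'Elong' for the elongation is an assumption (only 'Temp', 'Rain' and 'Acc' appear in the notebook below), and a plain discrete difference stands in for whatever preprocessing was actually applied.

import pandas as pd

df = pd.read_csv("prima2.csv", parse_dates=["Data"], index_col="Data")

# 'Elong' is a hypothetical name for the raw elongation column
df["Vel"] = df["Elong"].diff()   # first derivative of the deformation
df["Acc"] = df["Vel"].diff()     # second derivative, the new target
df = df.dropna()                 # diff() leaves NaN on the first rows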

Deformation

First derivative of the deformation

Second derivative of the deformation

Comparison between rainfall and the second derivative of the deformation

ConvLSTM forecast plot
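The Colab notebook used for the experiment follows in full.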

# -*- coding: utf-8 -*-
"""convlstm.ipynb

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1yXzW-fOBePUvgY63uMJTjSprP5vMy0tn
"""

import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, BatchNormalization, Flatten, Dense
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("prima2.csv", parse_dates=["Data"])
df.set_index("Data", inplace=True)

features = df[['Temp', 'Rain']]  # Input variables: temperature and rainfall
target = df[['Acc']]  # Target: the second derivative of the elongation

scaler = MinMaxScaler()
features_scaled = scaler.fit_transform(features)  # scale the inputs to [0, 1]; the target keeps its original units

def create_sequences(data, labels, seq_length=10):
    X, y = [], []
    for i in range(len(data) - seq_length):
        X.append(data[i:i+seq_length])  # window of `seq_length` consecutive steps
        y.append(labels[i+seq_length])  # value immediately after the window
    return np.array(X), np.array(y)

seq_length = 20  # Lookback period
X, y = create_sequences(features_scaled, target.to_numpy(), seq_length)

X = X.reshape((X.shape[0], seq_length, 1, X.shape[2], 1))  # (samples, time, height=1, width=n_features, channels=1) for ConvLSTM2D

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)  # no shuffle: keep the temporal order

model = Sequential([
    ConvLSTM2D(filters=64, kernel_size=(1, 1), activation='relu', return_sequences=True,
               input_shape=(seq_length, 1, X.shape[3], 1)),
    BatchNormalization(),
    ConvLSTM2D(filters=32, kernel_size=(1, 1), activation='relu', return_sequences=False),
    BatchNormalization(),
    Flatten(),
    Dense(32, activation='relu'),
    Dense(1)  # linear output: 'Acc' is a continuous quantity, not a class
])

model.compile(optimizer='adam', loss='mse', metrics=['mae'])  # regression losses: the target is continuous, not binary

model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))

y_pred = model.predict(X_test)  # continuous predictions, no thresholding needed

# Commented out IPython magic to ensure Python compatibility.
import matplotlib.pyplot as plt
# %matplotlib inline

plt.plot(y_test, label="Actual Acc")
plt.plot(y_pred, linestyle="dashed", label="Predicted Acc")
plt.legend()
plt.show()
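To put a number on the fit beyond the visual comparison, the standard regression errors can be computed on the test window. A quick sketch using scikit-learn, which the notebook already depends on for the scaler and the split:

from sklearn.metrics import mean_absolute_error, mean_squared_error

mae = mean_absolute_error(y_test, y_pred)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"Test MAE:  {mae:.4f}")
print(f"Test RMSE: {rmse:.4f}")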








