
Monday, November 4, 2024

Dockerize Flask

A simple example of putting a Flask application into a Docker container

We start from a simple application consisting of an app.py file and a requirements.txt (a minimal sketch of both follows).
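The contents below are an illustrative assumption, not the original application:

---------------------------------------------------------------------------------------------

# app.py - hypothetical minimal Flask application
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Docker!"

---------------------------------------------------------------------------------------------

with a requirements.txt listing the only dependency:

flask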

The Dockerfile is created in the same folder as the two files above

---------------------------------------------------------------------------------------------

FROM python:3.8-slim-buster

WORKDIR /python-docker

COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

COPY . .

CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]

---------------------------------------------------------------------------------------------

and the build is run with

docker build --tag flask-docker .

once the image build has finished, start the container with

docker run -d -p 5000:5000 flask-docker

the application will be reachable at localhost:5000

Monday, October 21, 2024

Change detection with monocular metric depth

I tried using the neural network at https://github.com/apple/ml-depth-pro to obtain a depth map from a single jpg

This is the source image


With an NVidia 4070 the processing takes less than a second, and this is the result
 

The network is described as metric, but the results are very far from the real-world measurements of the photographed objects
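For context, this is a sketch of how the depth map can be produced and saved, following the usage example in the ml-depth-pro README (the jpg name and the .npz layout match the script further below; treating the returned depth as a torch tensor is an assumption):

import depth_pro
import numpy as np

# Load the model and its preprocessing transform
model, transform = depth_pro.create_model_and_transforms()
model.eval()

# Load a single jpg; f_px is the focal length estimated from the EXIF data
image, _, f_px = depth_pro.load_rgb("agosto23.jpg")
image = transform(image)

# Run inference: "depth" is a metric depth map in meters
prediction = model.infer(image, f_px=f_px)
depth = prediction["depth"].cpu().numpy()

# Save the map for the change-detection comparison below
np.savez("agosto23.npz", depth=depth)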

If you take two images far apart in time



you can compute the difference between the distances and detect any changes

Once the depth map has been obtained, it is also possible to get the point cloud using the following small Python program

import numpy as np
import open3d as o3d
import matplotlib.pyplot as plt

# Load the two depth maps (saved as .npz) and compute the per-pixel difference
a23 = np.load("agosto23.npz")
a24 = np.load("agosto24.npz")
dd = a24['depth'] - a23['depth']

plt.imshow(dd, interpolation='nearest', cmap='plasma')
plt.colorbar()
#plt.show()
plt.savefig('grafico.png', dpi=600)


# Open3D version: build a colored point cloud from depth + RGB
depth = o3d.geometry.Image(a23['depth'])
print("Depth " + str(np.shape(depth)))
color = o3d.io.read_image("rgb.jpg")
print(np.shape(color))

rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth, convert_rgb_to_intensity=False)
intrinsic = o3d.camera.PinholeCameraIntrinsic(width=3840, height=2160, fx=2520.7, fy=2518.2, cx=1918, cy=1022)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
# Flip the cloud so it is not upside down in the viewer
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([pcd])
o3d.io.write_point_cloud("nuvola2.ply", pcd)


this is the result

Monday, October 14, 2024

Inner Join with Pandas

I needed to join two csv files (a time measurement and a sensor measurement), but the two sensors had downtime, so simply putting the columns side by side was not enough: an inner join was needed (and on the work machine I have no access to a SQL server). With more than 85,000 records, doing it by hand was out of the question.

The Pandas library came to the rescue.

CSV Portata (flow rate)

tempo;valore
38923.54;37
38923.58;36
38923.63;36
38923.67;36

CSV Torbidita (turbidity)

tempo;valore
38923.54;0
38923.58;0


import pandas as pd
from matplotlib import pyplot as plt

por = pd.read_csv('portata.csv', sep=';')
tor = pd.read_csv('torbidita.csv', sep=';')
# Compare timestamps as strings so the join key is not affected by float rounding
por['tempo'] = por['tempo'].astype(str)
tor['tempo'] = tor['tempo'].astype(str)

# Inner join: keep only timestamps present in both files.
# The duplicate 'valore' columns get the _x/_y suffixes used below.
unione = por.merge(tor, on="tempo", how='inner')
unione.plot(
    x='valore_x',
    xlabel='Portata',
    y='valore_y',
    ylabel='Torbidita',
    title='25/07/06 - 27/02/2017',
    kind='scatter'
)
plt.show()
#unione.to_csv("unione.csv", sep=';')


Friday, October 11, 2024

Aruco and Image Stacking

One of the biggest problems in using aruco tags outdoors to measure distances is that sunlight conditions are highly variable, and jpg compression completes the job by adding noise on top of noise

To mitigate the problem I tried image stacking, overlaying 8 photos (one per hour from 09:00 to 16:00) with the script at https://github.com/maitek/image_stacking and then applying the program seen here


The improvement is not negligible, because the standard deviation changed as follows:

3.1 m => from 0.19% to 0.16% (0.5 cm)

5.4 m => from 0.35% to 0.28% (1.5 cm)

7.6 m => from 0.71% to 0.38% (2.8 cm)

9.6 m => from 0.5% to 0.41% (4.8 cm)

 

import os
import cv2
import numpy as np
from time import time


# Align and stack images with ECC method
# Slower but more accurate
def stackImagesECC(file_list):
    M = np.eye(3, 3, dtype=np.float32)

    first_image = None
    stacked_image = None

    for file in file_list:
        image = cv2.imread(file, 1).astype(np.float32) / 255
        print(file)
        if first_image is None:
            # convert to gray scale floating point image
            first_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            stacked_image = image
        else:
            # Estimate perspective transform
            s, M = cv2.findTransformECC(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), first_image, M, cv2.MOTION_HOMOGRAPHY)
            w, h, _ = image.shape
            # Align image to first image
            image = cv2.warpPerspective(image, M, (h, w))
            stacked_image += image

    stacked_image /= len(file_list)
    stacked_image = (stacked_image*255).astype(np.uint8)
    return stacked_image


# Align and stack images by matching ORB keypoints
# Faster but less accurate
def stackImagesKeypointMatching(file_list):

    orb = cv2.ORB_create()

    # disable OpenCL because of a bug in ORB in OpenCV 3.1
    cv2.ocl.setUseOpenCL(False)

    stacked_image = None
    first_image = None
    first_kp = None
    first_des = None
    for file in file_list:
        print(file)
        image = cv2.imread(file, 1)
        imageF = image.astype(np.float32) / 255

        # compute the descriptors with ORB
        kp = orb.detect(image, None)
        kp, des = orb.compute(image, kp)

        # create BFMatcher object
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        if first_image is None:
            # Save keypoints for first image
            stacked_image = imageF
            first_image = image
            first_kp = kp
            first_des = des
        else:
            # Find matches and sort them in the order of their distance
            matches = matcher.match(first_des, des)
            matches = sorted(matches, key=lambda x: x.distance)

            src_pts = np.float32(
                [first_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst_pts = np.float32(
                [kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

            # Estimate perspective transformation
            M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0)
            w, h, _ = imageF.shape
            imageF = cv2.warpPerspective(imageF, M, (h, w))
            stacked_image += imageF

    stacked_image /= len(file_list)
    stacked_image = (stacked_image*255).astype(np.uint8)
    return stacked_image

# ===== MAIN =====
# Read all files in directory
import argparse


if __name__ == '__main__':

    parser = argparse.ArgumentParser(description='')
    parser.add_argument('input_dir', help='Input directory of images ()')
    parser.add_argument('output_image', help='Output image name')
    parser.add_argument('--method', help='Stacking method ORB (faster) or ECC (more precise)')
    parser.add_argument('--show', help='Show result image', action='store_true')
    args = parser.parse_args()

    image_folder = args.input_dir
    if not os.path.exists(image_folder):
        print("ERROR {} not found!".format(image_folder))
        exit()

    file_list = os.listdir(image_folder)
    file_list = [os.path.join(image_folder, x)
                 for x in file_list if x.endswith(('.jpg', '.png', '.bmp'))]

    if args.method is not None:
        method = str(args.method)
    else:
        method = 'KP'

    tic = time()

    if method == 'ECC':
        # Stack images using ECC method
        description = "Stacking images using ECC method"
        print(description)
        stacked_image = stackImagesECC(file_list)

    elif method == 'ORB':
        # Stack images using ORB keypoint method
        description = "Stacking images using ORB method"
        print(description)
        stacked_image = stackImagesKeypointMatching(file_list)

    else:
        print("ERROR: method {} not found!".format(method))
        exit()

    print("Stacked {0} in {1} seconds".format(len(file_list), (time()-tic)))

    print("Saved {}".format(args.output_image))
    cv2.imwrite(str(args.output_image), stacked_image)

    # Show image
    if args.show:
        cv2.imshow(description, stacked_image)
        cv2.waitKey(0)
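Assuming the script above is saved as image_stacking.py, a typical invocation (folder and file names are placeholders) is:

python image_stacking.py ./photos stacked.jpg --method ECC --show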


Wednesday, October 2, 2024

Aruco Tags and Kalman Filters

Using Aruco tags outdoors, under sunlight at different hours and in different periods of the year, I found an error in the estimated distance between two tags of between 0.7% and 1.3% of the measured distance. The question was: can the error be reduced?

I tried applying a Kalman filter for static conditions (the distances between the tags did not change over time), using the code at this address


In the graph below, the blue dots are the unfiltered data, the solid blue line is the filtered data, and the red line is the true value of the distance measurement

this is the code used, with very modest changes from the example

 

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd


def kalman_1d(x, P, measurement, R_est, Q_est):
    x_pred = x
    P_pred = P + Q_est
    K = P_pred / (P_pred + R_est)
    x_est = x_pred + K * (measurement - x_pred)
    P_est = (1 - K) * P_pred
    return x_est, P_est


def plot_1d_comparison(measurements_made, estimate, true_value, axis):
    axis.plot(measurements_made, 'k+', label='measurements', alpha=0.3)
    axis.plot(estimate, '-', label='KF estimate')
    if not isinstance(true_value, (list, tuple, np.ndarray)):
        # plot line for a constant value
        axis.axhline(true_value, color='r', label='true value', alpha=0.5)
    else:
        # for a list, tuple or array, plot the points
        axis.plot(true_value, color='r', label='true value', alpha=0.5)
    axis.legend(loc='lower right')
    axis.set_title('Estimated position vs. time step')
    axis.set_xlabel('Time')
    axis.set_ylabel('$x_t$')


def plot_1d_error(estimated_error, lower_limit, upper_limit, axis):
    # lower_limit and upper_limit are the lower and upper limits of the vertical axis
    axis.plot(estimated_error, label='KF estimate for $P$')
    axis.legend(loc='upper right')
    axis.set_title('Estimated error vs. time step')
    axis.set_xlabel('Time')
    axis.set_ylabel('$P_t$')
    plt.setp(axis, 'ylim', [lower_limit, upper_limit])


nome_file = "20_40"
df = pd.read_csv(nome_file + '.csv', header=0, sep=";")

#11 5.38
#15 3.39
#25 7.46
#40 4.26

mu = 4.26  # Actual position
R = 0.1    # Actual standard deviation of actual measurements (R)

# Generate measurements
#n_measurements = 1000  # Change the number of points to see how the convergence changes
#Z = np.random.normal(mu, np.sqrt(R), size=n_measurements)
Z = df[df.columns[2]].to_numpy()
# Estimated covariances
Q_est = 1e-4
R_est = 2e-2

# initial guesses
x = 5.1   # Use an integer (imagine the initial guess is determined with a meter stick)
P = 0.04  # error covariance P

KF_estimate = []  # To store the position estimate at each time point
KF_error = []     # To store estimated error at each time point
for z in Z:
    x, P = kalman_1d(x, P, z, R_est, Q_est)
    KF_estimate.append(x)
    KF_error.append(P)

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
plot_1d_comparison(Z, KF_estimate, mu, axes[0])
np.savetxt(nome_file + "_stima.csv", KF_estimate)
plot_1d_error(KF_error, 0, 0.015, axes[1])
plt.tight_layout()
plt.savefig(nome_file + '.png')


Monday, September 23, 2024

Boston Dynamics Spot Robot Calibration Board and OpenCV

I was lucky enough to be able to use a huge Charuco Board attached to the Boston Dynamics Spot robot dog


Although it is not explicitly stated, it is a 4x4_100 Charuco Board with 9 columns and 4 rows

To use it to calibrate a camera, you can use the following OpenCV script

Warning: for the script to work it needs setLegacyPattern(True)

import os
import numpy as np
import cv2

# ------------------------------
# ENTER YOUR REQUIREMENTS HERE:
ARUCO_DICT = cv2.aruco.DICT_4X4_250
SQUARES_VERTICALLY = 9
SQUARES_HORIZONTALLY = 4
SQUARE_LENGTH = 0.115
MARKER_LENGTH = 0.09
# ...
PATH_TO_YOUR_IMAGES = './hikvision'
# ------------------------------

def calibrate_and_save_parameters():
    # Define the aruco dictionary and charuco board
    dictionary = cv2.aruco.getPredefinedDictionary(ARUCO_DICT)
    board = cv2.aruco.CharucoBoard((SQUARES_VERTICALLY, SQUARES_HORIZONTALLY), SQUARE_LENGTH, MARKER_LENGTH, dictionary)
    board.setLegacyPattern(True)
    params = cv2.aruco.DetectorParameters()
    params.cornerRefinementMethod = 0

    image_files = [os.path.join(PATH_TO_YOUR_IMAGES, f) for f in os.listdir(PATH_TO_YOUR_IMAGES) if f.endswith(".jpg")]
    image_files.sort()

    all_charuco_corners = []
    all_charuco_ids = []

    for image_file in image_files:
        print(image_file)
        image = cv2.imread(image_file)
        marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(image, dictionary, parameters=params)

        # If at least one marker is detected
        if len(marker_ids) > 0:
            charuco_retval, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(marker_corners, marker_ids, image, board)

            if charuco_retval:
                all_charuco_corners.append(charuco_corners)
                all_charuco_ids.append(charuco_ids)

    # Calibrate camera
    retval, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(all_charuco_corners, all_charuco_ids, board, image.shape[:2], None, None)

    # Save calibration data
    np.save('hik_camera_matrix.npy', camera_matrix)
    np.save('hik_dist_coeffs.npy', dist_coeffs)

    cv2.destroyAllWindows()

calibrate_and_save_parameters()
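As a follow-up, a minimal sketch of how the saved calibration could be used to undistort an image ('test.jpg' is a hypothetical file name):

import cv2
import numpy as np

camera_matrix = np.load('hik_camera_matrix.npy')
dist_coeffs = np.load('hik_dist_coeffs.npy')

img = cv2.imread('test.jpg')
h, w = img.shape[:2]
# Compute a new camera matrix that keeps all source pixels (alpha=1)
new_matrix, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 1, (w, h))
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_matrix)
cv2.imwrite('test_undistorted.jpg', undistorted)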


Thursday, May 23, 2024

Random Forest humidity regression with YDF

I wanted to experiment with the YDF (Yggdrasil Decision Forests) library, so I took the data from the previous post (soil moisture on natural ground)

I did not do any optimization; I used the basic regression example

(note that the plots work on Colab; I cannot display them in a local Jupyter notebook)

in the code below the GradientBoostedTreesLearner is used, but RandomForestLearner and CartLearner are also available

 

from google.colab import drive
drive.mount('/content/drive')

!pip install ydf
import ydf  # Yggdrasil Decision Forests
import pandas as pd  # We use Pandas to load small datasets


all_ds = pd.read_csv(f"/content/drive/MyDrive/soilmoisture.csv")

# Randomly split the dataset into a training (70%) and testing (30%) dataset
all_ds = all_ds.sample(frac=1)
split_idx = len(all_ds) * 7 // 10
train_ds = all_ds.iloc[:split_idx]
test_ds = all_ds.iloc[split_idx:]

# Print the first 5 training examples
train_ds.head(5)

model = ydf.GradientBoostedTreesLearner(label="soil_moisture",
                                        task=ydf.Task.REGRESSION).train(train_ds)

evaluation = model.evaluate(test_ds)

print(evaluation)

evaluation


The syntax is extremely concise, but the regression result is very good




to see which feature most influences the model, you can use

model.analyze(test_ds, sampling=0.1)
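and, as a further minimal sketch (model being the object trained above), predictions can be obtained directly from a Pandas dataframe:

predictions = model.predict(test_ds)
print(predictions[:5])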


Tuesday, April 30, 2024

Resampling a signal with SciPy

A quick way to resample non-uniformly spaced data using an interpolation curve

 

Original data

 

import matplotlib.pyplot as plt
from scipy import interpolate
import numpy as np

x = np.array([1, 2, 5, 10])
y = np.array([1, 4, 25, 100])
fcubic = interpolate.interp1d(x, y,kind='cubic')

xnew = np.arange(1, 10, 0.25)
ynew = fcubic(xnew)
plt.plot(x, y, 'X', xnew, ynew,'bo')
plt.show()
print(x)
print(y)
print(xnew)
print(ynew)

 


note that the interpolation is done with a cubic spline, so near cusps the result is not guaranteed to be acceptable; SciPy offers other options for the interpolation curve
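For example, a monotone PCHIP interpolant (a sketch on the same data as above) avoids the overshoot that a cubic spline can produce near sharp bends:

from scipy.interpolate import PchipInterpolator
import numpy as np

x = np.array([1, 2, 5, 10])
y = np.array([1, 4, 25, 100])
xnew = np.arange(1, 10, 0.25)

# PCHIP preserves the monotonicity of the data and does not overshoot
pchip = PchipInterpolator(x, y)
ynew = pchip(xnew)
print(ynew)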

 

Thursday, March 14, 2024

Pyrealsense

WARNING: to run at full resolution the Realsense must be connected to a USB 3 port with a USB 3 capable cable, otherwise it is limited to 640x480
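A quick sketch to check which USB link the camera has negotiated (usb_type_descriptor is part of the pyrealsense2 camera_info enum; support may depend on firmware):

import pyrealsense2 as rs

# List the connected Realsense devices and the USB link they negotiated
ctx = rs.context()
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    usb = dev.get_info(rs.camera_info.usb_type_descriptor)  # e.g. '3.2' or '2.1'
    print(name, "connected on USB", usb)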

 

==================================================

Color only 1920x1080

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math

pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)

profile = pipeline.start(config)

align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
color_frame = aligned_frames.get_color_frame()
color_image = np.asanyarray(color_frame.get_data())
imageName1 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Color.png'
cv2.imwrite(imageName1, color_image)
pipeline.stop()


 

================================================== 

Color + Depth 1280x720

 

================================================== 

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math

pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

# We will be removing the background of objects more than
#  clipping_distance_in_meters meters away
clipping_distance_in_meters = 1.5
clipping_distance = clipping_distance_in_meters / depth_scale


align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()

aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()

depth_image = np.asanyarray(aligned_depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())


# Remove background - Set pixels further than clipping_distance to grey
grey_color = 153
depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

# Render images
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
images = np.hstack((bg_removed, depth_colormap))
#cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)

# Filename
imageName1 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Color.png'
imageName2 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Depth.png'
imageName3 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_bg_removed.png'
imageName4 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_ColorDepth.png'
imageName5 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_DepthColormap.png'

# Saving the image
cv2.imwrite(imageName1, color_image)
cv2.imwrite(imageName2, depth_image)
cv2.imwrite(imageName3, images)
cv2.imwrite(imageName4, bg_removed )
cv2.imwrite(imageName5, depth_colormap )

pipeline.stop()



================================================== 

Infrared 1280x720

================================================== 

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math
 
pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)
profile = pipeline.start(config)
 
frames = pipeline.wait_for_frames()
ir1_frame = frames.get_infrared_frame(1)
image = np.asanyarray(ir1_frame.get_data())
imageIR = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_IR.png'
cv2.imwrite(imageIR, image)
pipeline.stop()




================================================== 

Infrared, IR Emitter OFF, 1280x720

==================================================

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math
 
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)

# disable the IR emitter
pipeline_profile = pipeline.start(config)
device = pipeline_profile.get_device()
depth_sensor = device.query_sensors()[0]
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 0)

frames = pipeline.wait_for_frames()
ir1_frame = frames.get_infrared_frame(1)
image = np.asanyarray(ir1_frame.get_data())
imageIR = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_IR_OFF.png'
cv2.imwrite(imageIR, image)
pipeline.stop() 

 


================================================== 

IMU

==================================================

import pyrealsense2 as rs

import numpy as np


def initialize_camera():
    # start the frames pipe
    p = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.accel)
    conf.enable_stream(rs.stream.gyro)
    prof = p.start(conf)
    return p


def gyro_data(gyro):
    return np.asarray([gyro.x, gyro.y, gyro.z])


def accel_data(accel):
    return np.asarray([accel.x, accel.y, accel.z])

p = initialize_camera()
try:
    while True:
        f = p.wait_for_frames()
        accel = accel_data(f[0].as_motion_frame().get_motion_data())
        gyro = gyro_data(f[1].as_motion_frame().get_motion_data())
        print("accelerometer: ", accel)
        print("gyro: ", gyro)

finally:
    p.stop()

 

gyro:  [0.         0.         0.00349066]
accelerometer:  [-0.24516624 -8.78675842 -2.75566864]

 

================================================== 

SET EXPOSURE

==================================================

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
profile = pipeline.start(config)  # Start streaming
sensor_dep = profile.get_device().first_depth_sensor()
print("Trying to set Exposure")
exp = sensor_dep.get_option(rs.option.exposure)
print("exposure = %d" % exp)
print("Setting exposure to new value")
sensor_dep.set_option(rs.option.exposure, 25000)
exp = sensor_dep.get_option(rs.option.exposure)
print("New exposure = %d" % exp)
pipeline.stop()

exposure can also be adjusted on a ROI (region of interest)

p = rs.pipeline()
prof = p.start()
s = prof.get_device().first_roi_sensor()
roi = s.get_region_of_interest()
s.set_region_of_interest(roi)

 

================================================== 

ADVANCED MODE (adjusts detailed parameters)

==================================================

 

import pyrealsense2 as rs
import time
import json

DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07", "0B3A", "0B5C"]

def find_device_that_supports_advanced_mode() :
    ctx = rs.context()
    ds5_dev = rs.device()
    devices = ctx.query_devices();
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(rs.camera_info.name):
                print("Found device that supports advanced mode:", dev.get_info(rs.camera_info.name))
            return dev
    raise Exception("No D400 product line device that supports advanced mode was found")

try:
    dev = find_device_that_supports_advanced_mode()
    advnc_mode = rs.rs400_advanced_mode(dev)
    print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Loop until we successfully enable advanced mode
    while not advnc_mode.is_enabled():
        print("Trying to enable advanced mode...")
        advnc_mode.toggle_advanced_mode(True)
        # At this point the device will disconnect and re-connect.
        print("Sleeping for 5 seconds...")
        time.sleep(5)
        # The 'dev' object will become invalid and we need to initialize it again
        dev = find_device_that_supports_advanced_mode()
        advnc_mode = rs.rs400_advanced_mode(dev)
        print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Get each control's current value
    print("Depth Control: \n", advnc_mode.get_depth_control())
    print("RSM: \n", advnc_mode.get_rsm())
    print("RAU Support Vector Control: \n", advnc_mode.get_rau_support_vector_control())
    print("Color Control: \n", advnc_mode.get_color_control())
    print("RAU Thresholds Control: \n", advnc_mode.get_rau_thresholds_control())
    print("SLO Color Thresholds Control: \n", advnc_mode.get_slo_color_thresholds_control())
    print("SLO Penalty Control: \n", advnc_mode.get_slo_penalty_control())
    print("HDAD: \n", advnc_mode.get_hdad())
    print("Color Correction: \n", advnc_mode.get_color_correction())
    print("Depth Table: \n", advnc_mode.get_depth_table())
    print("Auto Exposure Control: \n", advnc_mode.get_ae_control())
    print("Census: \n", advnc_mode.get_census())

    #To get the minimum and maximum value of each control use the mode value:
    query_min_values_mode = 1
    query_max_values_mode = 2
    current_std_depth_control_group = advnc_mode.get_depth_control()
    min_std_depth_control_group = advnc_mode.get_depth_control(query_min_values_mode)
    max_std_depth_control_group = advnc_mode.get_depth_control(query_max_values_mode)
    print("Depth Control Min Values: \n ", min_std_depth_control_group)
    print("Depth Control Max Values: \n ", max_std_depth_control_group)

    # Set some control with a new (median) value
    current_std_depth_control_group.scoreThreshA = int((max_std_depth_control_group.scoreThreshA - min_std_depth_control_group.scoreThreshA) / 2)
    advnc_mode.set_depth_control(current_std_depth_control_group)
    print("After Setting new value, Depth Control: \n", advnc_mode.get_depth_control())

    # Serialize all controls to a Json string
    serialized_string = advnc_mode.serialize_json()
    print("Controls as JSON: \n", serialized_string)
    as_json_object = json.loads(serialized_string)

    # We can also load controls from a json string
    # For Python 2, the values in 'as_json_object' dict need to be converted from unicode object to utf-8
    if type(next(iter(as_json_object))) != str:
        as_json_object = {k.encode('utf-8'): v.encode("utf-8") for k, v in as_json_object.items()}
    # The C++ JSON parser requires double-quotes for the json object so we need
    # to replace the single quote of the pythonic json to double-quotes
    json_string = str(as_json_object).replace("'", '\"')
    advnc_mode.load_json(json_string)

except Exception as e:
    print(e)
    pass

Friday, January 12, 2024

Removing the trend from a time series

An experiment in removing the seasonal component from a time series using the Python StatsModels library


Top: the original data; center: the trend; next: the estimated periodic component; bottom: the computed residuals


#!/usr/bin/env python
# coding: utf-8

# In[1]:


from IPython.display import display

import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 15)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)

import matplotlib.pyplot as plt
from datetime import datetime
from datetime import timedelta
from pandas.plotting import register_matplotlib_converters
from mpl_toolkits.mplot3d import Axes3D

from statsmodels.tsa.stattools import acf, pacf
from statsmodels.tsa.statespace.sarimax import SARIMAX
import statsmodels.api as sm
import statsmodels.formula.api as smf


register_matplotlib_converters()
from time import time
import seaborn as sns
sns.set(style="whitegrid")

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.covariance import EllipticEnvelope

import warnings
warnings.filterwarnings('ignore')

RANDOM_SEED = np.random.seed(0)


# # Carica i dati

# In[2]:


def parser(s):
    return datetime.strptime(s, '%Y-%m-%d %H:%M:%S')



# In[3]:


#read data
est = pd.read_csv('/home/luca/luca/liscione.csv',sep=";",parse_dates=[0], index_col=0, date_parser=parser,header=0)
print(est)


# In[4]:


est = est.asfreq(freq='1H', method='ffill')



# ### Introduce an Anomaly

# In[5]:


idx = pd.IndexSlice


# In[6]:


start_date = datetime(2023,10,5,0,0,0)
end_date = datetime(2023,12,31,23,59,0)
lim_est = est[start_date:end_date]


# In[7]:


lim_est


# In[8]:


del lim_est['Temp']


# In[9]:


plt.figure(figsize=(10,4))
plt.plot(lim_est)
plt.title('Est', fontsize=20)
plt.ylabel('Est(mm)', fontsize=16)
for year in range(start_date.year, end_date.year):
    plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.2)


# ## Seasonal Decompose

# In[10]:


from statsmodels.tsa.seasonal import seasonal_decompose
import matplotlib.dates as mdates


# In[18]:


plt.rc('figure',figsize=(12,8))
plt.rc('font',size=15)

result = seasonal_decompose(lim_est,model='additive')
print(result.trend)
plt.savefig('trend')

fig = result.plot()
fig.savefig('trend.png') # save the figure to file
result.trend.to_csv('/home/luca/tempo.csv', date_format='%Y-%m-%d_%H',sep=";")



# In[12]:


plt.rc('figure',figsize=(12,6))
plt.rc('font',size=15)

fig, ax = plt.subplots()
x = result.resid.index
y = result.resid.values
print(x.size)
ax.plot_date(x, y, color='black',linestyle='--')
ax.set_ylim([-0.7, 0.7])

fig.autofmt_xdate()
plt.savefig('residual')

plt.show()


# In[13]:


temp = est[start_date:end_date]

del temp['Est']
print(temp.index)
# an Index has no to_csv: convert it to a Series first
temp.index.to_series().to_csv('tempo2.csv', date_format='%Y-%m-%d_%H', sep=";")

#np.savetxt('tempo.csv',temp.index,delimiter=";")


# In[ ]:





# In[ ]:


fig, ax1 = plt.subplots()

color = 'tab:red'
ax1.set_xlabel('time')
ax1.set_ylabel('Temp', color=color)
ax1.plot(temp, color=color)
ax1.tick_params(axis='y', labelcolor=color)


ax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis

color = 'tab:blue'
ax2.set_ylabel('Est', color=color) # we already handled the x-label with ax1
ax2.plot(result.trend, color=color)
ax2.tick_params(axis='y', labelcolor=color)

plt.savefig('sovrapposto')



plt.show()


# In[ ]:


np.savetxt('trend.csv',result.trend.values,delimiter=";")
np.savetxt('temp.csv',temp.values,delimiter=";")


# In[ ]:


import matplotlib.animation as animation
fig,ax = plt.subplots()

scat = ax.scatter(result.trend.values, temp.values)
plt.gca().invert_xaxis()
ax.set(xlabel='Estensimetro (mm)')
ax.set(ylabel='Temperatura (C)')

def update(i):
    scat.set_offsets((result.trend.values[i], temp.values[i]))
    return scat,

ani = animation.FuncAnimation(fig,update, repeat=True, frames=len(temp)-1, interval=10)
#writer = animation.PillowWriter(fps=15)
#ani.save('animation.gif',writer=writer)
#plt.savefig('scatterplot')




plt.show()


Animating a plot with Matplotlib


import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import pandas as pd
df = pd.read_csv("tempo.csv", dtype=str,sep=";")
print(df)
my_data = np.genfromtxt('./scatterplot.csv', delimiter=';')
x = my_data[:,1]
y = my_data[:,0]
fig = plt.figure()
ax = fig.add_subplot()
plt.ylim(-5, 22)
plt.ylabel ('Temperatura (C)')
plt.xlim(-59, -53)
plt.xlabel ('Estensimetro (mm)')
tempo = ax.text(0.8, 0.9, '', horizontalalignment='center',verticalalignment='center', transform=ax.transAxes)
graph, = plt.plot([], [], 'o', markersize=1)
def animate(i):
    graph.set_data(x[:i+1], y[:i+1])
    tempo.set_text(df.Data[i])
    return graph,
ani = FuncAnimation(fig, animate, frames=2000, interval=50)
#FFwriter = animation.FFMpegWriter(fps=10)
#ani.save('animation.mp4')
plt.show()

Thursday, August 17, 2023

Subpixel corner detection with OpenCV

I tried to see whether the corner detection function could be useful with aruco tags to detect displacements


It may not look like it, but the image above is an animated gif (187 frames)

I took the OpenCV library example and applied it to a series of images of a static Aruco tag captured under different lighting conditions

from __future__ import print_function
import cv2 as cv
import numpy as np
import argparse
import random as rng

source_window = 'Image'
maxTrackbar = 25
rng.seed(12345)

def goodFeaturesToTrack_Demo(val):
    maxCorners = max(val, 1)
    # Parameters for Shi-Tomasi algorithm
    qualityLevel = 0.01
    minDistance = 10
    blockSize = 3
    gradientSize = 3
    useHarrisDetector = False
    k = 0.04
    # Copy the source image
    copy = np.copy(src)
    # Apply corner detection
    corners = cv.goodFeaturesToTrack(src_gray, maxCorners, qualityLevel, minDistance, None, blockSize=blockSize, gradientSize=gradientSize, useHarrisDetector=useHarrisDetector, k=k)
    # Draw corners detected
    radius = 4
    winSize = (5, 5)
    zeroZone = (-1, -1)
    criteria = (cv.TERM_CRITERIA_EPS + cv.TermCriteria_COUNT, 40, 0.001)
    # Calculate the refined corner locations
    corners = cv.cornerSubPix(src_gray, corners, winSize, zeroZone, criteria)
    # Write them down
    for i in range(corners.shape[0]):
        print(corners[i,0,0], ";", corners[i,0,1], ";", end='')

parser = argparse.ArgumentParser(description='Code for Shi-Tomasi corner detector tutorial.')
parser.add_argument('--input', help='Path to input image.', default='pic3.png')
args = parser.parse_args()
src = cv.imread(cv.samples.findFile(args.input))
if src is None:
    print('Could not open or find the image:', args.input)
    exit(0)
src_gray = cv.cvtColor(src, cv.COLOR_BGR2GRAY)
maxCorners = 10  # initial threshold
goodFeaturesToTrack_Demo(maxCorners)
print()


the output of the algorithm looks like this

In the end it emerged that the algorithm selects the corners automatically, in a different order each time, and that lighting strongly influences the computed corner coordinates

In optimal cases (and this is not one of them) I have seen a standard deviation of about 0.2 pixels

It is therefore not a reliable method for change detection
