Friday, March 29, 2024

Gaussian Splatting with Nerfstudio and OpenSplat

I've been getting interested in Gaussian splatting and started out by trying OpenSplat.

Before using OpenSplat, the data must be prepared with Nerfstudio.

I created a Python virtualenv and then installed Nerfstudio and a few dependencies:

pip install nerfstudio

apt install ffmpeg

apt install colmap

At this point I took the Lego sample images and launched the processing (no GPU):

ns-process-data images --data ./data/img/lego/train/ --output-dir ./data/img/elabora --no-gpu

Now it's time to move on to OpenSplat.

I built the Docker image by cloning the project's Git repository and running (the build is fairly quick):

docker build -t opensplat .

At this point, run the container, pointing it at the directory where the Nerfstudio output was saved:

docker run -it -v /home/luca/OpenSplat/NerfStudio/data/img:/data --device=/dev/kfd --device=/dev/dri opensplat:latest bash

and start the processing:

root@c52b8089c5f6:/code/build# ./opensplat ../../data/elabora/ -n 2000


After 2000 steps the result is the splat.ply file, which can be viewed with

https://playcanvas.com/viewer
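
To peek inside the result without a viewer, the PLY file can be inspected from Python; a minimal sketch with the plyfile package (pip install plyfile), just to count the gaussians and list the per-vertex properties the file carries:

from plyfile import PlyData

ply = PlyData.read('splat.ply')
vertex = ply['vertex']
print(len(vertex.data), 'gaussians')
# each vertex carries position, opacity, scale, rotation and color terms
print([p.name for p in vertex.properties])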



Yolo 8 in Google Colab

I tried to take advantage of Google Colab's free resources to redo the YOLO retraining from the previous post.

First, copy the files to Google Drive and grant Colab read access:

import os, sys
from google.colab import drive
drive.mount('/content/drive')  # asks for authorization to access Drive
nb_path = '/content/notebooks'
os.symlink('/content/drive/My Drive/Colab Notebooks', nb_path)
sys.path.insert(0, nb_path)  # make the Drive folder importable

Then install YOLO:

%pip install ultralytics
import ultralytics
ultralytics.checks()

For simplicity I used the command-line mode (just prefix the command with an exclamation mark), pointing it at the Google Drive folders:

!yolo task=detect mode=train model=yolov8m.pt imgsz=640 data=/content/drive/MyDrive/ocean_trash/data.yaml epochs=60 batch=32 name=yolov8m_e60_b32_trash
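
For reference, the same run can also be launched from the Ultralytics Python API instead of the CLI; a minimal sketch with the same parameters:

from ultralytics import YOLO

model = YOLO('yolov8m.pt')  # pretrained medium model as the starting point
model.train(data='/content/drive/MyDrive/ocean_trash/data.yaml',
            imgsz=640, epochs=60, batch=32, name='yolov8m_e60_b32_trash')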

Compared to the test on the laptop, I was able to use YOLO M instead of YOLO N, setting the epochs to 60 and the batch size to 32 (with these settings we are almost at the limit of the 15 GB of memory available on the T4 GPU offered in the free tier).

Compared to the network trained on the laptop with YOLO N, 10 epochs and a batch size of 4, there is a significant improvement on the same training image set, even though the Rope class remains poorly classified.






Thursday, March 28, 2024

Retraining YOLO 8 on beach litter

I tried retraining YOLO 8 on beached litter using the dataset at the link below:

 https://universe.roboflow.com/baeulang/ocean_trash

 


The dataset consists of 1133 images.

The training was run on the NVIDIA GeForce 940M of my Debian laptop. For this reason I had to use the nano model and reduce the batch size from 16 to 8, otherwise I would completely saturate the 1 GB of RAM on the video card:


yolo task=detect mode=train model=yolov8n.pt imgsz=640 data=/home/luca/yolo8/trash/data.yaml epochs=10 batch=4 name=yolov8n_trash


 

yolo detect val model='/home/luca/yolo8/runs/detect/yolov8n_trash4/weights/best.pt'  imgsz=640 data=/home/luca/yolo8/trash/data.yaml


Confusion matrix




At this point I used the trained weights to recognize images that were not part of the training set:

yolo predict model='/home/luca/yolo8/runs/detect/yolov8n_trash4/weights/best.pt' source='/home/luca/yolo8/trash/test/images/CD_9581_jpg.rf.ac853dd458749924fe0891cd4c5f9a21.jpg' imgsz=640
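
The same prediction can also be done from the Python API; a minimal sketch (save=True writes the annotated image under runs/detect):

from ultralytics import YOLO

model = YOLO('/home/luca/yolo8/runs/detect/yolov8n_trash4/weights/best.pt')
results = model.predict(source='/home/luca/yolo8/trash/test/images/CD_9581_jpg.rf.ac853dd458749924fe0891cd4c5f9a21.jpg',
                        imgsz=640, save=True)  # annotated output saved to disk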

 

This is an example of detection:


The result is not exceptional, but keep in mind that the nano model was used as the base.



Friday, March 22, 2024

SV305 raw image

I'm trying to take control of the SVBony SV305 without going through AstroDMX.

The camera can be controlled via the SDK available at https://www.svbony.com/Support/SoftWare-Driver (the driver is proprietary and the library is distributed in compiled form).

First, on Linux, copy the 90-ckusb.rules file into /etc/udev/rules.d and reload the rules with udevadm control --reload-rules.

The camera can be controlled either through the C example or from Python with the library https://pypi.org/project/pysvb/.

In any case, the data is saved in raw format with a GRBG color space; this means the Bayer filter pattern reads GRGRGRGRGR on the first row and BGBGBGBG on the second.

In the demo an 800x600 ROI is selected (the maximum resolution is 1920x1080 at 16 bit), producing an output file of 3,840,000 bytes. Since 800x600 is 480,000 pixels, that makes 8 bytes per pixel (the math being 800x600x4x2).
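As a sketch of what demosaicing the GRBG data looks like in Python, assuming the dump has been reduced to one 16-bit sample per pixel (the actual 8-bytes-per-pixel layout above would need to be unpacked first), OpenCV can do the conversion. Note that OpenCV names its Bayer constants after the 2x2 block starting at the second row and column, so a GRBG sensor maps to COLOR_BayerGB2BGR:

import numpy as np
import cv2

# hypothetical: 800x600 ROI, one little-endian uint16 per pixel
w, h = 800, 600
raw = np.fromfile('SVB_image_0.raw', dtype=np.uint16, count=w * h).reshape(h, w)

# GRBG sensor pattern -> OpenCV's BayerGB constant
bgr16 = cv2.cvtColor(raw, cv2.COLOR_BayerGB2BGR)

# scale 16-bit data down to 8 bit for saving as PNG
cv2.imwrite('demosaiced.png', (bgr16 >> 8).astype(np.uint8))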

Besides RAW16, the supported formats are RAW8, RAW10, RAW12, RAW14, RGB24, RGB32, Y8, Y10, Y12, Y14 and Y16.

http://www.suffolksky.com/2022/04/21/indi-allsky-sv305-gain-and-color-setting/


Scan camera number: 1

Friendly name: SVBONY SV305
Port type: USB2.0
SN: 01232E4B8B52E627EE20081200a4
Device ID: 0x1201
Camera ID: 1
sn: 01232E4B8B52E627EE20081200a4
camera maximum width 1920
camera maximum height 1080
camera color space: color
camera bayer pattern: 2
support bin: 1
support bin: 2
support bin: 0
support img type: 0
support img type: 4
support img type: 5
support img type: 10
support img type: -1
Max depth: 16
Is trigger camera: YES
=================================
control type: 1
control name: Exposure
control Description: Exposure
Maximum value: 1999999991
minimum value: 29
default value: 30000
is auto supported: YES
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 1 value 989
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 0
control name: Gain
control Description: Gain
Maximum value: 720
minimum value: 0
default value: 10
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 0 value 0
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 9
control name: Contrast
control Description: Contrast
Maximum value: 100
minimum value: 0
default value: 50
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 9 value 50
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 10
control name: Sharpness
control Description: Sharpness
Maximum value: 100
minimum value: 0
default value: 0
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 10 value 0
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 11
control name: Saturation
control Description: Saturation
Maximum value: 255
minimum value: 0
default value: 128
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 11 value 128
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 4
control name: WB_R
control Description: WB Red
Maximum value: 511
minimum value: 0
default value: 128
is auto supported: YES
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 4 value 231
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 5
control name: WB_G
control Description: WB Green
Maximum value: 511
minimum value: 0
default value: 128
is auto supported: YES
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 5 value 128
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 6
control name: WB_B
control Description: WB Blue
Maximum value: 511
minimum value: 0
default value: 128
is auto supported: YES
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 6 value 253
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 2
control name: Gamma
control Description: Gamma
Maximum value: 1000
minimum value: 0
default value: 100
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 2 value 100
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 8
control name: Frame speed
control Description: Frame speed
Maximum value: 2
minimum value: 0
default value: 1
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 8 value 2
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 13
control name: Offset
control Description: Offset
Maximum value: 255
minimum value: 0
default value: 0
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 13 value 0
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 12
control name: Auto exposure target
control Description: Auto exposure target
Maximum value: 160
minimum value: 60
default value: 100
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 12 value 100
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 7
control name: Flip
control Description: Flip
Maximum value: 3
minimum value: 0
default value: 0
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 7 value 0
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 18
control name: Bad pixel correction
control Description: Bad pixel correction
Maximum value: 1
minimum value: 0
default value: 1
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 18 value 1
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
=================================
control type: 19
control name: Bad pixel correction threshold
control Description: Bad pixel correction threshold
Maximum value: 200
minimum value: 10
default value: 60
is auto supported: NO
is writable: YES
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
control type 19 value 60
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
camera ROI, startX 0, startY 0, width 1920, height 1080, bin 1
camera ROI, startX 0, startY 0, width 800, height 600, bin 1
cameraMode: 0
pixel size 2.900000
write image file SVB_image_0.raw
drop frames: 0
write image file SVB_image_1.raw
drop frames: 0
write image file SVB_image_2.raw
drop frames: 0
write image file SVB_image_3.raw
drop frames: 0
write image file SVB_image_4.raw
drop frames: 0
write image file SVB_image_5.raw
drop frames: 0
write image file SVB_image_6.raw
drop frames: 0
write image file SVB_image_7.raw
drop frames: 0
write image file SVB_image_8.raw
drop frames: 0
write image file SVB_image_9.raw
drop frames: 0
stop camera capture
close camera

 

OpenCV and UVC cameras

I discovered by chance that UVC cameras are easy to control from OpenCV, in particular as regards the auto-exposure parameter.

To list the camera's capabilities use

v4l2-ctl --list-formats-ext -d 0

where the -d parameter is the device number.

 

===============================================

import cv2

cap = cv2.VideoCapture(0)

# The control range can be viewed through v4l2-ctl -L
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
cap.set(cv2.CAP_PROP_BRIGHTNESS, 64)
cap.set(cv2.CAP_PROP_CONTRAST, 0)
cap.set(cv2.CAP_PROP_EXPOSURE, 100) # on Linux the exposure value is 1/n
#cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1) # toggles auto exposure, 1 = enabled

while(True):
    ret, frame = cap.read()
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
         break

cap.release()
cv2.destroyAllWindows()

Aruco tags and image stacking

To try to improve tag detection (Aruco, AprilTag) I experimented with image stacking to reduce noise.

I found a ready-made solution in this repository:

https://github.com/maitek/image_stacking
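
The idea, in short, is to align the frames and average them so the sensor noise cancels out while the edges stay in place. A minimal sketch of this with OpenCV's ECC alignment, assuming a folder of shots of the same scene (the repository above does essentially this, with an ORB-based variant as well):

import glob
import cv2
import numpy as np

files = sorted(glob.glob('frames/*.jpg'))  # hypothetical folder of shots
base = cv2.imread(files[0]).astype(np.float32) / 255.0
base_gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
stack = base.copy()

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
for f in files[1:]:
    img = cv2.imread(f).astype(np.float32) / 255.0
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    # estimate the shift/rotation between this frame and the first one
    cv2.findTransformECC(base_gray, gray, warp, cv2.MOTION_EUCLIDEAN,
                         criteria, None, 5)
    aligned = cv2.warpAffine(img, warp, (img.shape[1], img.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    stack += aligned

stack /= len(files)  # averaging reduces the noise
cv2.imwrite('stacked.png', (stack * 255).astype(np.uint8))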

Edge definition is markedly improved:

Image stacking

 
Single image

Friday, March 15, 2024

FTP Bash script

A script lifted from Stack Overflow for automated uploads to FTP without encryption:

 

#!/bin/sh
HOST='ftp.xxxxx.altervista.org'
USER='xxxxxxx'
PASSWD='xxxxxxxxxxxxxxxxxxx'
FILE='luca.jpg'

ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
binary
put $FILE
quit
END_SCRIPT
exit 0
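
For completeness, the same unencrypted upload can be done from Python with the standard library's ftplib; a minimal sketch with the same placeholder credentials:

from ftplib import FTP  # plain FTP, no encryption, like the script above

HOST = 'ftp.xxxxx.altervista.org'
USER = 'xxxxxxx'
PASSWD = 'xxxxxxxxxxxxxxxxxxx'
FILE = 'luca.jpg'

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWD)
    with open(FILE, 'rb') as f:
        ftp.storbinary('STOR ' + FILE, f)  # binary-mode upload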

Canon 500D and Gphoto2

gphoto2 allows remote control of some DSLR models. I tried it with my Canon EOS 500D.

 

 

On the first attempt the command gphoto2 --auto-detect
Model                          Port                                            
----------------------------------------------------------
Canon EOS 500D                 usb:001,002    

worked, but the subsequent commands failed with the error:

An error occurred in the io-library ('Could not claim the USB device'): Could not claim interface 0 (Device or resource busy). Make sure no other program (gvfs-gphoto2-volume-monitor) or kernel module (such as sdc2xx, stv680, spca50x) is using the device and you have read/write access to the device.
ERROR: Could not capture image.

After turning the camera off and on again, I was able to list the files on the SD card:

gphoto2 --list-files

Download an image:

gphoto2 --get-file /store_00020001/DCIM/100CANON/IMG_9200.JPG

Take shots:

gphoto2 --capture-image
gphoto2 --capture-image-and-download --filename %m%d%H%M%S.jpg

With this command you get the CR2 raw files:

gphoto2 --get-all-raw-data

To convert the raw files to PNG you can use Darktable in terminal mode:

for pic in *.cr2; do darktable-cli "$pic" "$(basename "${pic%.cr2}.png")"; done

Or ImageMagick:

mogrify -format png *.cr2

The Python library is also worth a look:

https://github.com/jim-easterbrook/python-gphoto2
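
As a minimal sketch of a remote capture with that library, adapted from its published examples (untested on the 500D):

import gphoto2 as gp

camera = gp.Camera()
camera.init()
# trigger the shutter; returns the path of the new file on the SD card
file_path = camera.capture(gp.GP_CAPTURE_IMAGE)
camera_file = camera.file_get(file_path.folder, file_path.name,
                              gp.GP_FILE_TYPE_NORMAL)
camera_file.save(file_path.name)  # download next to the script
camera.exit()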

Thursday, March 14, 2024

Pyrealsense

WARNING: to work at full resolution, the RealSense must use a USB 3 port and a USB 3-rated cable, otherwise it is limited to 640x480.

 

==================================================

Color only 1920x1080

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math

pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)

profile = pipeline.start(config)

align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()
aligned_frames = align.process(frames)
color_frame = aligned_frames.get_color_frame()
color_image = np.asanyarray(color_frame.get_data())
imageName1 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Color.png'
cv2.imwrite(imageName1, color_image)
pipeline.stop()


 

================================================== 

Color + Depth 1280x720

 

================================================== 

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math

pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()

# We will be removing the background of objects more than
#  clipping_distance_in_meters meters away
clipping_distance_in_meters = 1.5
clipping_distance = clipping_distance_in_meters / depth_scale


align_to = rs.stream.color
align = rs.align(align_to)

frames = pipeline.wait_for_frames()

aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()

depth_image = np.asanyarray(aligned_depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())


# Remove background - Set pixels further than clipping_distance to grey
grey_color = 153
depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) #depth image is 1 channel, color is 3 channels
bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

# Render images
depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
images = np.hstack((bg_removed, depth_colormap))
#cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)

# Filename
imageName1 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Color.png'
imageName2 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_Depth.png'
imageName3 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_bg_removed.png'
imageName4 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_ColorDepth.png'
imageName5 = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_DepthColormap.png'

# Saving the image
cv2.imwrite(imageName1, color_image)
cv2.imwrite(imageName2, depth_image)
cv2.imwrite(imageName3, bg_removed)  # background-removed color image
cv2.imwrite(imageName4, images)  # side-by-side color + depth colormap
cv2.imwrite(imageName5, depth_colormap )

pipeline.stop()



================================================== 

Infrared 1280x720

================================================== 

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math
 
pipeline = rs.pipeline()
config = rs.config()

config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)
profile = pipeline.start(config)
 
frames = pipeline.wait_for_frames()
ir1_frame = frames.get_infrared_frame(1)
image = np.asanyarray(ir1_frame.get_data())
imageIR = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_IR.png'
cv2.imwrite(imageIR, image)
pipeline.stop()




================================================== 

Infrared, IR emitter OFF, 1280x720

==================================================

import pyrealsense2 as rs
import numpy as np
import cv2
import time
import math
 
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.infrared, 1, 1280, 720, rs.format.y8, 30)

# disable the IR emitter
pipeline_profile = pipeline.start(config)
device = pipeline_profile.get_device()
depth_sensor = device.query_sensors()[0]
if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 0)

frames = pipeline.wait_for_frames()
ir1_frame = frames.get_infrared_frame(1)
image = np.asanyarray(ir1_frame.get_data())
imageIR = str(time.strftime("%Y_%m_%d_%H_%M_%S")) +  '_IR_OFF.png'
cv2.imwrite(imageIR, image)
pipeline.stop() 

 


================================================== 

IMU

==================================================

import pyrealsense2 as rs

import numpy as np


def initialize_camera():
    # start the frames pipe
    p = rs.pipeline()
    conf = rs.config()
    conf.enable_stream(rs.stream.accel)
    conf.enable_stream(rs.stream.gyro)
    prof = p.start(conf)
    return p


def gyro_data(gyro):
    return np.asarray([gyro.x, gyro.y, gyro.z])


def accel_data(accel):
    return np.asarray([accel.x, accel.y, accel.z])

p = initialize_camera()
try:
    while True:
        f = p.wait_for_frames()
        accel = accel_data(f[0].as_motion_frame().get_motion_data())
        gyro = gyro_data(f[1].as_motion_frame().get_motion_data())
        print("accelerometer: ", accel)
        print("gyro: ", gyro)

finally:
    p.stop()

 

gyro:  [0.         0.         0.00349066]
accelerometer:  [-0.24516624 -8.78675842 -2.75566864]

 

================================================== 

SET EXPOSURE

==================================================

import pyrealsense2 as rs
pipeline = rs.pipeline()
config = rs.config()
profile = pipeline.start(config) # Start streaming
sensor_dep = profile.get_device().first_depth_sensor()
print("Trying to set Exposure")
exp = sensor_dep.get_option(rs.option.exposure)
print ("exposure = %d" % exp)
print ("Setting exposure to new value")
exp = sensor_dep.set_option(rs.option.exposure, 25000)
exp = sensor_dep.get_option(rs.option.exposure)
print ("New exposure = %d" % exp)
pipeline.stop()  # stop streaming (note: stop is a method, it needs the parentheses)

Exposure can also be adjusted on a ROI:

p = rs.pipeline()
prof = p.start()
s = prof.get_device().first_roi_sensor()
roi = s.get_region_of_interest()
# adjust roi.min_x, roi.min_y, roi.max_x, roi.max_y as needed, then:
s.set_region_of_interest(roi)

 

================================================== 

ADVANCED MODE (fine-tunes detailed parameters)

==================================================

 

import pyrealsense2 as rs
import time
import json

DS5_product_ids = ["0AD1", "0AD2", "0AD3", "0AD4", "0AD5", "0AF6", "0AFE", "0AFF", "0B00", "0B01", "0B03", "0B07", "0B3A", "0B5C"]

def find_device_that_supports_advanced_mode() :
    ctx = rs.context()
    ds5_dev = rs.device()
    devices = ctx.query_devices();
    for dev in devices:
        if dev.supports(rs.camera_info.product_id) and str(dev.get_info(rs.camera_info.product_id)) in DS5_product_ids:
            if dev.supports(rs.camera_info.name):
                print("Found device that supports advanced mode:", dev.get_info(rs.camera_info.name))
            return dev
    raise Exception("No D400 product line device that supports advanced mode was found")

try:
    dev = find_device_that_supports_advanced_mode()
    advnc_mode = rs.rs400_advanced_mode(dev)
    print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Loop until we successfully enable advanced mode
    while not advnc_mode.is_enabled():
        print("Trying to enable advanced mode...")
        advnc_mode.toggle_advanced_mode(True)
        # At this point the device will disconnect and re-connect.
        print("Sleeping for 5 seconds...")
        time.sleep(5)
        # The 'dev' object will become invalid and we need to initialize it again
        dev = find_device_that_supports_advanced_mode()
        advnc_mode = rs.rs400_advanced_mode(dev)
        print("Advanced mode is", "enabled" if advnc_mode.is_enabled() else "disabled")

    # Get each control's current value
    print("Depth Control: \n", advnc_mode.get_depth_control())
    print("RSM: \n", advnc_mode.get_rsm())
    print("RAU Support Vector Control: \n", advnc_mode.get_rau_support_vector_control())
    print("Color Control: \n", advnc_mode.get_color_control())
    print("RAU Thresholds Control: \n", advnc_mode.get_rau_thresholds_control())
    print("SLO Color Thresholds Control: \n", advnc_mode.get_slo_color_thresholds_control())
    print("SLO Penalty Control: \n", advnc_mode.get_slo_penalty_control())
    print("HDAD: \n", advnc_mode.get_hdad())
    print("Color Correction: \n", advnc_mode.get_color_correction())
    print("Depth Table: \n", advnc_mode.get_depth_table())
    print("Auto Exposure Control: \n", advnc_mode.get_ae_control())
    print("Census: \n", advnc_mode.get_census())

    #To get the minimum and maximum value of each control use the mode value:
    query_min_values_mode = 1
    query_max_values_mode = 2
    current_std_depth_control_group = advnc_mode.get_depth_control()
    min_std_depth_control_group = advnc_mode.get_depth_control(query_min_values_mode)
    max_std_depth_control_group = advnc_mode.get_depth_control(query_max_values_mode)
    print("Depth Control Min Values: \n ", min_std_depth_control_group)
    print("Depth Control Max Values: \n ", max_std_depth_control_group)

    # Set some control with a new (median) value
    current_std_depth_control_group.scoreThreshA = int((max_std_depth_control_group.scoreThreshA - min_std_depth_control_group.scoreThreshA) / 2)
    advnc_mode.set_depth_control(current_std_depth_control_group)
    print("After Setting new value, Depth Control: \n", advnc_mode.get_depth_control())

    # Serialize all controls to a Json string
    serialized_string = advnc_mode.serialize_json()
    print("Controls as JSON: \n", serialized_string)
    as_json_object = json.loads(serialized_string)

    # We can also load controls from a json string
    # For Python 2, the values in 'as_json_object' dict need to be converted from unicode object to utf-8
    if type(next(iter(as_json_object))) != str:
        as_json_object = {k.encode('utf-8'): v.encode("utf-8") for k, v in as_json_object.items()}
    # The C++ JSON parser requires double-quotes for the json object so we need
    # to replace the single quote of the pythonic json to double-quotes
    json_string = str(as_json_object).replace("'", '\"')
    advnc_mode.load_json(json_string)

except Exception as e:
    print(e)
    pass

From VirtualBox to a PC

The idea behind this experiment was to configure a thin client without connecting a keyboard, monitor or mouse to it.

I had a Praim C33 on trial and decided to use it as the test machine.

 


The machine has 2 GB of RAM and an 8 GB mSATA disk, so the distro has to be very lightweight... I settled on Alpine Linux.



On VirtualBox I created a virtual machine with Alpine and Docker (about 500 MB of disk space used) and configured users, networking and various extras.

Then the VDI file was converted to a raw IMG with the command:


VBoxManage internalcommands converttoraw Alpine.vdi Alpine.img

The image was then written to the mSATA disk through a USB-mSATA adapter, either with BalenaEtcher or with dd:

dd if=Alpine.img of=/dev/sdb bs=1k conv=sync,noerror status=progress


(Warning: the disk is rated at 8 GB but turned out to be slightly smaller, so the VirtualBox image had to be under 8 GB.)

At this point I reassembled the thin client and it booted normally with the image created on VirtualBox... note that the BIOS must be set to Legacy or Legacy+UEFI, not UEFI only.

 

ESP32S3 integrated debugger

Update: the USB JTAG actually works on the Chinese modules too. The problem lies in the USB ID of the JTAG port. In the module...