
Friday, September 27, 2024

OpenCV camera calibration in C++

Besides using a Python script, as seen here, camera calibration can also be done with the C++ sample program.

This is the procedure to build the program from source (the cmake command below assumes it is run from a build directory inside the opencv clone, e.g. opencv/build, with opencv_contrib cloned alongside opencv):

 

git clone https://github.com/opencv/opencv

git clone https://github.com/opencv/opencv_contrib

 

sudo cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_PC_FILE_NAME=opencv.pc \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D PYTHON_EXECUTABLE=/usr/bin/python3 \
-D PYTHON_DEFAULT_EXECUTABLE=$(which python3) \
-D BUILD_EXAMPLES=ON \
-D BUILD_opencv_apps=ON \
-D BUILD_opencv_gapi:BOOL=OFF \
-D BUILD_DOCS=ON \
..

 

sudo make install 

export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib

 

Navigate to the folder

/opencv/samples/cpp/tutorial_code/calib3d/camera_calibration 

and run the command (the pkg-config name opencv matches the OPENCV_PC_FILE_NAME=opencv.pc set during the build)

g++ -ggdb camera_calibration.cpp -o camera_calibration `pkg-config --cflags --libs opencv`

The program needs two XML files: the first with the settings and the second with the list of calibration images.

./camera_calibration parametri.xml

parametri.xml

<?xml version="1.0"?>
<opencv_storage>
<Settings>
<!-- Number of inner corners per a item row and column. (square, circle) -->
<BoardSize_Width>9</BoardSize_Width>
<BoardSize_Height>4</BoardSize_Height>
<!-- The size of a square in some user defined metric system (pixel, millimeter)-->
<Square_Size>0.115</Square_Size>
<Marker_Size>0.090</Marker_Size>
<!-- The type of input used for camera calibration. One of: CHESSBOARD CHARUCOBOARD CIRCLES_GRID ASYMMETRIC_CIRCLES_GRID -->
<Calibrate_Pattern>"CHARUCOBOARD"</Calibrate_Pattern>
<ArUco_Dict_Name>DICT_4X4_250</ArUco_Dict_Name>
<ArUco_Dict_File_Name></ArUco_Dict_File_Name>
<!-- The input to use for calibration.
To use an input camera -> give the ID of the camera, like "1"
To use an input video -> give the path of the input video, like "/tmp/x.avi"
To use an image list -> give the path to the XML or YAML file containing the list of the images, like "/tmp/circles_list.xml"
-->
<Input>"elenco_foto.xml"</Input>
<!-- If true (non-zero) we flip the input images around the horizontal axis.-->
<Input_FlipAroundHorizontalAxis>0</Input_FlipAroundHorizontalAxis>
<!-- Time delay between frames in case of camera. -->
<Input_Delay>100</Input_Delay>
<!-- How many frames to use, for calibration. -->
<Calibrate_NrOfFrameToUse>25</Calibrate_NrOfFrameToUse>
<!-- Consider only fy as a free parameter, the ratio fx/fy stays the same as in the input cameraMatrix.
Use or not setting. 0 - False Non-Zero - True-->
<Calibrate_FixAspectRatio> 1 </Calibrate_FixAspectRatio>
<!-- If true (non-zero) tangential distortion coefficients are set to zeros and stay zero.-->
<Calibrate_AssumeZeroTangentialDistortion>1</Calibrate_AssumeZeroTangentialDistortion>
<!-- If true (non-zero) the principal point is not changed during the global optimization.-->
<Calibrate_FixPrincipalPointAtTheCenter> 1 </Calibrate_FixPrincipalPointAtTheCenter>
<!-- The name of the output log file. -->
<Write_outputFileName>"parametri_camera.xml"</Write_outputFileName>
<!-- If true (non-zero) we write to the output file the feature points.-->
<Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>
<!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
<Write_extrinsicParameters>1</Write_extrinsicParameters>
<!-- If true (non-zero) we write to the output file the refined 3D target grid points.-->
<Write_gridPoints>1</Write_gridPoints>
<!-- If true (non-zero) we show after calibration the undistorted images.-->
<Show_UndistortedImage>1</Show_UndistortedImage>
<!-- If true (non-zero) will be used fisheye camera model.-->
<Calibrate_UseFisheyeModel>0</Calibrate_UseFisheyeModel>
<!-- If true (non-zero) distortion coefficient k1 will be equals to zero.-->
<Fix_K1>0</Fix_K1>
<!-- If true (non-zero) distortion coefficient k2 will be equals to zero.-->
<Fix_K2>0</Fix_K2>
<!-- If true (non-zero) distortion coefficient k3 will be equals to zero.-->
<Fix_K3>0</Fix_K3>
<!-- If true (non-zero) distortion coefficient k4 will be equals to zero.-->
<Fix_K4>1</Fix_K4>
<!-- If true (non-zero) distortion coefficient k5 will be equals to zero.-->
<Fix_K5>1</Fix_K5>
</Settings>
</opencv_storage>
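
Once the calibration has run, the results file named in Write_outputFileName (parametri_camera.xml here) can be read back, for example from Python with cv2.FileStorage; a minimal sketch, assuming the standard node names written by the OpenCV sample:

import cv2

# read back the calibration results written by camera_calibration
fs = cv2.FileStorage("parametri_camera.xml", cv2.FILE_STORAGE_READ)
camera_matrix = fs.getNode("camera_matrix").mat()
dist_coeffs = fs.getNode("distortion_coefficients").mat()
fs.release()

print(camera_matrix)
print(dist_coeffs)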


elenco_foto.xml

<?xml version="1.0"?>
<opencv_storage>
<images>
hikvision/01.jpg
hikvision/02.jpg
hikvision/03.jpg
hikvision/04.jpg
hikvision/05.jpg
hikvision/06.jpg
hikvision/07.jpg
hikvision/08.jpg
hikvision/09.jpg
hikvision/10.jpg
hikvision/11.jpg
hikvision/12.jpg
hikvision/13.jpg
hikvision/14.jpg
hikvision/15.jpg
hikvision/16.jpg
</images>
</opencv_storage>
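
The image list does not need to be written by hand; here is a minimal sketch (my own helper, assuming the photos sit in hikvision/ as above) that emits the same format. OpenCV also ships an imagelist_creator sample for this purpose.

import glob

# write the image list in the OpenCV storage format shown above
with open("elenco_foto.xml", "w") as f:
    f.write('<?xml version="1.0"?>\n<opencv_storage>\n<images>\n')
    for name in sorted(glob.glob("hikvision/*.jpg")):
        f.write(name + "\n")
    f.write('</images>\n</opencv_storage>\n')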





Tuesday, August 23, 2022

Computing the distance between two ArUco tags with OpenCV

Instead of computing the distance between the camera and a tag, in this case I wanted to see whether I could compute the distance between two tags.

I compared the real distance between the two tags with the distance computed by OpenCV: the error is on the order of one centimeter.




In the video, shot from a drone, you can see that the distance between the two tags is correctly read as constant even though the camera position moves.
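
The key idea is that estimatePoseSingleMarkers returns, for each marker, the translation vector tvec of the marker in camera coordinates, so the camera position cancels out and the inter-tag distance is just the norm of the difference of the two vectors; a minimal sketch with hypothetical tvec values:

import numpy as np

# hypothetical translation vectors of the two markers, in camera
# coordinates, in the same unit as the tag size passed to the script
tvec0 = np.array([-0.40, 0.10, 2.50])
tvec1 = np.array([ 0.45, 0.12, 2.55])

# the inter-tag distance is the norm of the difference of the two vectors
print("Dist = %.2f" % np.linalg.norm(tvec1 - tvec0))

The full script: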


'''
Sample Command:-
python detect_aruco_video.py --type DICT_5X5_100 --camera True
python detect_aruco_video.py --type DICT_5X5_100 --camera False --video test_video.mp4 -a 25 -k ./calibration_matrix.npy -d ./distortion_coefficients.npy
'''

import numpy as np
from utils import ARUCO_DICT, aruco_display
import argparse
import time
import cv2
import sys
import math

def isRotationMatrix(R):
    # a valid rotation matrix satisfies R^T R = I
    Rt = np.transpose(R)
    shouldBeIdentity = np.dot(Rt, R)
    I = np.identity(3, dtype=R.dtype)
    n = np.linalg.norm(I - shouldBeIdentity)
    return n < 1e-6

def rotationMatrixToEulerAngles(R):
    assert (isRotationMatrix(R))

    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6

    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0

    return np.array([x, y, z])

# 180 deg rotation matrix around the x axis
R_flip = np.zeros((3, 3), dtype=np.float32)
R_flip[0, 0] = 1.0
R_flip[1, 1] = -1.0
R_flip[2, 2] = -1.0

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--camera", help="Set to True if using webcam")
ap.add_argument("-v", "--video", help="Path to the video file")
ap.add_argument("-t", "--type", type=str, default="DICT_ARUCO_ORIGINAL", help="Type of ArUCo tag to detect")
ap.add_argument("-k", "--K_Matrix", required=True, help="Path to calibration matrix (numpy file)")
ap.add_argument("-d", "--D_Coeff", required=True, help="Path to distortion coefficients (numpy file)")
ap.add_argument("-a", "--aruco_dim", required=True, help="ArUco tag dimension")
args = vars(ap.parse_args())

if args["camera"].lower() == "true":
    video = cv2.VideoCapture(0)
    time.sleep(2.0)
else:
    if args["video"] is None:
        print("[Error] Video file location is not provided")
        sys.exit(1)
    video = cv2.VideoCapture(args["video"])

if ARUCO_DICT.get(args["type"], None) is None:
    print(f"ArUCo tag type '{args['type']}' is not supported")
    sys.exit(0)

arucoDict = cv2.aruco.Dictionary_get(ARUCO_DICT[args["type"]])
calibration_matrix_path = args["K_Matrix"]
distortion_coefficients_path = args["D_Coeff"]
k = np.load(calibration_matrix_path)
d = np.load(distortion_coefficients_path)
arucoParams = cv2.aruco.DetectorParameters_create()

while True:
    ret, frame = video.read()
    time.sleep(0.05)
    if ret is False:
        break

    h, w, _ = frame.shape
    width = 1000
    height = int(width * (h / w))
    frame = cv2.resize(frame, (width, height), interpolation=cv2.INTER_CUBIC)
    corners, ids, rejected = cv2.aruco.detectMarkers(frame, arucoDict, parameters=arucoParams)

    detected_markers = aruco_display(corners, ids, rejected, frame)
    if len(corners) > 0:
        # print(corners)  # positions of the marker corners
        # print(len(ids))
        if len(ids) == 2:
            for i in range(0, len(ids)):
                # tvec is the marker position in camera coordinates
                rvec, tvec, markerPoints = cv2.aruco.estimatePoseSingleMarkers(corners[i], float(args["aruco_dim"]), k, d)
                if i == 0:
                    tvec0 = tvec
                    print(tvec0)
                else:
                    tvec1 = tvec
                    print(tvec1)
            # inter-tag distance = norm of the difference of the translation vectors
            distanza = "Dist=%4.0f" % (np.linalg.norm(tvec1 - tvec0))
            cv2.putText(frame, distanza, (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_4)
    cv2.imshow("Image", detected_markers)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
video.release()
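
Note that tvec, and therefore the distance shown on screen, is expressed in the same unit as the tag size passed with -a: with -a 25 as in the sample command, the distance comes out in centimeters.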

Distance and pose estimation from a single ArUco marker with OpenCV

I wanted to see if I could build a rangefinder using a cheap webcam and ArUco tags.

For the tests I used:

1) Realsense D435, optical stream (1920x1080, 2 Mpx)
2) Logitech C920 (2560x1470, 3.7 Mpx)
3) unbranded Chinese webcam (3624x2448, 8.8 Mpx)
4) DJI Tello video (1280x720, 1 Mpx)

For calibration I used 5x5 and 9x6 patterns printed on A4 sheets.

The estimated calibration error came out as:
Realsense: 0.0371
Logitech: 0.052
Chinese webcam: 0.047
Tello video: 0.027

In general, the higher the camera resolution and the larger the tag, the greater the distance at which the tag is recognized. With a 25 cm 4x4 tag and the highest-resolution camera, OpenCV can still detect it at 30 m; with the other cameras I did not get beyond 20 m.
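
This matches a back-of-the-envelope pinhole estimate (my own sanity check, not from the measurements; the focal length below is a hypothetical value): the tag's side in the image is roughly f_px * L / Z pixels, and detection fails once the whole tag shrinks to a handful of pixels.

# rough pinhole estimate of the tag size in pixels at distance Z
f_px = 1400.0           # focal length in pixels (hypothetical value from calibration)
L = 0.25                # tag side in meters (the 25 cm tag used in the tests)
Z = 30.0                # distance in meters
side_px = f_px * L / Z  # about 11.7 px for the whole tag
print("tag side at %g m: %.1f px" % (Z, side_px))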

Using a laser rangefinder I measured the true tag-to-camera distance and compared it with the distance computed by OpenCV. As the graph shows, OpenCV does resolve the distance; from a first estimate there is an error of 3-4 cm between two nearby positions.



Taking only the first 10 meters, the precision improves.

I also compared AprilTags with ArUco tags, but did not notice significant advantages in using AprilTags.


I did, however, find improvements by enabling subpixel corner refinement in the OpenCV detector parameters.
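
A minimal sketch of how to switch it on with the legacy cv2.aruco API used in the scripts below (window size and iteration count are illustrative values):

import cv2

arucoParams = cv2.aruco.DetectorParameters_create()
# enable subpixel corner refinement (the default is CORNER_REFINE_NONE)
arucoParams.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX
arucoParams.cornerRefinementWinSize = 5         # search window, illustrative
arucoParams.cornerRefinementMaxIterations = 30  # iteration cap, illustrative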



processing single images
----------------------------------------------------------------
'''
Sample Command:-
python3 detect_aruco_images.py --image foto/103.png --type DICT_5X5_100 -a 0.1 -k ./calibration_matrix.npy -d ./distortion_coefficients.npy
'''
import numpy as np
from utils import ARUCO_DICT, aruco_display
import argparse
import cv2
import sys
import math


ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True, help="path to input image containing ArUCo tag")
ap.add_argument("-t", "--type", type=str, default="DICT_ARUCO_ORIGINAL", help="type of ArUCo tag to detect")
ap.add_argument("-k", "--K_Matrix", required=True, help="Path to calibration matrix (numpy file)")
ap.add_argument("-d", "--D_Coeff", required=True, help="Path to distortion coefficients (numpy file)")
ap.add_argument("-a", "--aruco_dim", required=True, help="ArUco tag dimension")
# the tag dimension is the outer edge of the outer black square, not the single inner squares

args = vars(ap.parse_args())

image = cv2.imread(args["image"])
print("Loading image " + args["image"])
height, width, _ = image.shape
# don't resize, for better performance
# h, w, _ = image.shape
# width = 600
# height = int(width * (h / w))
# image = cv2.resize(image, (width, height), interpolation=cv2.INTER_CUBIC)


# verify that the supplied ArUCo tag exists and is supported by OpenCV
if ARUCO_DICT.get(args["type"], None) is None:
    print(f"ArUCo tag type '{args['type']}' is not supported")
    sys.exit(0)

# load the ArUCo dictionary, grab the ArUCo parameters, and detect
# the markers
# print("Detecting '{}' tags....".format(args["type"]))
arucoDict = cv2.aruco.Dictionary_get(ARUCO_DICT[args["type"]])

calibration_matrix_path = args["K_Matrix"]
distortion_coefficients_path = args["D_Coeff"]
k = np.load(calibration_matrix_path)
d = np.load(distortion_coefficients_path)

image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

arucoParams = cv2.aruco.DetectorParameters_create()

# --- 180 deg rotation matrix around the x axis
R_flip = np.zeros((3, 3), dtype=np.float32)
R_flip[0, 0] = 1.0
R_flip[1, 1] = -1.0
R_flip[2, 2] = -1.0

corners, ids, rejected = cv2.aruco.detectMarkers(image, arucoDict, parameters=arucoParams, cameraMatrix=k, distCoeff=d)

if len(corners) > 0:
    # print(corners)  # positions of the marker corners
    for i in range(0, len(ids)):
        rvec, tvec, markerPoints = cv2.aruco.estimatePoseSingleMarkers(corners[i], float(args["aruco_dim"]), k, d)
        # print(rvec)
        # camera-to-marker distance = norm of the translation vector
        dist = math.sqrt((tvec[0][0][0] * tvec[0][0][0]) + (tvec[0][0][1] * tvec[0][0][1]) + (tvec[0][0][2] * tvec[0][0][2]))
        # dist = (dist * 1.060672) + 10.1674
        print("Marker " + str(ids[i]) + "=" + str(dist))

-----------------------------------------------------------------

video processing

-----------------------------------------------------------------

'''
Sample Command:-
python detect_aruco_video.py --type DICT_5X5_100 --camera True
python detect_aruco_video.py --type DICT_5X5_100 --camera False --video test_video.mp4 -a 25 -k ./calibration_matrix.npy -d ./distortion_coefficients.npy
'''

import numpy as np
from utils import ARUCO_DICT, aruco_display
import argparse
import time
import cv2
import sys
import math

def isRotationMatrix(R):
    # a valid rotation matrix satisfies R^T R = I
    Rt = np.transpose(R)
    shouldBeIdentity = np.dot(Rt, R)
    I = np.identity(3, dtype=R.dtype)
    n = np.linalg.norm(I - shouldBeIdentity)
    return n < 1e-6

def rotationMatrixToEulerAngles(R):
    assert (isRotationMatrix(R))

    sy = math.sqrt(R[0, 0] * R[0, 0] + R[1, 0] * R[1, 0])
    singular = sy < 1e-6

    if not singular:
        x = math.atan2(R[2, 1], R[2, 2])
        y = math.atan2(-R[2, 0], sy)
        z = math.atan2(R[1, 0], R[0, 0])
    else:
        x = math.atan2(-R[1, 2], R[1, 1])
        y = math.atan2(-R[2, 0], sy)
        z = 0

    return np.array([x, y, z])

# 180 deg rotation matrix around the x axis
R_flip = np.zeros((3, 3), dtype=np.float32)
R_flip[0, 0] = 1.0
R_flip[1, 1] = -1.0
R_flip[2, 2] = -1.0

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--camera", required=True, help="Set to True if using webcam")
ap.add_argument("-v", "--video", help="Path to the video file")
ap.add_argument("-t", "--type", type=str, default="DICT_ARUCO_ORIGINAL", help="Type of ArUCo tag to detect")
ap.add_argument("-k", "--K_Matrix", required=True, help="Path to calibration matrix (numpy file)")
ap.add_argument("-d", "--D_Coeff", required=True, help="Path to distortion coefficients (numpy file)")
ap.add_argument("-a", "--aruco_dim", required=True, help="ArUco tag dimension")
args = vars(ap.parse_args())

if args["camera"].lower() == "true":
    # read a UDP stream (e.g. relayed from the drone) instead of a local webcam
    video = cv2.VideoCapture('udp://127.0.0.1:5000', cv2.CAP_FFMPEG)
    time.sleep(2.0)
else:
    if args["video"] is None:
        print("[Error] Video file location is not provided")
        sys.exit(1)
    video = cv2.VideoCapture(args["video"])

if ARUCO_DICT.get(args["type"], None) is None:
    print(f"ArUCo tag type '{args['type']}' is not supported")
    sys.exit(0)

arucoDict = cv2.aruco.Dictionary_get(ARUCO_DICT[args["type"]])
calibration_matrix_path = args["K_Matrix"]
distortion_coefficients_path = args["D_Coeff"]
k = np.load(calibration_matrix_path)
d = np.load(distortion_coefficients_path)
arucoParams = cv2.aruco.DetectorParameters_create()

while True:
    ret, frame = video.read()
    time.sleep(0.1)
    if ret is False:
        break

    h, w, _ = frame.shape
    width = 1000
    height = int(width * (h / w))
    frame = cv2.resize(frame, (width, height), interpolation=cv2.INTER_CUBIC)
    corners, ids, rejected = cv2.aruco.detectMarkers(frame, arucoDict, parameters=arucoParams)

    detected_markers = aruco_display(corners, ids, rejected, frame)
    if len(corners) > 0:
        # print(corners)  # positions of the marker corners
        for i in range(0, len(ids)):
            rvec, tvec, markerPoints = cv2.aruco.estimatePoseSingleMarkers(corners[i], float(args["aruco_dim"]), k, d)
            # camera-to-marker distance = norm of the translation vector
            dist = math.sqrt((tvec[0][0][0] * tvec[0][0][0]) + (tvec[0][0][1] * tvec[0][0][1]) + (tvec[0][0][2] * tvec[0][0][2]))
            distanza = "Distanza=%4.0f" % (dist)
            cv2.putText(frame, distanza, (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_4)
            # rotation matrix of the marker with respect to the camera
            # (Rodrigues wants a 3-vector, so flatten the (1,1,3) output)
            R_ct = np.matrix(cv2.Rodrigues(rvec.flatten())[0])
            R_ct = R_ct.T
            roll_marker, pitch_marker, yaw_marker = rotationMatrixToEulerAngles(R_flip * R_ct)
            str_roll = "Roll=%4.0f" % (math.degrees(roll_marker))
            str_pitch = "Pitch=%4.0f" % (math.degrees(pitch_marker))
            str_yaw = "Yaw=%4.0f" % (math.degrees(yaw_marker))

            cv2.putText(frame, str_roll, (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_4)
            cv2.putText(frame, str_pitch, (50, 125), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_4)
            cv2.putText(frame, str_yaw, (50, 150), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2, cv2.LINE_4)
    cv2.imshow("Image", detected_markers)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

cv2.destroyAllWindows()
video.release()

-----------------------------------------------------------------

Debugger integrato ESP32S3

Aggiornamento In realta' il Jtag USB funziona anche sui moduli cinesi Il problema risiede  nell'ID USB della porta Jtag. Nel modulo...