
Laser triangulation (sheet of light): a detailed walkthrough of the HALCON 3D reconstruction example Reconstruct_Connection_Rod_Calib.hdev

Original author: aircraft

Original link: https://www.cnblogs.com/DOMLX/p/11555100.html

Introduction: My company's recent projects involve HALCON 3D template matching, 3D reconstruction, and camera calibration, so I have been studying these topics. Here I share my personal understanding of a laser triangulation example.

1. Reconstruct_Connection_Rod_Calib.hdev

First, look at what this HALCON example does:

A laser line is swept across the part, and each pass is captured as a profile image of the light sheet, from which the depth information is measured later.

The example creates a sheet-of-light model, processes the entire connection_rod image series, and reconstructs a model of the original part.

The sheet-of-light result also provides the calibrated X, Y, and Z coordinate images.

Finally, we can call the HALCON operator visualize_object_model_3d (WindowHandle, ObjectModel3DID, CameraParam1, PoseIn, 'color', 'blue', 'Reconstructed Connection Rod', '', Instructions, PoseOut) to display the reconstructed 3D model of the part.

The model can be moved freely with the mouse, just like in my previous blog post on importing and displaying 3D models with OpenGL, which added mouse rotation to the display.

2. Laser triangulation

Laser triangulation gives lidar designs low cost and high precision; this cost-effectiveness has made such lidars the first choice for indoor service robot navigation. This section introduces the core components of a lidar and focuses on the ranging principle of triangulation lidar.

The four core components of a lidar

A lidar mainly consists of four core components: a laser, a receiver, a signal processing unit, and a rotating mechanism.

Laser: the laser is the emitting component of the lidar. During operation it fires in a pulsed manner; the Slamtec RPLIDAR A3 series, for example, switches on and off 16,000 times per second.

Receiver: after the laser beam is emitted and strikes an obstacle, the light reflected by the obstacle is converged by a lens group onto the receiver.

Signal processing unit: the signal processing unit controls the laser emission, processes the signal picked up by the receiver, and calculates the distance to the target object from this information.

Rotating mechanism: the three components above form the core measuring unit. The rotating mechanism spins this unit at a stable speed so that it scans the surrounding plane and generates a real-time plan view of the environment.

 

The principle of laser triangulation

The main ranging principles used in lidar are the pulsed (time-of-flight) method, the coherent method, and triangulation. The pulsed and coherent methods place high demands on the lidar hardware, but their measurement accuracy is much higher than that of laser triangulation, so they are mainly used in military applications. Laser triangulation, whose low cost and accuracy satisfy most commercial and civil requirements, has therefore attracted widespread attention.

In laser triangulation, a laser illuminates the object under measurement at a fixed angle of incidence. The beam is reflected and scattered at the target surface, and at another point a lens converges the reflected light, imaging the spot onto a CCD (charge-coupled device) position sensor. When the object moves along the direction of the laser, the spot on the sensor shifts; the spot displacement corresponds to the distance moved by the object, so the distance between the object and the baseline can be computed from the spot displacement together with the system's design parameters. Because the incident and reflected rays form a triangle and the spot displacement is evaluated with triangle geometry, the method is called laser triangulation.

According to the angle between the incident beam and the surface normal of the object under test, laser triangulation is divided into two types: direct and oblique.

1. Direct laser triangulation

As shown in Figure 1, when the laser beam is incident perpendicular to the object surface, i.e., the incident ray is collinear with the surface normal, the arrangement is called direct laser triangulation.

[Figure 1: light path of direct laser triangulation]

2. Oblique laser triangulation

When the laser beam is incident at an angle of less than 90° to the object surface normal, the arrangement is called the oblique method. The light path of oblique laser triangulation is shown in Figure 2.

[Figure 2: light path of oblique laser triangulation]

The laser emits a beam that strikes the object surface at an angle to the surface normal; the reflected (scattered) light is converged by the lens into an image at B and is finally collected by the photosensitive unit.

As seen in Figure 2, the incident ray AO makes an angle α with the baseline AB, AB is the distance from the laser center to the CCD center, BF is the lens focal length f, and D is the limiting position at which light reflected from an object infinitely far from the baseline is imaged on the photosensitive unit. DE is the displacement of the light spot away from this limiting position on the photosensitive unit, denoted x. Once the optical system is fixed, α, AB, and f are known parameters. From the geometry of the light path, △ABO ∽ △DEB, which gives a relation between the side lengths:

[Equation: side-length relation following from △ABO ∽ △DEB]

from which the object distance is easily obtained.

Once the optical path of the system is fixed and the axis of the CCD position sensor is parallel to the baseline AB (taken as the y-axis), the pixel coordinates (Px, Py) of the laser spot, obtained by the spot detection algorithm, yield the value of x:

x = CellSize × DeviationValue

where CellSize is the size of a single pixel on the photosensitive unit and DeviationValue is the deviation, in pixels, between the detected projection point and the limiting projection position. When the object under test moves relative to the baseline AB and x changes to x′, the distance y moved by the object follows from the relations above:

[Equation: object displacement y as a function of the spot displacement x]
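
To make the pixel-to-metric conversion concrete, here is a minimal HDevelop-style sketch. All values are hypothetical, and PxLimit (the pixel column of the limiting point D) is a helper name introduced here purely for illustration:

* Sketch: metric spot displacement x from pixel data (all values assumed):
* CellSize = pixel size on the sensor [m], Px = detected spot column [px],
* PxLimit = column of the limiting position D [px]
CellSize := 5.5e-6
Px := 612.3
PxLimit := 400.0
* deviation in pixels, then spot displacement DE = x in meters
DeviationValue := Px - PxLimit
x := CellSize * DeviationValue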

 

Single-point laser ranging principle

The single-point laser ranging setup is shown in Figure 2-6. The laser head and the camera lie on the same horizontal line (called the baseline); their separation is s, the camera focal length is f, and the angle between the laser beam and the baseline is β.

Suppose the laser hits the target at point Object; the light reflected back is imaged on the camera's image plane at point P.

[Figure 2-6: schematic of single-point laser ranging]

Using the geometry of similar triangles, the triangle formed by the laser, the camera, and the target object is similar to the triangle formed by the camera, the image point P, and the auxiliary point P′.

Let PP′ = x, with q and d as shown in the figure. By similar triangles we have:

                f/x = q/s  ==>  q = f*s/x

 

The computation of x can be divided into two parts:

        x = x1 + x2 = f/tan(β) + pixelSize * position

 

where pixelSize is the size of a pixel and position is the pixel coordinate of the image point relative to the imaging center.

Finally, the distance d is determined by:

                     d = q/sin(β)
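
Putting the three formulas together, here is a minimal HDevelop-style sketch of the complete single-point calculation. All parameter values (s, f, β, pixelSize, position) are assumed, chosen only to illustrate the arithmetic:

* Sketch: single-point laser ranging from the formulas above (values assumed):
* s = baseline between laser head and camera [m], f = focal length [m],
* beta = angle between laser beam and baseline,
* pixelSize = pixel pitch [m], position = spot offset from image center [px]
s := 0.06
f := 0.004
beta := rad(60)
pixelSize := 6e-6
position := 120
* x = x1 + x2 = f/tan(beta) + pixelSize * position
x := f / tan(beta) + pixelSize * position
* from f/x = q/s:
q := f * s / x
* final distance to the object:
d := q / sin(beta)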

 

3. Code notes

 

Read the comments bit by bit and take your time; combined with an understanding of laser triangulation, you should be able to follow this HALCON example and run it.

 

If, while reading, you do not understand an operator or have questions about its parameters, simply double-click the operator to open the help manual, which gives each operator's parameter descriptions and usage examples.

In general, dev_update_off is placed at the beginning of the program (closing any windows left over from a previous run, if necessary), and dev_update_on at the end. The related operators are:

dev_update_window: defines whether iconic objects are displayed in the graphics window during program execution. In single-step mode this rule does not apply: after each single operator call, objects are always displayed in the graphics window. When measuring the running time of operators, it should be set to 'off' to reduce the influence of HDevelop's GUI updates on the runtime.

dev_update_pc: controls whether the program counter is updated during program execution.

dev_update_var: controls whether the variable window is updated during program execution; whenever the program modifies a variable, the contents of the variable window (iconic and control variables) change.

dev_update_time: controls whether the execution time of each operator is displayed.
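
For reference, a minimal sketch of the usual pattern (dev_update_off at the start is a shorthand that switches the window, variable, and program-counter updates off):

* Minimal sketch: suppress GUI updates while the program runs
dev_update_off ()
* ...program body: acquisition, measurement, display...
dev_update_on ()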

* First, create a sheet-of-light model and set suitable parameters,
* then acquire a series of profile images in succession.
* Finally, retrieve the disparity image, the score image, the calibrated
* coordinates X, Y, and Z, and the measured 3D object model from the model
* and display them.
*
dev_update_off ()   // switch off GUI updates while the program runs
read_image (ProfileImage, 'sheet_of_light/connection_rod_001')   // read the first profile image
dev_close_window ()   // close the old window
dev_open_window_fit_image (ProfileImage, 0, 0, 1024, 768, WindowHandle1)   // open a new window fitted to the image
set_display_font (WindowHandle1, 14, 'mono', 'true', 'false')   // set the font
dev_set_draw ('margin')   // draw region outlines only
dev_set_line_width (3)   // set the line width
dev_set_color ('green')   // set the drawing color to green
dev_set_lut ('default')   // default look-up table
*
* Set the poses and camera parameters needed for the calibrated measurement
* Internal camera parameters
CamParam := [0.0126514,640.275,-2.07143e+007,3.18867e+011,-0.0895689,0.0231197,6.00051e-006,6e-006,387.036,120.112,752,240]
CamPose := [-0.00164029,1.91372e-006,0.300135,0.575347,0.587877,180.026,0]   // camera pose
LightplanePose := [0.00270989,-0.00548841,0.00843714,66.9928,359.72,0.659384,0]   // pose of the light-sheet plane
MovementPose := [7.86235e-008,0.000120112,1.9745e-006,0,0,0,0]   // movement pose (motion between two successive profiles)
*
* Create the model used to process the profile images and set the required parameters
gen_rectangle1 (ProfileRegion, 120, 75, 195, 710)   // rectangle enclosing the laser line
* Create a sheet-of-light model for the 3D measurement
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles','ambiguity_solving'], [70,290,'first'], SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'xyz')   // calibrated reconstruction in x, y, and z
set_sheet_of_light_param (SheetOfLightModelID, 'scale', 'mm')   // result unit: millimeters
set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', CamParam)   // internal camera parameters
set_sheet_of_light_param (SheetOfLightModelID, 'camera_pose', CamPose)   // camera pose
set_sheet_of_light_param (SheetOfLightModelID, 'lightplane_pose', LightplanePose)   // pose of the light plane
set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', MovementPose)   // movement pose, usually in world coordinates
*
* Measure the profile in each of the successive images
for Index := 1 to 290 by 1
    read_image (ProfileImage, 'sheet_of_light/connection_rod_' + Index$'.3')   // read the image
    dev_display (ProfileImage)   // display the image
    dev_display (ProfileRegion)   // display the profile region
    measure_profile_sheet_of_light (ProfileImage, SheetOfLightModelID, [])   // measure the profile and store it in the model
    disp_message (WindowHandle1, 'Acquiring profile images', 'window', -1, -1, 'black', 'true')   // display information
endfor
*
* Retrieve the sheet-of-light results
get_sheet_of_light_result (Disparity, SheetOfLightModelID, 'disparity')   // disparity image
get_sheet_of_light_result (X, SheetOfLightModelID, 'x')   // calibrated X coordinates
get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')   // calibrated Y coordinates
get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')   // calibrated Z coordinates
get_sheet_of_light_result_object_model_3d (SheetOfLightModelID, ObjectModel3DID)   // measured 3D object model
clear_sheet_of_light_model (SheetOfLightModelID)   // free the sheet-of-light model
*
* Display the disparity image
get_image_size (Disparity, Width, Height)   // get the image size
dev_set_window_extents (0, 0, Width, Height)   // adjust the window size
dev_set_lut ('temperature')
set_display_font (WindowHandle1, 16, 'mono', 'true', 'false')   // set the font
dev_clear_window ()   // clear the window
dev_display (Disparity)
disp_message (WindowHandle1, 'Disparity image produced by the sheet-of-light reconstruction', 'window', -1, -1, 'black', 'true')   // display information
disp_continue_message (WindowHandle1, 'black', 'true')   // display the continue message
stop ()   // pause
*
* Display the Z coordinates
dev_close_window ()   // close the window
dev_open_window (Height + 10, 0, Width * .5, Height * .5, 'black', WindowHandle3)   // open window 3
set_display_font (WindowHandle3, 16, 'mono', 'true', 'false')   // set the font
dev_display (Z)   // display the Z image
disp_message (WindowHandle3, 'Calibrated Z coordinates', 'window', -1, -1, 'black', 'true')   // display information
*
* Display the Y coordinates
dev_open_window ((Height + 10) * .5, 0, Width * .5, Height * .5, 'black', WindowHandle2)   // open window 2
set_display_font (WindowHandle2, 16, 'mono', 'true', 'false')   // set the font
dev_display (Y)   // display the Y image
disp_message (WindowHandle2, 'Calibrated Y coordinates', 'window', -1, -1, 'black', 'true')
*
* Display the X coordinates
dev_open_window (0, 0, Width * .5, Height * .5, 'black', WindowHandle1)   // open window 1
dev_display (X)   // display the X image
dev_set_lut ('default')
set_display_font (WindowHandle1, 16, 'mono', 'true', 'false')   // set the font
disp_message (WindowHandle1, 'Calibrated X coordinates', 'window', -1, -1, 'black', 'true')   // display information
disp_continue_message (WindowHandle3, 'black', 'true')   // display the continue message
stop ()   // pause
*
* Display the 3D object model
CameraParam1 := [0.012,0,6e-006,6e-006,376,240,752,480]   // internal camera parameters for the visualization
Instructions[0] := 'Rotation: Left mouse button'
Instructions[1] := 'Zoom: Shift + left mouse button'
Instructions[2] := 'Move: Ctrl + left mouse button'
PoseIn := [0,-10,300,-30,0,-30,0]
dev_close_window ()   // close window 2
dev_close_window ()   // close window 3
dev_close_window ()   // close window 1
dev_open_window (0, 0, CameraParam1[6], CameraParam1[7], 'black', WindowHandle)   // open a new window
set_display_font (WindowHandle, 16, 'mono', 'true', 'false')   // set the font
visualize_object_model_3d (WindowHandle, ObjectModel3DID, CameraParam1, PoseIn, 'color', 'blue', 'Reconstructed Connection Rod', '', Instructions, PoseOut)   // interactive display of the reconstructed model
* Clear the 3D object model
clear_object_model_3d (ObjectModel3DID)

 

About the disparity (parallax) image mentioned above:

How should disparity be understood? When studying binocular depth estimation, the formula D = B × f / d (D: depth, B: baseline, f: focal length, d: disparity) is often used to infer depth from disparity. So what exactly is d? Hold out your right index finger at different distances from your eyes. Close your left eye and look at the finger, then close your right eye and look again: what the left and right eyes see is not the same. Furthermore, for objects near the eyes the apparent shift (the disparity) is larger, while for objects far from the eyes it is smaller. The difference between the positions of the image points of the same physical point in the two images is what we call the disparity, and the image formed from these differences is the disparity image.

From the formula it is easy to see that depth and disparity are inversely proportional: the larger the disparity, the smaller the depth. A small numeric sketch follows.
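
Here is a minimal HDevelop-style sketch of D = B × f / d; the stereo parameters (B, f, pixelSize, dPixels) are all assumed values, chosen only to illustrate the relationship:

* Sketch: depth from disparity, Depth = B * f / d (all values assumed):
* B = stereo baseline [m], f = focal length [m],
* pixelSize = pixel pitch [m], dPixels = measured disparity [px]
B := 0.06
f := 0.004
pixelSize := 6e-6
dPixels := 25
* disparity in meters, then depth of the scene point (here: 1.6 m)
d := dPixels * pixelSize
Depth := B * f / d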

With the disparity we can infer a depth map. A depth map (range image) is an image whose pixel values are the distances (depths) from the camera to the corresponding points in the scene. There are many ways to obtain a depth map, for example: lidar depth imaging, computed stereo imaging, coordinate measuring machines, moiré methods, structured light, and so on (the Kinect camera is one example).

Reference blog: Disparity (parallax), a simple explanation: https://blog.csdn.net/weixin_40367126/article/details/90753760

Reference blog: HALCON example: laser triangulation, Reconstruct_Connection_Rod_Calib: https://weibo.com/ttarticle/p/show?id=2309404407678905483355

 

If you are interested in exchanging ideas about technology, you can follow my public account, where I occasionally share programming tutorials and source code on C/C++, Python, front-end and back-end development, OpenCV, HALCON, OpenGL, and machine learning and deep learning, covering basic programming, image processing, and machine vision development.
