This page describes how to use the “Image Annotations to Objects” extension.
Main Toolbar > Extensions > Image Annotations to Objects
The “Image Annotations to Objects” extension makes it possible to convert image annotations defined by pixel coordinates or pixel bounds into 3D coordinates or 3D boxes using the point cloud resource.
Image annotations for mapping run imagery can be defined by two pixel coordinates or by pixel bounds defined by four pixel coordinates. The image pixels are converted into 3D coordinates or 3D boxes using the mapping run point cloud and the original/optimized images.
The image_pixel.ini file describes the tags in the annotation file that are used for the conversion. Every annotation (and optionally its tag) can be overlaid on the opened image.
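The exact keys in image_pixel.ini depend on your installation; purely as a hypothetical illustration of such a tag mapping, it could look like the snippet below. Every key and tag name here is invented for the example and is not the product's actual format.

    ; hypothetical example only - the real keys are product-specific
    [tags]
    x = pixel_x      ; annotation tag holding the X pixel coordinate
    y = pixel_y      ; annotation tag holding the Y pixel coordinate
    label = class    ; annotation tag shown when overlaying on the image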
Converting annotations to objects takes time: expect roughly 90 annotations per minute.
Select how the annotations are defined.
Every annotation is defined by a single point with two pixel coordinates: X and Y.
Every annotation is defined by four pixel coordinates that form the bounds of a rectangle: MinX, MaxX, MinY, and MaxY.
The center of the bounds can be used for the conversion to 3D; in that case the same algorithm as for pixel coordinates is used.
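For example, assuming a CSV source (the id and label columns are hypothetical; X, Y, MinX, MaxX, MinY and MaxY are the values described above), the two definition styles could look like this:

    id,X,Y,label
    17,1024,768,traffic_sign

    id,MinX,MaxX,MinY,MaxY,label
    18,980,1110,700,820,traffic_sign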
Use all annotation files in the camera directory.
Reuse the annotations from the not-converted text file (pixels_not_converted.txt, see below) and convert them again with new parameters.
Open the Annotations procedure to import annotations from a single CSV or XML file.
Open the Image Annotation Editor extension to add or edit annotations.
After selecting the target file location and CRS, choose whether to create 3D point objects or 3D boxes.
In every case, slices are created along the vector between the camera position and the pixel point (a code sketch after these options illustrates the mechanics).
Within a radius around the vector, points are detected and clustered to create the 3D coordinate.
Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D coordinate.
Within a radius around the vector, points are detected and clustered to create the 3D box.
If the objects are located on the ground, use the ground margin.
Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D box.
If the objects are located on the ground, use the ground margin.
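To make the mechanics concrete, below is a minimal Python sketch of the point-based variant, assuming a simple pinhole camera model: a ray is cast from the camera position through the pixel, slices are walked along that ray up to the maximum distance, and the first slice whose points within the search radius form a large enough cluster yields the 3D coordinate. All function names and default values are illustrative assumptions, not the extension's actual implementation, and the real cluster-distance (minimum gap) logic is simplified away.

    import numpy as np

    def pixel_ray(cam_pos, cam_rot, K, u, v):
        """Ray direction from the camera position through pixel (u, v).

        cam_rot is the 3x3 camera-to-world rotation and K the 3x3
        intrinsic matrix; a plain pinhole model is assumed here.
        """
        d = cam_rot @ np.linalg.inv(K) @ np.array([u, v, 1.0])
        return d / np.linalg.norm(d)

    def annotation_to_point(cloud, cam_pos, ray_dir,
                            slice_thickness=0.25,  # slice thickness (m), assumed default
                            max_distance=50.0,     # max camera distance (m), assumed default
                            search_radius=0.5,     # radius around the ray (m), assumed default
                            min_points=10):        # minimum points per cluster, assumed
        """Walk along the ray slice by slice; return the centroid of the
        first slice whose points within search_radius form a large enough
        cluster. Returns None when nothing is found (such an annotation
        would end up in pixels_not_converted.txt)."""
        rel = cloud - cam_pos
        along = rel @ ray_dir                      # distance of each point along the ray
        perp = np.linalg.norm(rel - np.outer(along, ray_dir), axis=1)
        for start in np.arange(0.0, max_distance, slice_thickness):
            in_slice = (along >= start) & (along < start + slice_thickness)
            hits = in_slice & (perp <= search_radius)
            if np.count_nonzero(hits) >= min_points:
                return cloud[hits].mean(axis=0)    # 3D coordinate for the annotation
        return None

The box variants additionally grow the detected cluster into a bounding box, and with the ground margin option the clustering stops once the box reaches the ground so the ground itself is not merged into the object.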
Depending on the chosen source and target options, the applicable parameters are enabled.
The thickness of the slices that are created from the camera position toward the pixel point when searching for points.
The maximum distance from the camera within which slices are created and points are detected.
The radius around the vector between the camera position and the pixel point within which to search for cluster points inside a slice. This value also serves as the cluster distance, i.e. the minimum gap between separate clusters.
The minimum number of points to have a cluster. For less dense point clouds, decrease this parameter.
The search radius around the camera-to-pixel direction within which to search for the 3D object.
The ground margin used to detect that a 3D box has reached the ground so that the clustering can be ended. Use it if the object to identify is located on the ground; if the ground is not removed, the object becomes bigger than it should be.
The points in the slice are divided into voxels of this size.
Nearby voxels that contain at least the minimum number of points are clustered into an object.
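As a rough companion sketch for the voxel-based variants (again an assumption-laden illustration in Python, not the actual implementation): the points of a slice are bucketed into voxels of the chosen size, voxels below the minimum number of points are discarded, and adjacent remaining voxels are merged into one object. The default values mirror the parameters above but are invented for the example.

    from collections import defaultdict
    import numpy as np

    def voxel_cluster(points, voxel_size=0.2, min_points=5):
        """Bucket points into cubic voxels, drop voxels with fewer than
        min_points, and merge adjacent remaining voxels into clusters."""
        voxels = defaultdict(list)
        for i, p in enumerate(points):
            voxels[tuple((p // voxel_size).astype(int))].append(i)
        # Keep only voxels dense enough to count toward an object.
        dense = {v: idx for v, idx in voxels.items() if len(idx) >= min_points}
        # Flood-fill over the 26-neighbourhood to merge nearby voxels.
        clusters, seen = [], set()
        for v in dense:
            if v in seen:
                continue
            stack, cluster = [v], []
            while stack:
                cur = stack.pop()
                if cur in seen:
                    continue
                seen.add(cur)
                cluster.extend(dense[cur])
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            nb = (cur[0] + dx, cur[1] + dy, cur[2] + dz)
                            if nb in dense and nb not in seen:
                                stack.append(nb)
            clusters.append(points[cluster])
        return clusters  # each cluster can then yield a 3D box

A 3D box then follows from a merged cluster's minimum and maximum corners.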
The result is an OVF file in which every object corresponds to one annotation, carrying the attributes from the image_pixel.ini file or the CSV file. The following attributes are added:
The pixels_not_converted.txt file contains the list of annotations that could not be converted.