Image Annotation to Object
This page describes how to use the extension “Image Annotation to Object”
Main Toolbar > Extensions > Image Annotation to Object
The “Image Annotation to Object” extension converts image annotations, defined as pixel coordinates or pixel bounds, into 3D coordinates or 3D boxes using the pointcloud resource.
Concepts
Image annotations for mapping run imagery can be defined as pixel coordinates or as pixel bounds delimited by two pixel coordinates. The image pixels are converted into 3D coordinates or 3D boxes using the mapping run pointcloud. Both original and optimized images can be used.
The image_pixel.ini file describes the tags in the annotation file that are used for the conversion. Every annotation can have a tag to overlay on the opened image.
Pixel Definitions
Select how the annotations are defined.
Coordinates
Every annotation is defined by one pixel coordinate consisting of two values: X and Y.
Bounds
Every annotation is defined by 4 pixel coordinates that form the bounds of the rectangle: MinX, MaxX, MinY and MaxY.
Optional
Reference Files
Choose how the reference files are structured:
One file for every image
There is one csv or xml file in the original or processed folder that is linked to an image via matching filenames. The image_pixel.ini file describes the xml tags to be used for conversion.
Element=
Filename=
PixelX=
PixelY=
PixelMinX=
PixelMaxX=
PixelMinY=
PixelMaxY=
Tags=
Attribute0=
Attribute1=
...
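As an illustration, a filled-in image_pixel.ini for the bounds case might look as follows. The tag values below are invented for this example; use the element and attribute names of your own annotation files:

```ini
; Hypothetical example: maps the xml tags of an annotation file
; to the fields used by the conversion.
Element=object
Filename=image
PixelMinX=xmin
PixelMaxX=xmax
PixelMinY=ymin
PixelMaxY=ymax
Tags=class
```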
One file for all images
There is one csv or xml file to be selected in which all annotations are described.
Target
After selecting the target file location and CRS, choose to create 3D point objects or boxes.
Advanced Parameters
Depending on the chosen source and target options, the relevant parameters are enabled.
3D Coordinates from Pixel Coordinates
Slices are created along the vector line between the camera position and the pixel point. Within a radius around the vector, points are detected and clustered to create the 3D coordinate.
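The slice-and-cluster search can be sketched as follows. This is a minimal illustration, not the extension's actual implementation; the function name, default values and the NumPy point-cloud representation are assumptions made for this sketch:

```python
import numpy as np

def find_3d_coordinate(points, camera, direction,
                       slice_thickness=0.5, dist_max=50.0,
                       search_radius=0.3, cluster_pts_min=5):
    """Walk slices along the camera-to-pixel ray and return the centroid
    of the first slice whose points within search_radius form a cluster."""
    direction = direction / np.linalg.norm(direction)
    rel = points - camera
    along = rel @ direction                       # distance along the ray
    perp = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    start = 0.0
    while start < dist_max:                       # Search Dist Max
        in_slice = ((along >= start) & (along < start + slice_thickness)
                    & (perp <= search_radius))    # Search Radius
        candidates = points[in_slice]
        if len(candidates) >= cluster_pts_min:    # Cluster Pts Min
            return candidates.mean(axis=0)        # the resulting 3D coordinate
        start += slice_thickness                  # Search Slice Thickness
    return None   # not converted; such objects are listed in pixels_not_converted.txt
```

Note how the search stops at the first slice that yields a cluster, matching the 3D-coordinate behaviour described below.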
3D Coordinates from Pixel Bounds
Slices are created along the vector line between the camera position and the pixel point. Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D coordinate.
3D Boxes from Pixel Coordinates
3D Boxes from Pixel Bounds
Search Slice Thickness
The thickness of the slices created from camera position to pixel point to search for points.
- For creating 3D Coordinates: once a slice contains an object, the following slices are not searched.
- For creating 3D Boxes: the following slices are searched until Search Dist Max is reached.
Search Dist Max
The maximum distance from the camera to create slices and detect points.
Search Radius and Cluster Distance
The radius around the vector between the camera position and the pixel point in which cluster points are searched within a slice. This value also serves as the cluster distance.
Cluster Pts Min
The minimum number of points to have a cluster. For less dense point clouds, decrease this parameter.
Search Radius
The search radius around the camera-to-pixel direction in which the 3D object is searched.
Ground margin
The margin used to detect that a 3D box has reached the ground, so the clustering can be ended.
Voxel Size
The box is divided into voxels of this size.
Voxel Pts Min
Nearby voxels that contain at least this minimum number of points are clustered into an object.
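The voxel step can be sketched like this. Again a hypothetical illustration, not the extension's code; names and defaults are invented, and adjacency is taken over the 26 neighbouring voxels:

```python
import numpy as np
from collections import Counter, deque
from itertools import product

def voxel_clusters(points, voxel_size=0.2, voxel_pts_min=3):
    """Bin points into voxels of voxel_size, drop voxels below
    voxel_pts_min, and group adjacent voxels into axis-aligned boxes."""
    idx = np.floor(points / voxel_size).astype(int)
    counts = Counter(map(tuple, idx))
    occupied = {v for v, c in counts.items() if c >= voxel_pts_min}
    boxes, seen = [], set()
    for start in occupied:
        if start in seen:
            continue
        queue, cluster = deque([start]), []
        seen.add(start)
        while queue:                              # flood-fill adjacent voxels
            v = queue.popleft()
            cluster.append(v)
            for off in product((-1, 0, 1), repeat=3):
                n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if n in occupied and n not in seen:
                    seen.add(n)
                    queue.append(n)
        cells = np.array(cluster)
        boxes.append((cells.min(axis=0) * voxel_size,        # box min corner
                      (cells.max(axis=0) + 1) * voxel_size)) # box max corner
    return boxes
```

Each returned (min, max) pair is the bounding box of one cluster of occupied voxels; isolated voxels with too few points are discarded.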
Results
The result is an ovf containing the created 3D point objects or boxes.
The pixels_not_converted.txt file lists the objects that couldn't be converted.