
Image Annotation to Object

This page describes how to use the “Image Annotation to Object” extension.

Main Toolbar > Extensions > Image Annotation to Object

The “Image Annotation to Object” extension converts image annotations defined with pixel coordinates or pixel bounds into 3D coordinates or 3D boxes, using the pointcloud resource.

Concepts

Image annotations for mapping run imagery can be defined by a single pixel coordinate (two pixel values, X and Y) or by pixel bounds (four pixel values). The image pixels are converted into 3D coordinates or 3D boxes using the mapping run pointcloud and the original or optimized images.

The image_pixel.ini file describes which tags in the annotation file are used for the conversion. Every annotation can have a tag that is overlaid on the opened image.

Pixel Definitions

Select how the annotations are defined.

Coordinates

Every annotation is defined by one coordinate consisting of two pixel values: X and Y.

Bounds

Every annotation is defined by four pixel values that form the bounds of a rectangle: MinX, MaxX, MinY and MaxY.

Optional

Reference Files

Choose how the reference files are structured:

One file for every image

There is one CSV or XML file in the original or processed folder that is linked to an image via matching filenames. The image_pixel.ini file describes the XML tags to be used for the conversion.

    Element=
    Filename=
    PixelX=
    PixelY=
    PixelMinX=
    PixelMaxX=
    PixelMinY=
    PixelMaxY=
    Tags=
    Attribute0=
    Attribute1=
    ...    
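As an illustration, a filled-in image_pixel.ini for a hypothetical annotation file could look as follows; the values on the right-hand side are example tag names from that annotation file, not fixed or required names:

    Element=Annotation
    Filename=ImageName
    PixelX=CenterX
    PixelY=CenterY
    PixelMinX=BoundsMinX
    PixelMaxX=BoundsMaxX
    PixelMinY=BoundsMinY
    PixelMaxY=BoundsMaxY
    Tags=Label
    Attribute0=Confidence
    Attribute1=Class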

One file for all images

There is one CSV or XML file to be selected in which all annotations are described.
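For example, a hypothetical CSV with all annotations could contain one row per annotation; the column names, delimiter and values below are purely illustrative and must match what is configured in image_pixel.ini:

    ImageName;CenterX;CenterY;Label;Confidence;Class
    run01_cam0_000123.jpg;1523;884;traffic_sign;0.92;stop
    run01_cam0_000124.jpg;640;712;hydrant;0.87;fire_hydrant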

Target

After selecting the target file location and CRS, choose to create 3D point objects or boxes.


Advanced Parameters

Depending on the chosen source and target options, the relevant parameters are enabled.

3D Coordinates from Pixel Coordinates

Slices are created along the vector line between the camera position and the pixel point. Within a radius around the vector, points are detected and clustered to create the 3D coordinate.
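As a simplified sketch of this search, assuming a NumPy pointcloud, a known camera position and a direction vector from the camera through the annotated pixel (the function and parameter names below are illustrative, not the extension's API):

    import numpy as np

    def pixel_to_3d_coordinate(points, camera_pos, ray_dir,
                               slice_thickness=0.5, search_radius=0.3,
                               search_dist_max=30.0, cluster_pts_min=10):
        # points: (N, 3) pointcloud, camera_pos: (3,), ray_dir: (3,) direction vector
        ray_dir = ray_dir / np.linalg.norm(ray_dir)
        rel = points - camera_pos
        along = rel @ ray_dir                                           # distance along the ray
        perp = np.linalg.norm(rel - np.outer(along, ray_dir), axis=1)   # distance to the ray

        # Keep only points in front of the camera, within Search Dist Max
        # and within the Search Radius around the ray.
        keep = (along > 0) & (along <= search_dist_max) & (perp <= search_radius)
        along, candidates = along[keep], points[keep]

        # Walk the slices from the camera outwards; for 3D coordinates the search
        # stops at the first slice that contains enough points (see Search Slice Thickness).
        for start in np.arange(0.0, search_dist_max, slice_thickness):
            in_slice = (along >= start) & (along < start + slice_thickness)
            if np.count_nonzero(in_slice) >= cluster_pts_min:
                return candidates[in_slice].mean(axis=0)   # cluster centroid as 3D coordinate
        return None                                         # listed in pixels_not_converted.txt

For 3D boxes, the search would instead continue through the following slices until Search Dist Max is reached, as described under the parameters below.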

3D Coordinates from Pixel Bounds

Slices are created along the vector line between the camera position and the pixel point. Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D coordinate.

3D Boxes from Pixel Coordinates

If the objects are located on the ground, use the Ground Margin parameter.

3D Boxes from Pixel Bounds

If the objects are located on the ground, use the Ground Margin parameter.

Search Slice Thickness

The thickness of the slices created from the camera position towards the pixel point to search for points.

  • For creating 3D Coordinates: if a slice contains an object, the following slices are not searched.
  • For creating 3D Boxes: the following slices are searched until Search Dist Max is reached.

Search Dist Max

The maximum distance from the camera to create slices and detect points.

Search Radius and Cluster Distance

The radius around the vector between the camera position and the pixel point to search for cluster points within a slice.

This value also serves as the cluster distance, which is the minimum cluster gap size.
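As a small illustration of this gap rule (a sketch, not the extension's code): points found within a slice, projected onto one axis and sorted, are split into separate clusters wherever two consecutive points are further apart than this distance.

    def split_into_clusters(sorted_values, cluster_distance):
        # Split a sorted 1D sequence into clusters wherever the gap exceeds cluster_distance.
        if not sorted_values:
            return []
        clusters, current = [], [sorted_values[0]]
        for prev, cur in zip(sorted_values, sorted_values[1:]):
            if cur - prev > cluster_distance:   # gap larger than the minimum cluster gap size
                clusters.append(current)
                current = []
            current.append(cur)
        clusters.append(current)
        return clusters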

Cluster Pts Min

The minimum number of points required to form a cluster. For less dense pointclouds, decrease this parameter.

Search Radius

The search radius around the direction from the camera to the pixel position to search for the 3D object.

Ground Margin

The ground margin is used to detect that a 3D box has reached the ground, so that the clustering can be ended. Use it when the object to identify is located on the ground, for example bins or hydrants. If the ground points are not removed, the created object will become bigger than it should be.

Use it to create boxes for objects on the ground.

Voxel Size

The box is divided into voxels of this size.

Voxel Pts Min

Nearby voxels that contain at least this minimum number of points are clustered into an object.
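As an illustrative sketch of how these two parameters and the Ground Margin could work together (a simplified stand-in, not the extension's actual implementation; it keeps all occupied voxels as one object instead of separating nearby clusters):

    import numpy as np

    def voxel_box(points, voxel_size=0.2, voxel_pts_min=5,
                  ground_z=None, ground_margin=0.1):
        # points: (N, 3) points that fall inside the searched region.
        if ground_z is not None:
            # Ground Margin: drop points near the ground so the box ends there.
            points = points[points[:, 2] > ground_z + ground_margin]
        if len(points) == 0:
            return None

        # Assign every point to a voxel of size Voxel Size.
        voxel_idx = np.floor(points / voxel_size).astype(int)
        voxels, counts = np.unique(voxel_idx, axis=0, return_counts=True)

        # Keep only voxels with at least Voxel Pts Min points.
        occupied = voxels[counts >= voxel_pts_min]
        if len(occupied) == 0:
            return None

        # Bounding box of the occupied voxels, in world coordinates.
        return occupied.min(axis=0) * voxel_size, (occupied.max(axis=0) + 1) * voxel_size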

Results

The result is an OVF file with the attributes from the image_pixel.ini file. The following attributes are added:

  • FD_ImageAnnotationSequence: Indicates the sequence of annotations on the same image.
  • FD_PixelDistanceFromAnnotation: The distance to the pixel coordinate or the center of the pixel bounds, expressed in pixels.
  • FD_InsideAnnotationBounds: The value can be zero/false or one/true. True means that the created object is displayed within the bounds of the original annotation; false means that it is displayed outside those bounds. For the pixel coordinate definition the value is always false/zero.

The pixels_not_converted.txt file contains the list of objects that could not be converted. The image objects and tags can be displayed on the imagery via the Preferences of Mapping.

 