Image Annotations to Objects

This page describes how to use the extension “Image Annotations to Objects”.

Main Toolbar > Extensions > Image Annotations to Objects

The “Image Annotations to Objects” extension converts image annotations defined by pixel coordinates or pixel bounds into 3D coordinates or 3D boxes using the point cloud resource.

Concepts

Image annotations for mapping run imagery can be defined by a single pixel coordinate (two pixel values, X and Y) or by pixel bounds (four pixel values: MinX, MaxX, MinY and MaxY). The image pixels are converted into 3D coordinates or 3D boxes using the mapping run point cloud and the original or optimized images.

The image_pixel.ini file describes the tags in the annotation file that are used for the conversion. Every annotation (and optionally its tag) can be overlaid on the opened image.

Benchmarks

Hardware:

  • 4-core i7 CPU
  • data on an external HDD, connected via USB 3

Input:

  • 28,655 planar images
  • 1,307 annotations
  • 4.66 GB point cloud

Processing Time: 15.56 minutes

Pixel Definitions

Select how the annotations are defined.

Coordinates

Every annotation is defined by a single pixel coordinate with two values: X and Y.

Bounds

Every annotation is defined by four pixel values that form the bounds of a rectangle: MinX, MaxX, MinY and MaxY.

Optionally, the center of the bounds can be used for the conversion to 3D. In that case the same algorithm as for pixel coordinates is used.
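
As an illustration only, the sketch below shows both definitions as plain data and how the center of the bounds reduces the bounds case to the coordinates case. This is a hypothetical Python example; the names used are not part of the extension.

  def center_of_bounds(min_x, max_x, min_y, max_y):
      """Pixel coordinate at the center of a bounds annotation."""
      return ((min_x + max_x) / 2.0, (min_y + max_y) / 2.0)

  # Coordinates definition: one pixel coordinate with two values, X and Y.
  coordinate_annotation = {"x": 1024, "y": 768}

  # Bounds definition: four pixel values, MinX, MaxX, MinY and MaxY.
  bounds_annotation = {"min_x": 980, "max_x": 1060, "min_y": 700, "max_y": 820}

  # Using the center of the bounds reduces it to the coordinates case.
  cx, cy = center_of_bounds(**bounds_annotation)  # (1020.0, 760.0)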

Source Annotations

All

Use all annotation files in the camera directory.

Current Non-Converted

Reprocess the annotations from the non-converted text file (pixels_not_converted.txt) of the current conversion with new parameters.

Previous Non-Converted

Reprocess the annotations from the non-converted text file of a previous conversion with new parameters.

Manage Annotations

Open the Annotations procedure to import annotations from a single CSV or XML file.

Annotate Images

Open the Image Annotation Editor extension to add or edit annotations.

Target

After selecting the target file location and CRS, choose whether to create 3D point objects or 3D boxes.

Algorithms

In all cases, slices are created along the vector from the camera position to the pixel point.
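
As an illustration only, the following pure-Python sketch groups point cloud points into such slices along the ray from the camera through the annotated pixel. The function and parameter names are hypothetical and do not reflect the extension's implementation; the direction is assumed to be a unit vector.

  import math

  def slice_points_along_ray(points, camera, direction, slice_thickness, dist_max):
      """Group points into consecutive slices along the ray that starts at the
      camera position and follows the (unit) direction towards the pixel point."""
      n_slices = int(math.ceil(dist_max / slice_thickness))
      slices = [[] for _ in range(n_slices)]
      for p in points:
          v = (p[0] - camera[0], p[1] - camera[1], p[2] - camera[2])
          # Distance of the point along the ray (projection onto the direction).
          along = v[0] * direction[0] + v[1] * direction[1] + v[2] * direction[2]
          if 0.0 <= along < dist_max:
              slices[int(along // slice_thickness)].append(p)
      return slices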

3D Coordinates from Pixel Coordinates

Within a radius around the vector, points are detected and clustered to create the 3D coordinate.
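
A minimal sketch of this idea, assuming the slices produced above: walk the slices away from the camera, keep the points that lie within the search radius of the ray, and return the centroid of the first sufficiently large group as the 3D coordinate. The names and the centroid choice are assumptions for illustration, not the extension's implementation.

  def point_from_pixel_coordinate(slices, camera, direction, search_radius, cluster_pts_min):
      """Return the 3D coordinate for a pixel coordinate annotation, or None."""
      for slice_points in slices:
          hits = []
          for p in slice_points:
              v = (p[0] - camera[0], p[1] - camera[1], p[2] - camera[2])
              along = sum(vi * di for vi, di in zip(v, direction))
              # Squared perpendicular distance of the point to the ray.
              perp_sq = sum(vi * vi for vi in v) - along * along
              if perp_sq <= search_radius * search_radius:
                  hits.append(p)
          if len(hits) >= cluster_pts_min:
              n = len(hits)
              return tuple(sum(c) / n for c in zip(*hits))  # centroid of the cluster
      return None  # annotation could not be converted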

3D Coordinates from Pixel Bounds

Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D coordinate.

3D Boxes from Pixel Coordinates

Within a radius around the vector, points are detected and clustered to create the 3D box.

If the objects are located on the ground, use the ground margin.
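
The sketch below shows how an axis-aligned box could be derived from such a cluster, and how a ground margin could keep ground points from enlarging the box. It is an assumption-based illustration, not the extension's algorithm; ground_z is a hypothetical ground height.

  def box_from_cluster(cluster_points, ground_z=None, ground_margin=0.0):
      """Axis-aligned 3D box around a cluster. If the object stands on the
      ground, points within ground_margin above ground_z are dropped so the
      ground does not make the box bigger."""
      if ground_z is not None:
          cluster_points = [p for p in cluster_points if p[2] > ground_z + ground_margin]
      if not cluster_points:
          return None
      xs, ys, zs = zip(*cluster_points)
      return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))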

3D Boxes from Pixel Bounds

Within the pixel coordinate bounds around the vector, voxels are created and clustered to create the 3D box.

If the objects are located on the ground, use the ground margin.

Advanced Parameters

Depending on the chosen source and target options, the relevant parameters are enabled.

Search Slice Thickness

The thickness of the slices created from the camera position to the pixel point to search for points.

  • For creating 3D Coordinates: if an object is detected in a slice, the following slices are not searched.
  • For creating 3D Boxes: if an object is detected in a slice, the following slices are still searched until the 3D cluster is finished (a sketch of this difference follows below).
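
The difference can be sketched as follows; detect() stands for any per-slice detection and is a placeholder, not part of the extension.

  def walk_slices(slices, detect, make_box=False):
      """detect(slice_points) returns the points of a slice that belong to the
      object (possibly none). For a 3D coordinate the walk stops at the first
      hit; for a 3D box it keeps extending the cluster until a slice adds nothing."""
      cluster = []
      for slice_points in slices:
          hits = detect(slice_points)
          if hits:
              cluster.extend(hits)
              if not make_box:
                  break  # 3D coordinate: first detection is enough
          elif cluster and make_box:
              break  # 3D box: the cluster is finished
      return cluster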

Search Dist Max

The maximum distance from the camera to create slices and detect points.

Search Radius and Cluster Distance

The radius around the vector between the camera position and the pixel point to search for cluster points within a slice. This value also serves as the cluster distance, which is the minimum cluster gap size.
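
As an illustration of the gap criterion, the sketch below splits points, sorted by their distance along the ray, into clusters wherever the gap between two consecutive points exceeds this value. It is a simplified, hypothetical example.

  def split_by_gap(distances_along_ray, cluster_distance):
      """Split sorted distances into clusters wherever the gap between two
      consecutive points exceeds cluster_distance (the minimum gap size)."""
      clusters, current = [], []
      for d in sorted(distances_along_ray):
          if current and d - current[-1] > cluster_distance:
              clusters.append(current)
              current = []
          current.append(d)
      if current:
          clusters.append(current)
      return clusters

  # split_by_gap([1.0, 1.2, 1.3, 4.5, 4.6], 0.5) -> [[1.0, 1.2, 1.3], [4.5, 4.6]]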

Cluster Pts Min

The minimum number of points to have a cluster. For less dense point clouds, decrease this parameter.

Search Radius

The search radius around the direction from the camera to the pixel position to search for the 3D object.

Ground margin

The ground margin used to detect that a 3D box has reached the ground, so that the clustering can be ended. Use it if the object to identify is located on the ground. If the ground is not removed, the object becomes bigger than it should be.

Voxel Size

The points in the slice are divided into voxels of this size.

Voxel Pts Min

Nearby voxels that contain at least this minimum number of points are clustered into an object.
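
A minimal sketch of this voxel-based clustering, assuming plain XYZ points: points are binned into voxels of Voxel Size, voxels with fewer than Voxel Pts Min points are dropped, and touching voxels are merged into one object. The function names are illustrative only.

  from collections import defaultdict, deque

  def voxelize(points, voxel_size):
      """Bin points into voxels of the given size, keyed by integer grid indices."""
      voxels = defaultdict(list)
      for x, y, z in points:
          key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
          voxels[key].append((x, y, z))
      return voxels

  def cluster_voxels(voxels, voxel_pts_min):
      """Drop voxels with fewer than voxel_pts_min points, then group touching
      voxels (sharing a face, edge or corner) into clusters of points."""
      keep = {k for k, pts in voxels.items() if len(pts) >= voxel_pts_min}
      clusters, seen = [], set()
      for start in keep:
          if start in seen:
              continue
          seen.add(start)
          queue, cluster = deque([start]), []
          while queue:
              key = queue.popleft()
              cluster.extend(voxels[key])
              for dx in (-1, 0, 1):
                  for dy in (-1, 0, 1):
                      for dz in (-1, 0, 1):
                          nb = (key[0] + dx, key[1] + dy, key[2] + dz)
                          if nb in keep and nb not in seen:
                              seen.add(nb)
                              queue.append(nb)
          clusters.append(cluster)
      return clusters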

Results

The result is an OVF file in which every object corresponds to one annotation and has the attributes from the image_pixel.ini file or CSV file. The following extra attributes are added:

  • FD_ImageAnnotationSequence: Used to indicate the sequence of annotations on the same image.
  • FD_PixelDistanceFromAnnotation: The distance to the pixel coordinate or the center of the pixel bounds, expressed in pixels.
  • FD_InsideAnnotationBounds: The value can be zero/false or one/true. True means the created object is displayed within the bounds of the original annotation; false means it is displayed outside those bounds. For the pixel coordinate definition the value is always false/zero.

The pixels_not_converted.txt file contains the list of annotations that could not be converted.

 