Instance segmentation

Segmentation models are used to detect and label instances of objects within training data. They work similarly to object detection models, but differ in that they are trained on, and produce, polygons rather than bounding boxes. Consequently, the labels these models produce are associated with polygon annotations from the project’s Ontology.

ℹ️ Note

Segmentation models assume there are potentially multiple objects in an image that need to be segmented and classified.

Framework and models

For instance segmentation, the Mask R-CNN model from the PyTorch framework is available.
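
The sketch below shows, in broad strokes, what Mask R-CNN inference looks like using torchvision's pre-trained implementation. It is only illustrative of the model's inputs and outputs: the file path and the torchvision version (0.13 or later for the weights enum) are assumptions, and the platform handles model loading and inference for you.

```python
import torch
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Mask R-CNN with weights pre-trained on COCO (torchvision >= 0.13 assumed).
weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights)
model.eval()

# "frame.jpg" is a placeholder path for any image you want to segment.
image = to_tensor(Image.open("frame.jpg").convert("RGB"))

with torch.no_grad():
    predictions = model([image])[0]

# Each prediction contains per-instance boxes, labels, confidence scores and soft masks.
print(predictions["boxes"].shape)   # (N, 4)
print(predictions["labels"].shape)  # (N,)
print(predictions["scores"].shape)  # (N,)
print(predictions["masks"].shape)   # (N, 1, H, W)
```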

Creating instance segmentation models

To learn how to create instance segmentation models, head over to our models page.

Working with instance segmentation models

Once a model has been attached to a project, it can be used to perform the functions it has been trained on. Inside the label editor, click the Automated labeling button highlighted in the image below.

Open the 'Detection and segmentation' section, as seen in the screenshot below.

  • Select the model you would like to run. You will be able to choose from a list of models previously attached to the project.

  • The 'Detection range' lets you determine the start and end frames you would like the model to run on.

  • Set the Confidence: a value from 0 to 1 specifying how confident the model must be for a prediction to be included in its output (see the sketch after this list). Read more about confidence values here.

  • Set the Polygon coarseness. The coarseness controls the spacing between adjacent vertices. A low polygon coarseness produces high-resolution polygons, but with high vertex counts. To avoid possible performance issues with large or complicated polygons, set the coarseness only as fine as necessary to accurately capture the desired segmentation (see the sketch after this list).
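
As a rough illustration of how the confidence and coarseness settings relate to raw model output, the sketch below filters predictions by score and simplifies each mask outline with OpenCV's approxPolyDP, where the epsilon parameter plays the role of the coarseness. The function name, default values, and the prediction format (taken from the Mask R-CNN sketch above) are assumptions for illustration, not the platform's internals.

```python
import cv2
import numpy as np

def masks_to_polygons(predictions, confidence=0.5, coarseness=2.0):
    """Convert per-instance masks (as returned by the Mask R-CNN sketch above)
    to polygons, keeping only predictions above the confidence threshold."""
    polygons = []
    for score, mask in zip(predictions["scores"], predictions["masks"]):
        if float(score) < confidence:
            continue  # below the confidence threshold, so excluded from the output

        # Binarise the soft (1, H, W) mask into an (H, W) uint8 image.
        binary = (np.asarray(mask[0]) > 0.5).astype(np.uint8)

        # Trace the outline of the mask.
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            # A larger epsilon (the "coarseness") drops more vertices, giving a
            # simpler polygon; a smaller value keeps a higher-resolution outline.
            simplified = cv2.approxPolyDP(contour, epsilon=coarseness, closed=True)
            polygons.append(simplified.reshape(-1, 2))
    return polygons
```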

Advanced settings

  • Set the Intersection over union threshold. Any boxes or polygons that overlap more than this threshold are treated as duplicates, and all but one are deleted (see the first sketch after this list).

  • Choose between GPU or CPU processing units. CPUs are designed to handle a wide range of tasks quickly, but are limited in how many tasks they can run at the same time. GPUs are designed to perform many operations in parallel, which makes them much faster at processing high-resolution images and video.

  • The Tracking enabled toggle determines whether objects across frames are treated as part of the same ‘instance’ or not. In other words, whether the model should attempt to track individual instances through frames, or create a separate object for each frame (see the second sketch after this list).
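
The following is a minimal sketch of what an IoU-based overlap filter can look like, assuming boolean masks and per-prediction confidence scores. The function names and defaults are illustrative and not necessarily how the platform applies the threshold.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(intersection) / float(union) if union else 0.0

def suppress_overlaps(masks, scores, iou_threshold=0.5):
    """Keep the highest-confidence prediction from any group of predictions
    that overlap by more than `iou_threshold`; return the kept indices."""
    order = np.argsort(scores)[::-1]  # highest confidence first
    kept = []
    for i in order:
        if all(mask_iou(masks[i], masks[j]) <= iou_threshold for j in kept):
            kept.append(i)
    return kept
```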
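
Similarly, one common way to link instances across frames is greedy IoU matching: each mask in the current frame inherits the id of the best-overlapping mask in the previous frame, or receives a new id otherwise. The sketch below (reusing mask_iou from the previous example) illustrates that idea; it is an assumption for illustration, not necessarily the tracking method the model uses.

```python
def link_instances(prev_masks, prev_ids, curr_masks, iou_threshold=0.3, next_id=0):
    """Assign each mask in the current frame the id of the best-overlapping
    mask in the previous frame, or a fresh id if nothing overlaps enough.
    `mask_iou` is defined in the previous sketch."""
    curr_ids = []
    for mask in curr_masks:
        best_iou, best_id = 0.0, None
        for prev_mask, prev_id in zip(prev_masks, prev_ids):
            iou = mask_iou(mask, prev_mask)
            if iou > best_iou:
                best_iou, best_id = iou, prev_id
        if best_iou >= iou_threshold:
            curr_ids.append(best_id)   # same instance continues into this frame
        else:
            curr_ids.append(next_id)   # no sufficient overlap: start a new instance
            next_id += 1
    return curr_ids, next_id
```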