LLM automated labeling (BETA)

ℹ️

Note

LLM automated labeling is currently in Beta and only available in Encord Labs. Contact [email protected] for more information and access.

LLM automated labeling is powered by a Vision-Language Model (VLM) and offers an efficient way to classify your data. It simplifies labeling by generating prompts for the model from your predefined classifications.
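To make the mechanism concrete, the sketch below shows one plausible way a prompt could be assembled from a classification question and its options. This is purely illustrative: `build_vlm_prompt` is a hypothetical helper, and Encord's actual prompt template is not documented here.

```python
# Illustrative only: Encord's real prompt template is internal. This sketch just
# shows how predefined classification options could be turned into a VLM prompt.
def build_vlm_prompt(question: str, options: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {option}" for i, option in enumerate(options))
    return (
        "Look at the image and answer the question below.\n"
        f"Question: {question}\n"
        f"Choose exactly one of the following options:\n{numbered}"
    )

print(build_vlm_prompt("What is the weather?", ["Sunny", "Cloudy", "Rainy"]))
```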

ℹ️

Note

LLM automated labeling is currently unavailable for DICOM files.

Before using the LLM automated labeling feature, ensure that the classifications in your ontology are precise and unambiguous. Clear definitions in your ontology improve the accuracy of the underlying model's (LLaVA's) predictions.

👍

Tip

Regularly review and refine your classification ontology for better prediction accuracy.
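As an example of what "precise and unambiguous" means in practice, compare the two classification definitions below. These dictionaries are illustrative stand-ins, not the actual Encord ontology schema: the point is that mutually exclusive, visually verifiable options give the model a clear answer space.

```python
# Illustrative structures, not the actual Encord ontology schema.
vague_classification = {
    "question": "Scene?",                 # ambiguous: scene of what?
    "options": ["Good", "Bad", "Other"],  # subjective, overlapping options
}

precise_classification = {
    "question": "What is the road surface condition?",
    "options": ["Dry", "Wet", "Snow-covered", "Icy"],  # mutually exclusive, visual
}
```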

To start using LLM automated labeling, follow these steps:

  1. Click the Automated Labeling button in the Label Editor.
  2. Select LLM Prediction from the available options.
  3. When labeling an image group, image sequence, or video, input the start and end frames within which classifications should be predicted. For images, leave both values as 0.
  4. Click Predict to initiate the automated labeling process.

Upon completion, the VLM provides classifications for the image or specified frames.
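If you want to inspect the resulting classifications programmatically, a minimal sketch using the Encord Python SDK follows. The project hash and SSH key path are placeholders, and the method names reflect the SDK's `LabelRowV2` API to the best of my knowledge; treat them as assumptions and verify against the current SDK reference.

```python
# A minimal sketch: placeholders and method names are assumptions to verify
# against the Encord SDK reference for your installed version.
from pathlib import Path

from encord import EncordUserClient

# Authenticate with the SSH private key registered on your Encord account.
user_client = EncordUserClient.create_with_ssh_private_key(
    Path("~/.ssh/encord_key").expanduser().read_text()  # placeholder path
)
project = user_client.get_project("<project-hash>")  # placeholder hash

# Fetch a label row and load its labels.
label_row = project.list_label_rows_v2()[0]
label_row.initialise_labels()

# Print each predicted classification and the frames it covers.
for instance in label_row.get_classification_instances():
    frames = [annotation.frame for annotation in instance.get_annotations()]
    print(instance.ontology_item.attributes[0].name, "on frames", frames)
```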