Import Data from OTC

Using cloud storage data in Encord is a multi-step process:

  1. Set up your cloud storage so Encord can access your data

  2. Create a cloud storage integration on Encord to link to your cloud storage

  3. Create a Dataset

  4. Create a JSON or CSV file to import your data

  5. Perform the import using the JSON or CSV file

Step 1: Setup OTC Integration

Before you can do anything with the Encord platform and cloud storage, you need to configure your cloud storage to work with Encord. Once the integration between Encord and your cloud storage is complete, you can then use your data in Encord.

In order to integrate with Open Telecom Cloud, you need to:

  1. Create the account that accesses data in the Object Storage Service.
  2. Give the account read access to the desired buckets by:
    1. Creating a Custom Bucket Policy.
    2. (Optional) If you have Cross-origin resource sharing (CORS) configured on your buckets, make sure that *.encord.com is given read access.
  3. Create the integration by giving Encord access to that account's credentials.

Step 2: Create Encord Integration

On the Encord platform, enter the Access key ID and Secret access key, which are located in the access key file generated when the user was created. (If the access key has been misplaced, a new one can be created from the IAM User menu.)

Optionally, check the box to enable Strict client-only access if you would like Encord to sign URLs but refrain from downloading any media files onto Encord servers. With this setting enabled, server-side media features are not available. Read more about this feature here.

Finally, click the Create button at the bottom of the pop-up. The integration will now appear in the list of integrations in the 'Integrations' tab.

Step 3: Create Dataset using SDK

Datasets are the space where the data itself lives. Datasets can be reused across multiple projects and contain no labels or annotations themselves.

Each Dataset is identified by a unique ID, the <dataset_hash>, which can be found in the Dataset's Settings or in the URL shown when viewing the Dataset in the Encord platform.

Use the following code to create a Dataset.

  • Replace the <private_key_path> variable with the file path to your Encord SSH private key.

  • Replace the <Dataset_Title> variable with a meaningful name for the Dataset.

In the following example, the variables are replaced with:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • <Dataset_Title>: Training Data Dataset 1


# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="<private_key_path>")

# Create a dataset by specifying a title as well as a storage location
dataset = user_client.create_dataset(
    "<Dataset_Title>", StorageLocation.OTC
)

# Prints the dataset
print(dataset)

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt")

# Create a dataset by specifying a title as well as a storage location
dataset = user_client.create_dataset(
    "Training Data Dataset 1", StorageLocation.OTC
)

# Prints the dataset
print(dataset)

Step 4: Create JSON or CSV for import

All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way: using a JSON or CSV file. The file includes links to all of the images, image groups, videos, and DICOM files in your cloud storage.

ℹ️

Note

For a list of supported file formats for each data type, go here.

❗️

CRITICAL INFORMATION

Encord supports file names up to 300 characters in length for any file or video you upload.
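Given the 300-character limit above, it can be worth screening your object keys before building the import file. A minimal sketch, assuming the helper name `too_long_names` and the URL list are hypothetical; it checks only the final path component of each URL:

```python
# Screen file names against Encord's 300-character limit before import.
MAX_NAME_LEN = 300

def too_long_names(urls):
    """Return URLs whose final path component exceeds the limit."""
    return [u for u in urls if len(u.rsplit("/", 1)[-1]) > MAX_NAME_LEN]

urls = [
    "https://encord-bucket.obs.eu-de.otc.t-systems.com/video-001.mp4",
    "https://encord-bucket.obs.eu-de.otc.t-systems.com/" + "x" * 301 + ".mp4",
]
print(too_long_names(urls))  # flags only the oversized name
```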

Create JSON file for import

For detailed information about the JSON file format used for import go here.

The information provided about each of the following data types is designed to get you up and running as quickly as possible without going too deeply into the why or how. Look at the template for each data type, then the examples, and adjust the examples to suit your needs.

ℹ️

Note

If skip_duplicate_urls is set to true, all object URLs that exactly match existing images/videos in the dataset are skipped.

Videos

JSON for videos provides the proper JSON format to import videos into Encord.

Example 1 for videos imports videos into Encord.

Example 2 for videos imports videos with an Encord title and custom metadata for each video. Custom metadata only appears in the Encord UI in Active, as an option to filter your data in an Active Project.

{
  "videos": [
    {
      "objectUrl": "file/path/to/videos/video-001.file-extension"
    },
    {
      "objectUrl": "file/path/to/videos/video-002.file-extension"
    },
    {
      "objectUrl": "file/path/to/videos/video-003.file-extension",
      "title": "video-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    },
    {
      "objectUrl": "file/path/to/videos/video-004.file-extension",
      "title": "video-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ],
  "skip_duplicate_urls": true
}
{
  "videos": [
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/BerghouseLeopardJog.mp4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blue_bus_video.mp4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/bluemlisalphutte_flyover.mp4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/boat_lake_normalized.mp4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/boat_lake.MP4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/boat_lake.MP4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/snow_sled.MOV"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cyclists.MP4"
    }
  ],
  "skip_duplicate_urls": true
}
{
    "videos": [
      {
        "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberries-001.mp4",
        "title": "Blueberries video 1",
        "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
      },
      {
        "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberries-002.mp4",
        "title": "Blueberries video 2",
        "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
      },
      {
        "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberries-003.mp4",
        "title": "Blueberries video 3",
        "clientMetadata": {"Location": "NY", "Harvested": "Mid July"}
      },
      {
        "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherries-001.MOV",
        "title": "Cherries video 1",
        "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
      },
      {
        "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherries-002.MP4",
        "title": "Cherries video 2",
        "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
      }
    ],
  "skip_duplicate_urls": true
  }
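Rather than writing the JSON by hand, you can generate it from a list of object URLs. A sketch; `build_videos_json` is a hypothetical helper and the bucket URLs stand in for your own:

```python
import json

def build_videos_json(urls, skip_duplicates=True):
    """Build the Encord video-import structure from a list of object URLs."""
    return {
        "videos": [{"objectUrl": u} for u in urls],
        "skip_duplicate_urls": skip_duplicates,
    }

urls = [
    "https://encord-bucket.obs.eu-de.otc.t-systems.com/video-001.mp4",
    "https://encord-bucket.obs.eu-de.otc.t-systems.com/video-002.mp4",
]
payload = build_videos_json(urls)
print(json.dumps(payload, indent=2))  # write this to a .json file for import
```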

Single images

For detailed information about the JSON file format used for import go here.

The JSON structure for single images parallels that of videos.

JSON for images provides the proper JSON format to import images into Encord.

Example 1 for images imports the images in the JSON into Encord.

Example 2 for images imports images with an Encord title and custom metadata for each image. Custom metadata only appears in the Encord UI in Active, as an option to filter your data in an Active Project.

{
  "images": [
    {
      "objectUrl": "file/path/to/images/file-name-01.file-extension"
    },
    {
      "objectUrl": "file/path/to/images/file-name-02.file-extension"
    },
    {
      "objectUrl": "file/path/to/images/file-name-03.file-extension",
      "title": "image-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    },
    {
      "objectUrl": "file/path/to/images/file-name-04.file-extension",
      "title": "image-title.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}
{
  "images": [
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/0001.jpg"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/0002.jpg"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/0003.jpg"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/DALL%C2%B7E+2022-12-08+22.16.52+-+steampunk+combustion+engine+that+is+fueled+by+data+and+produces+computer+vision+algorithms.png"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/large_images/pexels-ivo-rainha-00057.png"
    }
  ]
}
{
  "images": [
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0001.jpg",
      "title": "Blueberry 0001",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0002.jpg",
      "title": "Blueberry 0002",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0001.jpg",
      "title": "Blueberry 0003",
      "clientMetadata": {"Location": "NY", "Harvested": "Late July"}
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0001.jpg",
      "title": "Cherry 0001",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0002.jpg",
      "title": "Cherry 0002",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0003.jpg",
      "title": "Cherry 0003",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    }
  ]
}

Image groups

For detailed information about the JSON file format used for import go here.

  • Image groups are collections of images that are processed as one annotation task.
  • Images within image groups remain unaltered, meaning that images of different sizes and resolutions can form an image group without the loss of data.
  • Image groups do NOT require 'write' permissions to your cloud storage.
  • Custom metadata is defined per image group, not per image.
  • If skip_duplicate_urls is set to true, all URLs exactly matching existing image groups in the dataset are skipped.

ℹ️

Note

The position of each image within the sequence needs to be specified in the key (objectUrl_{position_number}).

JSON for image groups provides the proper JSON format to import image groups into Encord.

Example 1 for image groups imports the image groups in the JSON into Encord.

Example 2 for image groups imports image groups with an Encord title and custom metadata for each image group. Custom metadata only appears in the Encord UI in Active, as an option to filter your data in an Active Project.

{
  "image_groups": [
    {
      "title": "<title 1>",
      "createVideo": false,
      "objectUrl_0": "file/path/to/images/file-name-01.file-extension",
      "objectUrl_1": "file/path/to/images/file-name-02.file-extension",
      "objectUrl_2": "file/path/to/images/file-name-03.file-extension"
    },
    {
      "title": "<title 2>",
      "createVideo": false,
      "objectUrl_0": "file/path/to/images/file-name-01.file-extension",
      "objectUrl_1": "file/path/to/images/file-name-02.file-extension",
      "objectUrl_2": "file/path/to/images/file-name-03.file-extension",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}

{
  "image_groups": [
    {
      "title": "Image group 01",
      "createVideo": false,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/0001.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/0002.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "title": "Image group 02",
      "createVideo": false,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-0001.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-0002.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-0003.jpg"
    }
  ]
}


{
  "image_groups": [
    {
      "title": "Blueberries image group 1",
      "createVideo": false,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0001.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0002.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-0003.jpg",
      "clientMetadata": {"Location": "NY", "Harvested": "End July"}
    },
    {
      "title": "Cherries image group 1",
      "createVideo": false,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0001.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0002.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-0003.jpg",
      "clientMetadata": {"Location": "BC", "Harvested": "Mid Aug"}
    }
  ]
}
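Because each image's position must be encoded in the key name (objectUrl_0, objectUrl_1, and so on), the entries are easy to generate programmatically. A sketch; `image_group_entry` is a hypothetical helper and the file URLs are placeholders:

```python
def image_group_entry(title, urls, create_video=False, metadata=None):
    """Build one image_groups entry with position-numbered objectUrl keys."""
    entry = {"title": title, "createVideo": create_video}
    for i, url in enumerate(urls):
        entry[f"objectUrl_{i}"] = url  # position encoded in the key name
    if metadata is not None:
        entry["clientMetadata"] = metadata
    return entry

entry = image_group_entry(
    "Image group 01",
    ["https://encord-bucket.obs.eu-de.otc.t-systems.com/0001.jpg",
     "https://encord-bucket.obs.eu-de.otc.t-systems.com/0002.jpg"],
)
```

Sort your file list before passing it in, since the key order determines the order of images in the group.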

Image sequences

For detailed information about the JSON file format used for import go here.

  • Image sequences are collections of images that are processed as one annotation task and represented as a video.
  • Images within image sequences may be altered as images of varying sizes and resolutions are made to match that of the first image in the sequence.
  • Creating Image sequences from cloud storage requires 'write' permissions, as new files have to be created in order to be read as a video.
  • Each object in the image_groups array with the createVideo flag set to true represents a single image sequence.
  • Custom client metadata is defined per image sequence, not per image.
  • If skip_duplicate_urls is set to true, all URLs exactly matching existing image sequences in the Dataset are skipped.

👍

Tip

The only difference between adding image groups and image sequences using a JSON file is that image sequences require the createVideo flag to be set to true. Both use the key image_groups.

ℹ️

Note

The position of each image within the sequence needs to be specified in the key (objectUrl_{position_number}).

❗️

CRITICAL INFORMATION

Encord supports up to 32,767 entries (21:50 minutes) for a single image sequence. We recommend up to 10,000 to 15,000 entries for a single image sequence for best performance. If you need a longer sequence, we recommend using video instead of an image sequence.
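If your source material exceeds the recommended length, you can split it into several sequences before building the JSON. A sketch; `split_into_sequences` is a hypothetical helper, and the 10,000-frame chunk size follows the recommendation above:

```python
CHUNK = 10_000  # recommended upper bound per image sequence

def split_into_sequences(title, urls, chunk=CHUNK):
    """Split a long frame list into several createVideo image_groups entries."""
    sequences = []
    for n, start in enumerate(range(0, len(urls), chunk), start=1):
        entry = {"title": f"{title} part {n}", "createVideo": True}
        for i, url in enumerate(urls[start:start + chunk]):
            entry[f"objectUrl_{i}"] = url
        sequences.append(entry)
    return sequences

parts = split_into_sequences("Long survey", [f"https://bucket/frame-{i}.jpg" for i in range(25_000)])
print(len(parts))  # 3 sequences: 10,000 + 10,000 + 5,000 frames
```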

{
  "image_groups": [
    {
      "title": "<title 1>",
      "createVideo": true,
      "objectUrl_0": "<object url>"
    },
    {
      "title": "<title 2>",
      "createVideo": true,
      "objectUrl_0": "<object url>",
      "objectUrl_1": "<object url>",
      "objectUrl_2": "<object url>",
      "clientMetadata": {"optional": "metadata"}
    }
  ]
}

{
  "image_groups": [
    {
      "title": "Image sequence 001",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/01.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/02.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/DALL%C2%B7E+2022-09-08+18.53.25+-+firefighter+extinguishing+flames+around+computer+in+software+office+overgrown+by+foliage.png"
    },
    {
      "title": "Image sequence 002",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-01.jpg",
      "objectUrl_1": "<https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-02.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/thing-03.jpg"
    }
  ]
}


{
  "image_groups": [
    {
      "title": "Blueberries sequence 01",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-highbush-01.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-highbush-02.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/blueberry-highbush-03.jpg",
      "clientMetadata": {"Location": "ON", "Harvested": "Mid July"}
    },
    {
      "title": "Cherries sequence 01",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-bing-01.jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-bing-02.jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/cherry-bing-03.jpg",
      "clientMetadata": {"Location": "ON", "Harvested": "End July"}
    }
  ]
}

DICOM

For detailed information about the JSON file format used for import go here.

ℹ️

Note

Encord requires the following tags on DICOM images to import the images:

  • Rows
  • Columns
  • StudyInstanceUID
  • SeriesInstanceUID
  • SOPInstanceUID
  • PatientID

The following DICOM tags are required to render DICOM images in 3D:

  • ImagePositionPatient
  • ImageOrientationPatient
  • SliceThickness
  • PixelSpacing

  • The dicom_series array can contain one or more DICOM series.
  • Each series requires a title and at least one object URL, as shown in the example below.
  • If skip_duplicate_urls is set to true, all object URLs exactly matching existing DICOM files in the Dataset are skipped.

ℹ️

Note

Custom metadata is distinct from patient metadata, which is included in the .dcm file and does not have to be specified during the upload to Encord.

The following is an example JSON file for uploading three DICOM series belonging to a study. Each title and its object URLs correspond to an individual DICOM series.

  • The first series contains only a single object URL, as it is composed of a single file.
  • The second series contains 3 object URLs, as it is composed of three separate files.
  • The third series contains 2 object URLs, as it is composed of two separate files.
{
  "dicom_series": [
    {
      "title": "Series-1",
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series1-file.dcm"
    },
    {
      "title": "Series-2",
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series2-file1.dcm",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series2-file2.dcm",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series2-file3.dcm"
    },
    {
      "title": "Series-3",
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series3-file1.dcm",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/study1-series3-file2.dcm"
    }
  ]
}
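When a study's files encode the series in their names (as in the example above), you can group them into dicom_series entries programmatically. A sketch; `group_dicom_series` is a hypothetical helper that assumes a `study{n}-series{n}-…` naming convention, so adapt the grouping key to your own layout (or read SeriesInstanceUID from the files themselves):

```python
import re
from collections import defaultdict

def group_dicom_series(urls):
    """Group .dcm object URLs into dicom_series entries by series name."""
    groups = defaultdict(list)
    for url in urls:
        m = re.search(r"(series\d+)", url)  # hypothetical naming scheme
        groups[m.group(1) if m else "ungrouped"].append(url)
    series = []
    for name, files in sorted(groups.items()):
        entry = {"title": name.capitalize()}
        for i, url in enumerate(files):
            entry[f"objectUrl_{i}"] = url
        series.append(entry)
    return {"dicom_series": series}

payload = group_dicom_series([
    "https://bucket/study1-series1-file.dcm",
    "https://bucket/study1-series2-file1.dcm",
    "https://bucket/study1-series2-file2.dcm",
])
```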


Multiple file types

You can upload multiple file types using a single JSON file. The example below shows 1 image, 2 videos, 2 image sequences, and 1 image group.


{
  "images": [
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/Image1.png"
    }
  ],
  "videos": [
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/Cooking.mp4"
    },
    {
      "objectUrl": "https://encord-bucket.obs.eu-de.otc.t-systems.com/Oranges.mp4"
    }
  ],
  "image_groups": [
    {
      "title": "apple-samsung-light",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/1-Samsung-S4-Light+Environment/1+(32).jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/1-Samsung-S4-Light+Environment/1+(33).jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/1-Samsung-S4-Light+Environment/1+(34).jpg",
      "objectUrl_3": "https://encord-bucket.obs.eu-de.otc.t-systems.com/1-Samsung-S4-Light+Environment/1+(35).jpg"
    },
    {
      "title": "apple-samsung-dark",
      "createVideo": true,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/2-samsung-S4-Dark+Environment/2+(32).jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/2-samsung-S4-Dark+Environment/2+(33).jpg",
      "objectUrl_2": "https://encord-bucket.obs.eu-de.otc.t-systems.com/2-samsung-S4-Dark+Environment/2+(34).jpg",
      "objectUrl_3": "https://encord-bucket.obs.eu-de.otc.t-systems.com/2-samsung-S4-Dark+Environment/2+(35).jpg"
    },
    {
      "title": "apple-ios-light",
      "createVideo": false,
      "objectUrl_0": "https://encord-bucket.obs.eu-de.otc.t-systems.com/3-IOS-4-Light+Environment/3+(32).jpg",
      "objectUrl_1": "https://encord-bucket.obs.eu-de.otc.t-systems.com/3-IOS-4-Light+Environment/3+(33).jpg"
    }
  ]
}


CSV format

In the CSV file format, the column headers specify which type of data is being uploaded. You can add a single data type at a time, or combine multiple data types in a single CSV file.

Details for each data format are given in the sections below.

🚧

Caution

  • Object URLs can't contain whitespace.
  • For backwards compatibility reasons, a single column CSV is supported. A file with the single ObjectUrl column is interpreted as a request for video upload. If your objects are of a different type (for example, images), this error displays: "Expected a video, got a file of type XXX".

Videos

A CSV file containing videos MUST contain two columns with the following mandatory headings: 'ObjectURL' and 'Video title'. All headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the video resource.

  • The 'Video title' column containing the video_title. If left blank, the original file name is used.

In the example below files 1, 2 and 4 are assigned the names in the title column, while file 3 keeps its original file name.

ObjectUrl                  | Video title
-------------------------- | -----------
https://storage/frame1.mp4 | Video 1
https://storage/frame2.mp4 | Video 2
https://storage/frame3.mp4 |
https://storage/frame4.mp4 | Video 3

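A CSV file like the one above can be produced with Python's standard csv module. A sketch; the file name videos_import.csv is an arbitrary choice, and an empty title cell keeps the original file name:

```python
import csv

rows = [
    ("https://storage/frame1.mp4", "Video 1"),
    ("https://storage/frame2.mp4", "Video 2"),
    ("https://storage/frame3.mp4", ""),  # blank title keeps the original file name
    ("https://storage/frame4.mp4", "Video 3"),
]

# newline="" is required so csv controls the line endings itself
with open("videos_import.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ObjectUrl", "Video title"])  # headings are case-insensitive
    writer.writerows(rows)
```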
Single images

A CSV file containing single images MUST contain two columns with the following mandatory headings:
'ObjectURL' and 'Image title'. All headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the image resource.

  • The 'Image title' column containing the image_title. If left blank, the original file name is used.

In the example below files 1, 2 and 4 are assigned the names in the title column, while file 3 keeps its original file name.

ObjectUrl                  | Image title
-------------------------- | -----------
https://storage/frame1.jpg | Image 1
https://storage/frame2.jpg | Image 2
https://storage/frame3.jpg |
https://storage/frame4.jpg | Image 3

Image groups

A CSV file containing image groups MUST contain three columns with the following mandatory headings:
'ObjectURL', 'Image group title', and 'Create video'. All three headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Image group title' column containing the image_group_title. This field is mandatory, as it determines which image group a file will be assigned to.

  • The 'Create video' column. For image groups this must be set to 'false'.

In the example below the first two URLs are grouped together into 'Group 1', while the following two files are grouped together into 'Group 2'.

ObjectUrl                  | Image group title | Create video
-------------------------- | ----------------- | ------------
https://storage/frame1.jpg | Group 1           | false
https://storage/frame2.jpg | Group 1           | false
https://storage/frame3.jpg | Group 2           | false
https://storage/frame4.jpg | Group 2           | false

ℹ️

Note

Image groups do not require 'write' permissions.

Image sequences

A CSV file containing image sequences MUST contain three columns with the following mandatory headings: 'ObjectURL', 'Image group title', and 'Create video'. All three headings are case-insensitive.

  • The 'ObjectURL' column containing the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Image group title' column containing the image_group_title. This field is mandatory, as it determines which image sequence a file will be assigned to. The dimensions of the image sequence are determined by the first file in the sequence.

  • The 'Create video' column. This can be left blank, as the default value is 'true'.

In the example below the first two URLs are grouped together into 'Sequence 1', while the second two files are grouped together into 'Sequence 2'.

ObjectUrl                  | Image group title | Create video
-------------------------- | ----------------- | ------------
https://storage/frame1.jpg | Sequence 1        | true
https://storage/frame2.jpg | Sequence 1        | true
https://storage/frame3.jpg | Sequence 2        | true
https://storage/frame4.jpg | Sequence 2        | true

👍

Tip

Image groups and image sequences are distinguished only by the value of the 'Create video' column: 'false' for image groups and 'true' for image sequences.

ℹ️

Note

Image sequences require 'write' permissions against your storage bucket to save the compressed video.

DICOM

A CSV file containing DICOM files MUST contain two columns with the following headings: 'ObjectURL' and 'Series title'. Both headings are case-insensitive.

ℹ️

Note

Encord requires the following tags on DICOM images to import the images:

  • Rows
  • Columns
  • StudyInstanceUID
  • SeriesInstanceUID
  • SOPInstanceUID
  • PatientID

The following DICOM tags are required to render DICOM images in 3D:

  • ImagePositionPatient
  • ImageOrientationPatient
  • SliceThickness
  • PixelSpacing

  • The 'ObjectURL' column contains the objectUrl. This field is mandatory for each file, as it specifies the full URL of the resource.

  • The 'Series title' column contains the dicom_title. When two files are given the same title, they are grouped into the same DICOM series. If left blank, the original file name is used.

In the example below, the first two files are grouped into 'dicom series 1', the next two files are grouped into 'dicom series 2', and the final file remains on its own as 'dicom series 3'.

ObjectUrl                  | Series title
-------------------------- | --------------
https://storage/frame1.dcm | dicom series 1
https://storage/frame2.dcm | dicom series 1
https://storage/frame3.dcm | dicom series 2
https://storage/frame4.dcm | dicom series 2
https://storage/frame5.dcm | dicom series 3

Multiple file types

You can upload multiple file types with a single CSV file by adding a new header row each time there is a change of file type. Three headings are required if image sequences are included.

🚧

Caution

Since the 'Create video' column defaults to "true", all files that aren't image sequences have to contain the value "false".

The example below shows a CSV file for the following:

  • Two image sequences composed of 2 files each.
  • One image group composed of 2 files.
  • One single image.
  • One video.
ObjectUrl                  | Image group title | Create video
-------------------------- | ----------------- | ------------
https://storage/frame1.jpg | Sequence 1        | true
https://storage/frame2.jpg | Sequence 1        | true
https://storage/frame3.jpg | Sequence 2        | true
https://storage/frame4.jpg | Sequence 2        | true
https://storage/frame5.jpg | Group 1           | false
https://storage/frame6.jpg | Group 1           | false

ObjectUrl                  | Image title | Create video
-------------------------- | ----------- | ------------
https://storage/frame1.jpg | Image 1     | false

ObjectUrl                  | Video title | Create video
-------------------------- | ----------- | ------------
https://storage/video.mp4  | Video 1     | false

Step 5: Import data using the SDK

The following script only initializes the upload job; the upload itself continues in the background on Encord's servers, so you can use your computer for other purposes while it runs. You have to manually query Encord servers to find out whether the upload job has completed.

The script has several possible outputs:

  • "Upload is still in progress, try again later!": The upload has not finished. Run this script again later to check if the upload has finished.

  • "Upload completed": The upload completed. If any files failed to upload, the URLs are listed.

  • "Upload failed": The entire upload failed, and not just individual files. Ensure your JSON file is formatted correctly.

In the following example, the variables are replaced with:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • path/to/json/file.json: /Users/chris/training_data_1.json


# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client. Replace <private_key_path> with the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="<private_key_path>")

# Specify the Dataset you want to upload data to by replacing <dataset_hash> with the Dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Select the integration to use by its title. Replace "OTC" if your integration has a different title
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("OTC")
integration = integrations[integration_idx].id

# Initiate cloud data upload. Replace path/to/json/file.json with the path to your JSON file
upload_job_id = dataset.add_private_data_to_dataset_start(
    integration, "path/to/json/file.json", ignore_errors=True
)

# timeout_seconds determines how long the code will wait after initiating upload until continuing and checking upload status
res = dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.data_unit_errors:
        print("The following URLs failed to upload:")
        for e in res.data_unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Instantiate user client. Replace <private_key_path> with the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt")

# Specify the dataset you want to upload data to by replacing <dataset_hash> with the dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Select the integration to use by its title. Replace "OTC" if your integration has a different title
integrations = user_client.get_cloud_integrations()
integration_idx = [i.title for i in integrations].index("OTC")
integration = integrations[integration_idx].id

# Initiate cloud data upload. Replace path/to/json/file.json with the path to your JSON file
upload_job_id = dataset.add_private_data_to_dataset_start(
    integration, "/Users/chris/training_data_1.json", ignore_errors=True
)

# timeout_seconds determines how long the code will wait after initiating upload until continuing and checking upload status
res = dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.data_unit_errors:
        print("The following URLs failed to upload:")
        for e in res.data_unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")

Example output:

add_private_data_to_dataset job started with upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
SDK process can be terminated, this will not affect successful job execution.
You can follow the progress in the web app via notifications.
add_private_data_to_dataset job completed with upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.
Execution result: DatasetDataLongPolling(status=<LongPollingStatus.DONE: 'DONE'>, data_hashes_with_titles=[DatasetDataInfo(data_hash='cd42333d-8014-46q7-837b-5bf68b9b5', title='funny_image.jpg')], errors=[], units_pending_count=0, units_done_count=1, units_error_count=0)
Upload completed without errors

Step 6: Check data upload

If Step 5 returns "Upload is still in progress, try again later!", run the following code to query the Encord server again. Ensure that you replace <upload_job_id> with the upload_job_id output by the previous code. In the example above, upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727.

The script has several possible outputs:

  • "Upload is still in progress, try again later!": The upload has not finished. Run this script again later to check if the upload has finished.

  • "Upload completed": The upload completed. If any files failed to upload, the URLs are listed.

  • "Upload failed": The entire upload failed, and not just individual files. Ensure your JSON file is formatted correctly.

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

upload_job_id = "<upload_job_id>"

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Specify the Dataset the upload targets by replacing <dataset_hash> with the Dataset hash
dataset = user_client.get_dataset("<dataset_hash>")

# Check upload status
res = dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=5)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.data_unit_errors:
        print("The following URLs failed to upload:")
        for e in res.data_unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")
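Instead of re-running the script by hand, you can poll until the job leaves the PENDING state. A sketch; `wait_for_upload` is a hypothetical helper built around the same SDK calls used above, and the 60-second interval is an arbitrary choice:

```python
import time

def wait_for_upload(fetch_result, is_pending, poll_seconds=60):
    """Call fetch_result() until is_pending(result) is False; return the result."""
    while True:
        res = fetch_result()
        if not is_pending(res):
            return res
        print(f"Upload still in progress, checking again in {poll_seconds}s...")
        time.sleep(poll_seconds)

# With the Encord SDK objects from the script above, you would call, for example:
# res = wait_for_upload(
#     lambda: dataset.add_private_data_to_dataset_get_result(upload_job_id, timeout_seconds=60),
#     lambda r: r.status == LongPollingStatus.PENDING,
# )
```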

Step 7: Verify all the data from your cloud storage is available in Encord

After the import finishes, verify that all of the data from your cloud storage is available in Encord.

The example replaces the following variables:

  • <private_key_path>: /Users/chris/encord-ssh-private-key.txt

  • <project_hash>: 7d4ead9c-4087-4832-a301-eb2545e7d43b

from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

project = user_client.get_project(
    project_hash="<project_hash>"
    )

for log_line in project.list_label_rows_v2():
    data_list = project.get_data(log_line.data_hash, get_signed_url=True)
    print(data_list)

from encord import EncordUserClient

user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="/Users/chris/encord-ssh-private-key.txt"
)

project = user_client.get_project(
    project_hash="7d4ead9c-4087-4832-a301-eb2545e7d43b"
    )

for log_line in project.list_label_rows_v2():
    data_list = project.get_data(log_line.data_hash, get_signed_url=True)
    print(data_list)

Step 8: Prepare your data for label/annotation import

Prepare Data for Labels

Step 9: Import labels/annotations

Import Labels/Annotations