Kili Docs

Customize interface

All interfaces (except custom interfaces) are divided into two parts:

  • the asset viewer on the left
  • the jobs viewer on the right

Jobs are implementations of machine learning tasks (such as classification or object detection). An interface can chain several jobs one after another, and each job can be either required or optional.

Interfaces can be customized with the interface builder or directly in the JSON settings.

Interface builder

The builder lets you customize the interface easily. It is accessible from the Settings tab.

An interface is composed of:

  • An asset type (image, text, video, audio)
  • A machine learning task (classification, object detection, named entity recognition, ...)
  • An input (radio button, checkbox, dropdown, text field)
  • Optionally, a tool (bounding box, polygon, line, ...)

An interface can have one job

Several jobs

You can add as many jobs as needed for your labeling task.

Or nested jobs

To nest a job, click the arrow at the class level. Classification tasks can be nested (single choice) up to four levels deep.
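As a sketch (the job and category names here are made up for illustration), nesting appears in the JSON settings as a parent category listing a child job in its children array, with the child job flagged as isChild:

```json
{
  "jobs": {
    "TOPIC_JOB": {
      "mlTask": "CLASSIFICATION",
      "content": {
        "categories": {
          "SCIENCE": {
            "name": "science",
            "children": ["SCIENCE_SUBTOPIC_JOB"]
          }
        },
        "input": "radio"
      },
      "required": 1,
      "isChild": false
    },
    "SCIENCE_SUBTOPIC_JOB": {
      "mlTask": "CLASSIFICATION",
      "content": {
        "categories": {
          "PHYSICS": { "name": "physics", "children": [] },
          "BIOLOGY": { "name": "biology", "children": [] }
        },
        "input": "radio"
      },
      "required": 0,
      "isChild": true
    }
  }
}
```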

Custom render width

Renderer width is a parameter in the JSON file. It must be between 0 and 1 and sets the proportion of the screen given to the annotation tools (the jobs); the rest of the screen is for the asset. The key is jobRendererWidth (here 0.2, i.e. 20%).
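For example, reserving 20% of the screen for the jobs panel looks like this at the top level of the settings (jobs abridged):

```json
{
  "jobRendererWidth": 0.2,
  "jobs": {}
}
```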

Metadata at asset level

The app lets you attach extra information, called metadata, to an asset. Metadata can be uploaded through the API and can be an image, a text, and/or a URL. The code below shows how to attach metadata to an asset with Kili Playground:

# `kili` is an authenticated Kili Playground client and `asset_id`
# is the ID of the asset to update
metadata = {
    'imageUrl': 'https://myriadrbm.com/wp-content/blogs.dir/20/files/2018/05/cell-3.png',
    'text': 'Normal cell',
    'url': 'https://www.google.com'
}
kili.update_properties_in_asset(
    asset_id=asset_id,
    json_metadata=metadata
)

You can find more information on the API here, or directly in the project's GitHub repository.

Metadata at category level

You can set up metadata at category level. It will be visible to every annotator, directly on the job interface, when they hover over the information icon to the right of the category:

Category metadata is configured in the interface. You can set it up for every category belonging to any existing job:

  • Go to "Settings"
  • Click on the "JSON" tab
  • Add a jsonMetadata key that can contain text, as in the example below:
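A minimal sketch of what this can look like (the category follows the JSON settings example further down; the exact shape of the jsonMetadata value is an assumption and the text is illustrative):

```json
{
  "jobs": {
    "CLASSIFICATION_JOB": {
      "mlTask": "CLASSIFICATION",
      "content": {
        "categories": {
          "ECONOMICS": {
            "name": "economics",
            "children": [],
            "jsonMetadata": {
              "text": "Papers about markets, trade or monetary policy"
            }
          }
        },
        "input": "radio"
      },
      "required": 1,
      "isChild": false
    }
  }
}
```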

Custom classes' colors

The color of each class is a parameter in the JSON file, given as a hexadecimal color code. To set a custom class color, change the following key for each category: color: "#941100". You can use an online color picker to upload an image and select the perfect color for your task.
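If you start from RGB values, converting them to the hex code expected by the color key is a one-liner (rgb_to_hex is a small helper written for illustration):

```python
# Convert an RGB triple to the hexadecimal color code used in the JSON settings.
def rgb_to_hex(r, g, b):
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(rgb_to_hex(148, 17, 0))  # -> "#941100", the red used for "economics" below
```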

JSON settings

The JSON settings fully describe an interface. For instance, an interface with a classification job followed by a named entity recognition job is composed as follows:

{
  "jobRendererWidth": 0.25,
  "jobs": {
    "CLASSIFICATION_JOB": {
      "mlTask": "CLASSIFICATION",
      "content": {
        "categories": {
          "ECONOMICS": {
            "name": "economics",
            "color": "#941100",
            "children": []
          },
          "MATHEMATICS": {
            "name": "mathematics",
            "children": []
          },
          "DATA_SCIENCE": {
            "name": "data-science",
            "children": []
          },
          "ANTHROPOLOGY": {
            "name": "anthropology",
            "children": []
          }
        },
        "input": "radio"
      },
      "required": 1,
      "isChild": false,
      "instruction": "What is the topic of this paper?"
    },
    "NAMED_ENTITIES_RECOGNITION_JOB": {
      "mlTask": "NAMED_ENTITIES_RECOGNITION",
      "content": {
        "categories": {
          "ACTIVE_LEARNING": {
            "name": "active learning",
            "color": "#73FDFF"
          },
          "DATA_TYPE": {
            "name": "data type",
            "color": "#9437FF"
          },
          "MACHINE_LEARNING": {
            "name": "machine learning",
            "color": "#FF9900"
          },
          "AUTHORS": {
            "name": "authors",
            "color": "#FF40FF"
          }
        },
        "input": "radio"
      },
      "required": 1,
      "tools": [null],
      "isChild": false,
      "instruction": "Identify terms specific to"
    }
  }
}

The JSON can be saved (to duplicate a project, for example) and can also be edited interactively.
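Because the settings are plain JSON, saving and reusing them is straightforward with the Python standard library (a sketch; the settings are abridged and interface_copy.json is a hypothetical file name):

```python
import json

# Settings copied from the interface's "JSON" tab (abridged)
settings = json.loads("""
{
  "jobRendererWidth": 0.25,
  "jobs": {
    "CLASSIFICATION_JOB": {"mlTask": "CLASSIFICATION", "required": 1}
  }
}
""")

# List the jobs defined in the interface
print(list(settings["jobs"].keys()))  # -> ['CLASSIFICATION_JOB']

# Save a copy to paste into another project's JSON settings
with open("interface_copy.json", "w") as f:
    json.dump(settings, f, indent=2)
```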
