
Get Job

extraction.jobs.get(job_id: str) -> ExtractJob
GET /api/v1/extraction/jobs/{job_id}
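For callers not using the SDK, the endpoint can also be hit directly over HTTP. The sketch below is an assumption-laden illustration, not part of the official client: it assumes the base URL `https://api.cloud.llamaindex.ai` and Bearer-token auth via the `LLAMA_CLOUD_API_KEY` environment variable, and the helper names (`job_url`, `get_job_raw`) are made up for this example.

```python
import json
import os
import urllib.request

BASE_URL = "https://api.cloud.llamaindex.ai"  # assumed base URL


def job_url(job_id: str) -> str:
    """Build the Get Job endpoint URL for a given job id."""
    return f"{BASE_URL}/api/v1/extraction/jobs/{job_id}"


def get_job_raw(job_id: str) -> dict:
    """Fetch the job as raw JSON; assumes Bearer-token auth."""
    req = urllib.request.Request(
        job_url(job_id),
        headers={"Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The SDK call shown further down is the supported path; this only clarifies what request is being made under the hood.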

Parameters
job_id: str
Returns
class ExtractJob:

Schema for an extraction job.

id: str

The id of the extraction job

format: uuid
extraction_agent: ExtractAgent

The agent that the job was run on.

id: str

The id of the extraction agent.

format: uuid

config: ExtractConfig

The configuration parameters for the extraction agent.

chunk_mode: Optional[Literal["PAGE", "SECTION"]]

The mode to use for chunking the document.

Accepts one of the following:
"PAGE"
"SECTION"
citation_bbox: Optional[bool] (Deprecated)

Whether to fetch citation bounding boxes for the extraction. Only available in PREMIUM mode. Deprecated: this is now synonymous with cite_sources.

cite_sources: Optional[bool]

Whether to cite sources for the extraction.

confidence_scores: Optional[bool]

Whether to fetch confidence scores for the extraction.

extract_model: Optional[Union[Literal["openai-gpt-4-1", "openai-gpt-4-1-mini", "openai-gpt-4-1-nano", 8 more], str]]

The extract model to use for data extraction. If not provided, uses the default for the extraction mode.

Accepts one of the following:
Literal["openai-gpt-4-1", "openai-gpt-4-1-mini", "openai-gpt-4-1-nano", 8 more]

Extract model options.

Accepts one of the following:
"openai-gpt-4-1"
"openai-gpt-4-1-mini"
"openai-gpt-4-1-nano"
"openai-gpt-5"
"openai-gpt-5-mini"
"gemini-2.0-flash"
"gemini-2.5-flash"
"gemini-2.5-flash-lite"
"gemini-2.5-pro"
"openai-gpt-4o"
"openai-gpt-4o-mini"
str
extraction_mode: Optional[Literal["FAST", "BALANCED", "PREMIUM", "MULTIMODAL"]]

The extraction mode specified (FAST, BALANCED, MULTIMODAL, PREMIUM).

Accepts one of the following:
"FAST"
"BALANCED"
"PREMIUM"
"MULTIMODAL"
extraction_target: Optional[Literal["PER_DOC", "PER_PAGE", "PER_TABLE_ROW"]]

The extraction target specified.

Accepts one of the following:
"PER_DOC"
"PER_PAGE"
"PER_TABLE_ROW"
high_resolution_mode: Optional[bool]

Whether to use high resolution mode for the extraction.

invalidate_cache: Optional[bool]

Whether to invalidate the cache for the extraction.

multimodal_fast_mode: Optional[bool]

DEPRECATED: Whether to use fast mode for multimodal extraction.

num_pages_context: Optional[int]

Number of pages to pass as context on long document extraction.

minimum: 1
page_range: Optional[str]

Comma-separated list of page numbers or ranges to extract from (1-based, e.g., '1,3,5-7,9' or '1-3,8-10').
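The page_range grammar (comma-separated 1-based pages and inclusive ranges) can be made concrete with a small parser. This is a hypothetical helper written for illustration only; `parse_page_range` is not part of the SDK.

```python
def parse_page_range(spec: str) -> list[int]:
    """Expand a page_range spec like '1,3,5-7,9' into 1-based page numbers."""
    pages: list[int] = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-")
            pages.extend(range(int(start), int(end) + 1))  # ranges are inclusive
        else:
            pages.append(int(part))
    return pages
```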

parse_model: Optional[Literal["openai-gpt-4o", "openai-gpt-4o-mini", "openai-gpt-4-1", 23 more]]

Public model names.

Accepts one of the following:
"openai-gpt-4o"
"openai-gpt-4o-mini"
"openai-gpt-4-1"
"openai-gpt-4-1-mini"
"openai-gpt-4-1-nano"
"openai-gpt-5"
"openai-gpt-5-mini"
"openai-gpt-5-nano"
"openai-text-embedding-3-large"
"openai-text-embedding-3-small"
"openai-whisper-1"
"anthropic-sonnet-3.5"
"anthropic-sonnet-3.5-v2"
"anthropic-sonnet-3.7"
"anthropic-sonnet-4.0"
"anthropic-sonnet-4.5"
"anthropic-haiku-3.5"
"anthropic-haiku-4.5"
"gemini-2.5-flash"
"gemini-3.0-pro"
"gemini-2.5-pro"
"gemini-2.0-flash"
"gemini-2.0-flash-lite"
"gemini-2.5-flash-lite"
"gemini-1.5-flash"
"gemini-1.5-pro"
priority: Optional[Literal["low", "medium", "high", "critical"]]

The priority for the request. This field may be ignored or overwritten depending on the organization tier.

Accepts one of the following:
"low"
"medium"
"high"
"critical"
system_prompt: Optional[str]

The system prompt to use for the extraction.

use_reasoning: Optional[bool]

Whether to use reasoning for the extraction.

data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

The schema of the data.

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
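In practice, data_schema is typically a JSON-Schema-style dict describing the fields to extract. The example below is illustrative only: the field names (vendor_name, invoice_total, line_items) are invented for this sketch, not prescribed by the API.

```python
# A JSON-Schema-style extraction schema (illustrative field names).
invoice_schema = {
    "type": "object",
    "properties": {
        "vendor_name": {"type": "string"},
        "invoice_total": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    },
}
```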
name: str

The name of the extraction agent.

project_id: str

The ID of the project that the extraction agent belongs to.

format: uuid
created_at: Optional[datetime]

The creation time of the extraction agent.

format: date-time
custom_configuration: Optional[Literal["default"]]

Custom configuration type for the extraction agent. Currently supports 'default'.

updated_at: Optional[datetime]

The last update time of the extraction agent.

format: date-time
status: Literal["PENDING", "SUCCESS", "ERROR", 2 more]

The status of the extraction job

Accepts one of the following:
"PENDING"
"SUCCESS"
"ERROR"
"PARTIAL_SUCCESS"
"CANCELLED"
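Since a freshly created job usually starts in PENDING, a common pattern is to poll this endpoint until the status reaches one of the terminal values above. A minimal sketch, assuming the client from the example below; the `is_terminal` and `wait_for_job` helpers are invented for illustration and not part of the SDK.

```python
import time

# Terminal statuses per the enum above; PENDING means still in flight.
TERMINAL_STATUSES = {"SUCCESS", "ERROR", "PARTIAL_SUCCESS", "CANCELLED"}


def is_terminal(status: str) -> bool:
    """A job is done once its status is one of the terminal values."""
    return status in TERMINAL_STATUSES


def wait_for_job(client, job_id: str, interval: float = 2.0):
    """Poll Get Job until the job reaches a terminal status."""
    while True:
        job = client.extraction.jobs.get(job_id)
        if is_terminal(job.status):
            return job
        time.sleep(interval)
```

On completion, check job.status (and job.error on ERROR) before consuming results.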
error: Optional[str]

The error that occurred during extraction

file: Optional[File] (Deprecated)

Schema for a file.

id: str

Unique identifier

format: uuid
name: str
project_id: str

The ID of the project that the file belongs to

format: uuid
created_at: Optional[datetime]

Creation datetime

format: date-time
data_source_id: Optional[str]

The ID of the data source that the file belongs to

format: uuid
expires_at: Optional[datetime]

The expiration date for the file. Files past this date can be deleted.

format: date-time
external_file_id: Optional[str]

The ID of the file in the external system

file_size: Optional[int]

Size of the file in bytes

minimum: 0
file_type: Optional[str]

File type (e.g. pdf, docx, etc.)

maxLength: 3000
minLength: 1
last_modified_at: Optional[datetime]

The last modified time of the file

format: date-time
permission_info: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]

Permission information for the file

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
purpose: Optional[str]

The intended purpose of the file (e.g., 'user_data', 'parse', 'extract', 'split', 'classify')

resource_info: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]

Resource information for the file

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
updated_at: Optional[datetime]

Update datetime

format: date-time
file_id: Optional[str]

The id of the file that the extract was extracted from

format: uuid

Example

import os
from llama_cloud import LlamaCloud

client = LlamaCloud(
    api_key=os.environ.get("LLAMA_CLOUD_API_KEY"),  # This is the default and can be omitted
)
extract_job = client.extraction.jobs.get(
    "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
)
print(extract_job.id)
Returns Examples
{
  "id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  "extraction_agent": {
    "id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    "config": {
      "chunk_mode": "PAGE",
      "citation_bbox": true,
      "cite_sources": true,
      "confidence_scores": true,
      "extract_model": "openai-gpt-4-1",
      "extraction_mode": "FAST",
      "extraction_target": "PER_DOC",
      "high_resolution_mode": true,
      "invalidate_cache": true,
      "multimodal_fast_mode": true,
      "num_pages_context": 1,
      "page_range": "page_range",
      "parse_model": "openai-gpt-4o",
      "priority": "low",
      "system_prompt": "system_prompt",
      "use_reasoning": true
    },
    "data_schema": {
      "foo": {
        "foo": "bar"
      }
    },
    "name": "name",
    "project_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    "created_at": "2019-12-27T18:11:19.117Z",
    "custom_configuration": "default",
    "updated_at": "2019-12-27T18:11:19.117Z"
  },
  "status": "PENDING",
  "error": "error",
  "file": {
    "id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    "name": "x",
    "project_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    "created_at": "2019-12-27T18:11:19.117Z",
    "data_source_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    "expires_at": "2019-12-27T18:11:19.117Z",
    "external_file_id": "external_file_id",
    "file_size": 0,
    "file_type": "x",
    "last_modified_at": "2019-12-27T18:11:19.117Z",
    "permission_info": {
      "foo": {
        "foo": "bar"
      }
    },
    "purpose": "purpose",
    "resource_info": {
      "foo": {
        "foo": "bar"
      }
    },
    "updated_at": "2019-12-27T18:11:19.117Z"
  },
  "file_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e"
}