Create Batch Job
Create a new batch processing job for a directory.
Processes all files in the specified directory according to the job configuration. The job runs asynchronously and you can monitor progress using the job status endpoint.
Parameters
job_config
Job configuration for batch processing. Can be BatchParseJobRecordCreate or ClassifyJob.
class JobConfigBatchParseJobRecordCreate: …
Batch-specific parse job record for batch processing.
This model contains the metadata and configuration for a batch parse job, but excludes file-specific information. It's used as input to the batch parent workflow and combined with DirectoryFile data to create full ParseJobRecordCreate instances for each file.
Attributes:
    job_name: Must be PARSE_RAW_FILE
    partitions: Partitions for job output location
    parameters: Generic parse configuration (BatchParseJobConfig)
    session_id: Upstream request ID for tracking
    correlation_id: Correlation ID for cross-service tracking
    parent_job_execution_id: Parent job execution ID if nested
    user_id: User who created the job
    project_id: Project this job belongs to
    webhook_url: Optional webhook URL for job completion notifications
correlation_id: Optional[str]
The correlation ID for this job. Used for tracking the job across services.
parameters: Optional[JobConfigBatchParseJobRecordCreateParameters]
Generic parse job configuration for batch processing.
This model contains the parsing configuration that applies to all files in a batch, but excludes file-specific fields like file_name, file_id, etc. Those file-specific fields are populated from DirectoryFile data when creating individual ParseJobRecordCreate instances for each file.
The fields in this model should be generic settings that apply uniformly to all files being processed in the batch.
custom_metadata: Optional[Dict[str, object]]
The custom metadata to attach to the documents.
images_to_save: Optional[List[Literal["screenshot", "embedded", "layout"]]]
The types of images to save.
input_s3_region: Optional[str]
The region for the input S3 bucket.
lang: Optional[str]
The language.
output_s3_path_prefix: Optional[str]
If specified, LlamaParse will save the output to the specified path. All output files will use this prefix, which should be a valid s3:// URL.
output_s3_region: Optional[str]
The region for the output S3 bucket.
output_bucket: Optional[str]
The output bucket.
parse_mode: Optional[ParsingMode]
The parsing mode to use.
pipeline_id: Optional[str]
The pipeline ID.
priority: Optional[Literal["low", "medium", "high", "critical"]]
The priority for the request. This field may be ignored or overwritten depending on the organization tier.
replace_failed_page_mode: Optional[FailPageMode]
The mode for handling pages that fail to parse.
resource_info: Optional[Dict[str, object]]
The resource info about the file
The outbound webhook configurations
webhook_events: Optional[List[Literal["extract.pending", "extract.success", "extract.error", 13 more]]]
List of event names to subscribe to
webhook_headers: Optional[Dict[str, str]]
Custom HTTP headers to include with webhook requests.
webhook_output_format: Optional[str]
The output format to use for the webhook. Defaults to string if none supplied. Currently supported values: string, json
webhook_url: Optional[str]
The URL to send webhook notifications to.
parent_job_execution_id: Optional[str]
The ID of the parent job execution.
partitions: Optional[Dict[str, str]]
The partitions for this execution. Used for determining where to save job output.
project_id: Optional[str]
The ID of the project this job belongs to.
session_id: Optional[str]
The upstream request ID that created this job. Used for tracking the job across services.
user_id: Optional[str]
The ID of the user that created this job
webhook_url: Optional[str]
The URL that needs to be called at the end of the parsing job.
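Putting the fields above together, a batch-parse job_config can be passed to create() as a plain dict, as in the example at the bottom of this page. The sketch below is illustrative only: the partition keys, project ID, bucket path, and webhook URLs are placeholders, and any field you omit falls back to its default.

import os

from llama_cloud import LlamaCloud

client = LlamaCloud(api_key=os.environ.get("LLAMA_CLOUD_API_KEY"))

# Illustrative batch-parse job_config; all IDs, partitions, paths, and URLs are placeholders.
batch = client.beta.batch.create(
    job_config={
        "job_name": "PARSE_RAW_FILE",  # per the attribute list above, must be PARSE_RAW_FILE
        "partitions": {"team": "docs"},  # placeholder partition keys for the output location
        "parameters": {
            "lang": "en",
            "priority": "medium",
            "output_s3_path_prefix": "s3://my-bucket/batch-output/",  # placeholder bucket path
            "custom_metadata": {"source": "quarterly-reports"},
            "webhook_url": "https://example.com/hooks/parse",
            "webhook_events": ["extract.success", "extract.error"],
        },
        "project_id": "proj_123",  # placeholder project ID
        "webhook_url": "https://example.com/hooks/batch-complete",
    },
)
print(batch.id)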
class ClassifyJob: …
A classify job.
id: str
Unique identifier
project_id: str
The ID of the project
The rules to classify the files
description: str
Natural language description of what to classify. Be specific about the content characteristics that identify this document type.
type: str
The document type to assign when this rule matches (e.g., 'invoice', 'receipt', 'contract')
The status of the classify job
user_id: str
The ID of the user
created_at: Optional[datetime]
Creation datetime
error_message: Optional[str]
Error message for the latest job attempt, if any.
job_record_id: Optional[str]
The job record ID associated with this status, if any.
mode: Optional[Literal["FAST", "MULTIMODAL"]]
The classification mode to use
parsing_configuration: Optional[ClassifyParsingConfiguration]
The configuration for the parsing job
lang: Optional[ParsingLanguages]
The language to parse the files in
max_pages: Optional[int]
The maximum number of pages to parse
target_pages: Optional[List[int]]
The pages to target for parsing (0-indexed, so the first page is page 0)
updated_at: Optional[datetime]
Update datetime
continue_as_new_threshold: Optional[int]
Maximum number of files to process before calling continue-as-new. If None, continue-as-new is called after every batch. (only used in directory mode)
directory_id: Optional[str]
ID of the directory containing files to process
item_ids: Optional[SequenceNotStr[str]]
List of specific item IDs to process. Either this or directory_id must be provided.
page_size: Optional[int]
Number of files to fetch per batch from the directory (only used in directory mode)
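For classification jobs, job_config carries the classification rules and an optional parsing configuration. A minimal sketch, with two caveats: the key name for the rules list ("rules") is an assumption, since the reference above only documents the rule fields themselves, and the directory ID is a placeholder; record-keeping fields such as id, user_id, and the timestamps are omitted here.

import os

from llama_cloud import LlamaCloud

client = LlamaCloud(api_key=os.environ.get("LLAMA_CLOUD_API_KEY"))

# Illustrative classify job_config; the "rules" key name and directory_id are assumptions/placeholders.
batch = client.beta.batch.create(
    job_config={
        "rules": [
            {
                "description": "Invoices: documents with line items, totals, and a billing address.",
                "type": "invoice",
            },
            {
                "description": "Contracts: documents with signature blocks and defined legal terms.",
                "type": "contract",
            },
        ],
        "mode": "FAST",
        "parsing_configuration": {
            "max_pages": 5,
            "target_pages": [0, 1],  # 0-indexed, so the first page is page 0
        },
        "directory_id": "dir_123",  # placeholder; alternatively pass item_ids instead of directory_id
    },
)
print(batch.id, batch.job_type)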
Returns
class BatchCreateResponse: …
Response schema for a batch processing job.
id: str
Unique identifier for the batch job
job_type: Literal["parse", "extract", "classify"]
Type of processing operation
project_id: str
Project this job belongs to
status: Literal["pending", "running", "dispatched", 3 more]
Current status of the job
total_items: int
Total number of items in the job
completed_at: Optional[datetime]
Timestamp when job completed
created_at: Optional[datetime]
Creation datetime
directory_id: Optional[str]
Directory being processed
error_message: Optional[str]
Error message for the latest job attempt, if any.
failed_items: Optional[int]
Number of items that failed processing
job_record_id: Optional[str]
The job record ID associated with this status, if any.
processed_items: Optional[int]
Number of items processed so far
skipped_items: Optional[int]
Number of items skipped (already processed or over the size limit)
started_at: Optional[datetime]
Timestamp when job processing started
updated_at: Optional[datetime]
Update datetime
workflow_id: Optional[str]
Temporal workflow ID for this batch job
Create Batch Job
import os
from llama_cloud import LlamaCloud
client = LlamaCloud(
    api_key=os.environ.get("LLAMA_CLOUD_API_KEY"),  # This is the default and can be omitted
)
batch = client.beta.batch.create(
    job_config={},
)
print(batch.id)
"id": "id",
"job_type": "parse",
"project_id": "project_id",
"status": "pending",
"total_items": 0,
"completed_at": "2019-12-27T18:11:19.117Z",
"created_at": "2019-12-27T18:11:19.117Z",
"directory_id": "directory_id",
"effective_at": "2019-12-27T18:11:19.117Z",
"error_message": "error_message",
"failed_items": 0,
"job_record_id": "job_record_id",
"processed_items": 0,
"skipped_items": 0,
"started_at": "2019-12-27T18:11:19.117Z",
"updated_at": "2019-12-27T18:11:19.117Z",
"workflow_id": "workflow_id"
}Returns Examples
{
"id": "id",
"job_type": "parse",
"project_id": "project_id",
"status": "pending",
"total_items": 0,
"completed_at": "2019-12-27T18:11:19.117Z",
"created_at": "2019-12-27T18:11:19.117Z",
"directory_id": "directory_id",
"effective_at": "2019-12-27T18:11:19.117Z",
"error_message": "error_message",
"failed_items": 0,
"job_record_id": "job_record_id",
"processed_items": 0,
"skipped_items": 0,
"started_at": "2019-12-27T18:11:19.117Z",
"updated_at": "2019-12-27T18:11:19.117Z",
"workflow_id": "workflow_id"
}
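Because the job runs asynchronously, a typical pattern is to poll until the job leaves its in-flight statuses. The sketch below assumes a status-retrieval method named client.beta.batch.get(batch_id) that returns the BatchCreateResponse schema above; that method name is an assumption, so check the SDK for the actual job status call, and note that the full set of status values is truncated in the schema listing.

import os
import time

from llama_cloud import LlamaCloud

client = LlamaCloud(api_key=os.environ.get("LLAMA_CLOUD_API_KEY"))

batch = client.beta.batch.create(job_config={})

# Poll until the job leaves its in-flight statuses.
# NOTE: client.beta.batch.get is an assumed method name; consult the SDK for the real status call.
while True:
    status = client.beta.batch.get(batch.id)
    print(f"{status.status}: {status.processed_items}/{status.total_items} items processed")
    if status.status not in ("pending", "dispatched", "running"):
        break
    time.sleep(10)

print("failed:", status.failed_items, "skipped:", status.skipped_items)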