Create Batch Job
Create a new batch processing job for a directory.
Processes all files in the specified directory according to the job configuration. The job runs asynchronously and you can monitor progress using the job status endpoint.
Parameters
params: BatchCreateParams { job_config, organization_id, project_id, 5 more }
job_config: BatchParseJobRecordCreate { correlation_id, job_name, parameters, 6 more } | ClassifyJob { id, project_id, rules, 9 more }
Body param: Job configuration for batch processing. Can be BatchParseJobRecordCreate or ClassifyJob.
BatchParseJobRecordCreate { correlation_id, job_name, parameters, 6 more }
Batch-specific parse job record for batch processing.
This model contains the metadata and configuration for a batch parse job, but excludes file-specific information. It's used as input to the batch parent workflow and combined with DirectoryFile data to create full ParseJobRecordCreate instances for each file.
Attributes:
job_name: Must be PARSE_RAW_FILE
partitions: Partitions for job output location
parameters: Generic parse configuration (BatchParseJobConfig)
session_id: Upstream request ID for tracking
correlation_id: Correlation ID for cross-service tracking
parent_job_execution_id: Parent job execution ID if nested
user_id: User who created the job
project_id: Project this job belongs to
webhook_url: Optional webhook URL for job completion notifications
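For orientation, here is a minimal sketch of a parse-mode job_config built only from the attributes listed above. Which of these fields are strictly required is not spelled out in this section, so all values are placeholders.
// Illustrative BatchParseJobRecordCreate-shaped job_config (values are placeholders).
const parseJobConfig = {
  job_name: 'PARSE_RAW_FILE',                           // must be PARSE_RAW_FILE
  project_id: 'my-project-id',                          // project this job belongs to
  partitions: { dataset: 'invoices-2024' },             // determines where job output is saved
  webhook_url: 'https://example.com/hooks/batch-done',  // optional completion notification
  parameters: { lang: 'en' },                           // generic parse configuration; see Parameters below
};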
correlation_id?: string | null
The correlation ID for this job. Used for tracking the job across services.
parameters?: Parameters | null
Generic parse job configuration for batch processing.
This model contains the parsing configuration that applies to all files in a batch, but excludes file-specific fields like file_name, file_id, etc. Those file-specific fields are populated from DirectoryFile data when creating individual ParseJobRecordCreate instances for each file.
The fields in this model should be generic settings that apply uniformly to all files being processed in the batch.
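As a rough sketch, a batch-wide parameters object assembled from the fields documented below might look like the following. Every value is a placeholder, and only fields that appear in this section are used.
// Illustrative BatchParseJobConfig (batch-wide parse settings; values are placeholders).
const batchParseParameters = {
  lang: 'en',                                             // the language
  priority: 'medium',                                     // may be ignored or overwritten by organization tier
  custom_metadata: { source: 'quarterly-import' },        // attached to the resulting documents
  images_to_save: ['screenshot', 'layout'],               // image types to keep
  output_s3_path_prefix: 's3://my-bucket/parse-output/',  // placeholder s3:// prefix
  output_s3_region: 'us-east-1',                          // placeholder region
};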
custom_metadata?: Record<string, unknown> | null
The custom metadata to attach to the documents.
images_to_save?: Array<"screenshot" | "embedded" | "layout"> | null
input_s3_region?: string | null
The region for the input S3 bucket.
lang?: string
The language.
output_s3_path_prefix?: string | null
If specified, LlamaParse will save the output to this path. All output files will use this prefix, which should be a valid s3:// URL.
output_s3_region?: string | null
The region for the output S3 bucket.
outputBucket?: string | null
The output bucket.
Enum for representing the mode of parsing to be used.
pipeline_id?: string | null
The pipeline ID.
priority?: "low" | "medium" | "high" | "critical" | null
The priority for the request. This field may be ignored or overwritten depending on the organization tier.
Enum for representing the different available page error handling modes.
resource_info?: Record<string, unknown> | null
The resource info about the file
webhook_configurations?: Array<WebhookConfiguration { webhook_events, webhook_headers, webhook_output_format, webhook_url } > | null
The outbound webhook configurations
webhook_events?: Array<"extract.pending" | "extract.success" | "extract.error" | 13 more> | null
List of event names to subscribe to
webhook_headers?: Record<string, string> | null
Custom HTTP headers to include with webhook requests.
webhook_output_format?: string | null
The output format to use for the webhook. Defaults to string if none supplied. Currently supported values: string, json
webhook_url?: string | null
The URL to send webhook notifications to.
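For illustration, a webhook_configurations entry assembled from the four fields above could look like this. Only event names that appear in the documented list are used; the endpoint and header values are placeholders.
// Illustrative WebhookConfiguration array (placeholder URL and header values).
const webhookConfigurations = [
  {
    webhook_url: 'https://example.com/hooks/llama-events',                   // where notifications are sent
    webhook_events: ['extract.pending', 'extract.success', 'extract.error'], // documented event names
    webhook_headers: { 'X-Auth-Token': 'replace-me' },                       // custom headers per request
    webhook_output_format: 'json',                                           // 'string' (default) or 'json'
  },
];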
parent_job_execution_id?: string | null
The ID of the parent job execution.
partitions?: Record<string, string>
The partitions for this execution. Used for determining where to save job output.
project_id?: string | null
The ID of the project this job belongs to.
session_id?: string | null
The upstream request ID that created this job. Used for tracking the job across services.
user_id?: string | null
The ID of the user that created this job
webhook_url?: string | null
The URL that needs to be called at the end of the parsing job.
ClassifyJob { id, project_id, rules, 9 more }
A classify job.
id: string
Unique identifier
project_id: string
The ID of the project
The rules to classify the files
description: string
Natural language description of what to classify. Be specific about the content characteristics that identify this document type.
type: string
The document type to assign when this rule matches (e.g., 'invoice', 'receipt', 'contract')
The status of the classify job
user_id: string
The ID of the user
created_at?: string | null
Creation datetime
error_message?: string | null
Error message for the latest job attempt, if any.
job_record_id?: string | null
The job record ID associated with this status, if any.
mode?: "FAST" | "MULTIMODAL"
The classification mode to use
The configuration for the parsing job
The language to parse the files in
max_pages?: number | null
The maximum number of pages to parse
target_pages?: Array<number> | null
The pages to target for parsing (0-indexed, so first page is at 0)
updated_at?: string | null
Update datetime
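To make the classify variant concrete, here is a hedged sketch of a classify-style job_config. The array shape of rules is inferred from the nested description/type fields above, and whether server-managed fields such as id must be supplied on create is not stated here, so treat this purely as an illustration.
// Illustrative ClassifyJob-shaped job_config (shape inferred from the fields above).
const classifyJobConfig = {
  project_id: 'my-project-id',   // placeholder
  user_id: 'my-user-id',         // placeholder
  mode: 'FAST',                  // or 'MULTIMODAL'
  max_pages: 5,                  // only parse the first 5 pages
  target_pages: [0, 1],          // 0-indexed pages to target
  rules: [
    { type: 'invoice', description: 'Documents with line items, totals, and a billing address.' },
    { type: 'receipt', description: 'Short proof-of-purchase documents with an amount paid and a date.' },
  ],
};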
organization_id?: string | null
Query param
project_id?: string | null
Query param
continue_as_new_threshold?: number | null
Body param: Maximum number of files to process before calling continue-as-new. If not set, continue-as-new is called after every batch (only used in directory mode).
directory_id?: string | null
Body param: ID of the directory containing files to process
item_ids?: Array<string> | null
Body param: List of specific item IDs to process. Either this or directory_id must be provided (see the request sketch after the parameter list).
page_size?: number
Body param: Number of files to fetch per batch from the directory (only used in directory mode)
temporalNamespace?: string
Header param
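Putting the body, query, and header parameters together, a directory-mode request could look like the sketch below. The IDs are placeholders, project_id follows the combined BatchCreateParams shape shown above, and the item_ids alternative is noted in a comment.
import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud({ apiKey: process.env['LLAMA_CLOUD_API_KEY'] });

// Directory mode: process every file in the directory, fetching 100 at a time.
const batch = await client.beta.batch.create({
  job_config: { job_name: 'PARSE_RAW_FILE', parameters: { lang: 'en' } }, // see BatchParseJobRecordCreate above
  directory_id: 'dir_123',           // placeholder; alternatively pass item_ids: ['item_1', 'item_2']
  page_size: 100,                    // files fetched per batch in directory mode
  continue_as_new_threshold: 1000,   // placeholder threshold before continue-as-new
  project_id: 'my-project-id',       // query param (placeholder)
});
console.log(batch.id, batch.status);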
Returns
BatchCreateResponse { id, job_type, project_id, 14 more }
Response schema for a batch processing job.
id: string
Unique identifier for the batch job
job_type: "parse" | "extract" | "classify"
Type of processing operation
project_id: string
Project this job belongs to
status: "pending" | "running" | "dispatched" | 3 more
Current status of the job
total_items: number
Total number of items in the job
completed_at?: string | null
Timestamp when job completed
created_at?: string | null
Creation datetime
directory_id?: string | null
Directory being processed
error_message?: string | null
Error message for the latest job attempt, if any.
failed_items?: number
Number of items that failed processing
job_record_id?: string | null
The job record ID associated with this status, if any.
processed_items?: number
Number of items processed so far
skipped_items?: number
Number of items skipped (already processed or over the size limit)
started_at?: string | null
Timestamp when job processing started
updated_at?: string | null
Update datetime
workflow_id?: string | null
Temporal workflow ID for this batch job
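The endpoint description says progress can be monitored via the job status endpoint, but the corresponding SDK call is not shown in this section, so the retrieve method below is an assumption; substitute whatever status method your SDK version exposes. The progress arithmetic uses only the documented response fields.
import LlamaCloud from '@llamaindex/llama-cloud';

// ASSUMPTION: client.beta.batch.retrieve(id) stands in for the job status endpoint;
// the real method name may differ in your SDK version.
async function waitForBatch(client: LlamaCloud, batchId: string) {
  while (true) {
    const job = await client.beta.batch.retrieve(batchId); // hypothetical status call
    const done = (job.processed_items ?? 0) + (job.failed_items ?? 0) + (job.skipped_items ?? 0);
    console.log(`${job.status}: ${done}/${job.total_items} items`);
    if (job.status !== 'pending' && job.status !== 'running' && job.status !== 'dispatched') {
      return job; // any other status is treated as terminal here
    }
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // poll every 5 seconds
  }
}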
Create Batch Job
import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud({
  apiKey: process.env['LLAMA_CLOUD_API_KEY'], // This is the default and can be omitted
});

// job_config accepts either a BatchParseJobRecordCreate or a ClassifyJob payload (see Parameters above).
const batch = await client.beta.batch.create({ job_config: {} });
console.log(batch.id);
Returns Examples
{
"id": "id",
"job_type": "parse",
"project_id": "project_id",
"status": "pending",
"total_items": 0,
"completed_at": "2019-12-27T18:11:19.117Z",
"created_at": "2019-12-27T18:11:19.117Z",
"directory_id": "directory_id",
"effective_at": "2019-12-27T18:11:19.117Z",
"error_message": "error_message",
"failed_items": 0,
"job_record_id": "job_record_id",
"processed_items": 0,
"skipped_items": 0,
"started_at": "2019-12-27T18:11:19.117Z",
"updated_at": "2019-12-27T18:11:19.117Z",
"workflow_id": "workflow_id"
}