
Pipelines

Search Pipelines
GET /api/v1/pipelines
Create Pipeline
POST /api/v1/pipelines
Get Pipeline
GET /api/v1/pipelines/{pipeline_id}
Update Existing Pipeline
PUT /api/v1/pipelines/{pipeline_id}
Delete Pipeline
DELETE /api/v1/pipelines/{pipeline_id}
Get Pipeline Status
GET /api/v1/pipelines/{pipeline_id}/status
Upsert Pipeline
PUT /api/v1/pipelines
Run Search
POST /api/v1/pipelines/{pipeline_id}/retrieve
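
All of the endpoints above are plain JSON-over-HTTPS calls. The sketch below is a minimal example of creating a pipeline and then running a search against it with the Python requests library; the base URL, the Bearer-token Authorization header, the LLAMA_CLOUD_API_KEY environment variable, and the "query" field in the retrieve body are assumptions for illustration, not confirmed by this reference.

import os
import requests

# Assumed base URL and bearer-token auth; adjust for your deployment.
BASE_URL = "https://api.cloud.llamaindex.ai"
HEADERS = {
    "Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}",
    "Content-Type": "application/json",
}

# Create Pipeline: POST /api/v1/pipelines with at least a name.
resp = requests.post(f"{BASE_URL}/api/v1/pipelines", headers=HEADERS, json={"name": "my-pipeline"})
resp.raise_for_status()
pipeline_id = resp.json()["id"]

# Run Search: POST /api/v1/pipelines/{pipeline_id}/retrieve.
resp = requests.post(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/retrieve",
    headers=HEADERS,
    json={"query": "What does the quarterly report say about revenue?"},  # field name assumed
)
resp.raise_for_status()
print(resp.json())
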
Models
AdvancedModeTransformConfig = object { chunking_config, mode, segmentation_config }
chunking_config: optional object { mode } or object { chunk_overlap, chunk_size, mode } or object { chunk_overlap, chunk_size, mode, separator } or 2 more

Configuration for the chunking.

Accepts one of the following:
NoneChunkingConfig = object { mode }
mode: optional "none"
CharacterChunkingConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "character"
TokenChunkingConfig = object { chunk_overlap, chunk_size, mode, separator }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "token"
separator: optional string
SentenceChunkingConfig = object { chunk_overlap, chunk_size, mode, 2 more }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "sentence"
paragraph_separator: optional string
separator: optional string
SemanticChunkingConfig = object { breakpoint_percentile_threshold, buffer_size, mode }
breakpoint_percentile_threshold: optional number
buffer_size: optional number
mode: optional "semantic"
mode: optional "advanced"
segmentation_config: optional object { mode } or object { mode, page_separator } or object { mode }

Configuration for the segmentation.

Accepts one of the following:
NoneSegmentationConfig = object { mode }
mode: optional "none"
PageSegmentationConfig = object { mode, page_separator }
mode: optional "page"
page_separator: optional string
ElementSegmentationConfig = object { mode }
mode: optional "element"
AutoTransformConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number

Chunk overlap for the transformation.

chunk_size: optional number

Chunk size for the transformation.

exclusiveMinimum: 0
mode: optional "auto"
AzureOpenAIEmbedding = object { additional_kwargs, api_base, api_key, 12 more }
additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for Azure deployment.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for Azure OpenAI API.

azure_deployment: optional string

The Azure deployment to use.

azure_endpoint: optional string

The Azure endpoint to use.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
AzureOpenAIEmbeddingConfig = object { component, type }
component: optional AzureOpenAIEmbedding { additional_kwargs, api_base, api_key, 12 more }

Configuration for the Azure OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for Azure deployment.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for Azure OpenAI API.

azure_deployment: optional string

The Azure deployment to use.

azure_endpoint: optional string

The Azure endpoint to use.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
type: optional "AZURE_EMBEDDING"

Type of the embedding model.
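
As a concrete illustration, an AzureOpenAIEmbeddingConfig built from the fields above could look like the following sketch; the endpoint, deployment, API version, and environment variable are placeholders.

import os

# Hypothetical Azure OpenAI embedding config; all values are placeholders.
azure_embedding_config = {
    "type": "AZURE_EMBEDDING",
    "component": {
        "azure_endpoint": "https://my-resource.openai.azure.com",
        "azure_deployment": "text-embedding-3-small",
        "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
        "api_version": "2024-02-01",
        "model_name": "text-embedding-3-small",
        "embed_batch_size": 100,  # > 0 and <= 2048
        "max_retries": 3,         # minimum: 0
    },
}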

BedrockEmbedding = object { additional_kwargs, aws_access_key_id, aws_secret_access_key, 9 more }
additional_kwargs: optional map[unknown]

Additional kwargs for the bedrock client.

aws_access_key_id: optional string

AWS Access Key ID to use

aws_secret_access_key: optional string

AWS Secret Access Key to use

aws_session_token: optional string

AWS Session Token to use

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

The maximum number of API retries.

exclusiveMinimum: 0
model_name: optional string

The modelId of the Bedrock model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

profile_name: optional string

The name of aws profile to use. If not given, then the default profile is used.

region_name: optional string

AWS region name to use. Uses region configured in AWS CLI if not passed

timeout: optional number

The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts.

BedrockEmbeddingConfig = object { component, type }
component: optional BedrockEmbedding { additional_kwargs, aws_access_key_id, aws_secret_access_key, 9 more }

Configuration for the Bedrock embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the bedrock client.

aws_access_key_id: optional string

AWS Access Key ID to use

aws_secret_access_key: optional string

AWS Secret Access Key to use

aws_session_token: optional string

AWS Session Token to use

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

The maximum number of API retries.

exclusiveMinimum: 0
model_name: optional string

The modelId of the Bedrock model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

profile_name: optional string

The name of aws profile to use. If not given, then the default profile is used.

region_name: optional string

AWS region name to use. Uses region configured in AWS CLI if not passed

timeout: optional number

The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts.

type: optional "BEDROCK_EMBEDDING"

Type of the embedding model.
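
A BedrockEmbeddingConfig can be assembled the same way; the sketch below passes explicit AWS credentials, though profile_name or the default credential chain would also work. The model ID and region are examples only.

import os

# Hypothetical Bedrock embedding config; credentials, region, and model ID are placeholders.
bedrock_embedding_config = {
    "type": "BEDROCK_EMBEDDING",
    "component": {
        "model_name": "amazon.titan-embed-text-v2:0",
        "region_name": "us-east-1",
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "embed_batch_size": 32,  # > 0 and <= 2048
        "max_retries": 5,        # must be > 0
    },
}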

CohereEmbedding = object { api_key, class_name, embed_batch_size, 5 more }
api_key: string

The Cohere API key.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
embedding_type: optional string

Embedding type. If not provided float embedding_type is used when needed.

input_type: optional string

Model Input type. If not provided, search_document and search_query are used when needed.

model_name: optional string

The modelId of the Cohere model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

truncate: optional string

Truncation type - START/ END/ NONE

CohereEmbeddingConfig = object { component, type }
component: optional CohereEmbedding { api_key, class_name, embed_batch_size, 5 more }

Configuration for the Cohere embedding model.

api_key: string

The Cohere API key.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
embedding_type: optional string

Embedding type. If not provided float embedding_type is used when needed.

input_type: optional string

Model Input type. If not provided, search_document and search_query are used when needed.

model_name: optional string

The modelId of the Cohere model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

truncate: optional string

Truncation type - START/ END/ NONE

type: optional "COHERE_EMBEDDING"

Type of the embedding model.

DataSinkCreate = object { component, name, sink_type }

Schema for creating a data sink.

component: map[unknown] or CloudPineconeVectorStore { api_key, index_name, class_name, 3 more } or CloudPostgresVectorStore { database, embed_dim, host, 10 more } or 5 more

Component that implements the data sink

Accepts one of the following:
UnionMember0 = map[unknown]
CloudPineconeVectorStore = object { api_key, index_name, class_name, 3 more }

Cloud Pinecone Vector Store.

This class is used to store the configuration for a Pinecone vector store, so that it can be created and used in LlamaCloud.

Args:
api_key (str): API key for authenticating with Pinecone
index_name (str): name of the Pinecone index
namespace (optional[str]): namespace to use in the Pinecone index
insert_kwargs (optional[dict]): additional kwargs to pass during insertion

api_key: string

The API key for authenticating with Pinecone

format: password
index_name: string
class_name: optional string
insert_kwargs: optional map[unknown]
namespace: optional string
supports_nested_metadata_filters: optional true
CloudPostgresVectorStore = object { database, embed_dim, host, 10 more }
database: string
embed_dim: number
host: string
password: string
port: number
schema_name: string
table_name: string
user: string
class_name: optional string
hnsw_settings: optional PgVectorHnswSettings { distance_method, ef_construction, ef_search, 2 more }

HNSW settings for PGVector.

distance_method: optional "l2" or "ip" or "cosine" or 3 more

The distance method to use.

Accepts one of the following:
"l2"
"ip"
"cosine"
"l1"
"hamming"
"jaccard"
ef_construction: optional number

The number of edges to use during the construction phase.

minimum: 1
ef_search: optional number

The number of edges to use during the search phase.

minimum: 1
m: optional number

The number of bi-directional links created for each new element.

minimum: 1
vector_type: optional "vector" or "half_vec" or "bit" or "sparse_vec"

The type of vector to use.

Accepts one of the following:
"vector"
"half_vec"
"bit"
"sparse_vec"
perform_setup: optional boolean
supports_nested_metadata_filters: optional boolean
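
For example, a CloudPostgresVectorStore component with explicit HNSW settings might be written as the sketch below; every connection value is a placeholder.

# Hypothetical Postgres/pgvector data sink component; connection values are placeholders.
postgres_vector_store = {
    "host": "db.example.internal",
    "port": 5432,
    "database": "llamacloud",
    "user": "llama",
    "password": "change-me",
    "schema_name": "public",
    "table_name": "pipeline_embeddings",
    "embed_dim": 1536,
    "hnsw_settings": {
        "distance_method": "cosine",
        "m": 16,                 # minimum: 1
        "ef_construction": 128,  # minimum: 1
        "ef_search": 64,         # minimum: 1
    },
}
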
CloudQdrantVectorStore = object { api_key, collection_name, url, 4 more }

Cloud Qdrant Vector Store.

This class is used to store the configuration for a Qdrant vector store, so that it can be created and used in LlamaCloud.

Args:
collection_name (str): name of the Qdrant collection
url (str): url of the Qdrant instance
api_key (str): API key for authenticating with Qdrant
max_retries (int): maximum number of retries in case of a failure. Defaults to 3
client_kwargs (dict): additional kwargs to pass to the Qdrant client

api_key: string
collection_name: string
url: string
class_name: optional string
client_kwargs: optional map[unknown]
max_retries: optional number
supports_nested_metadata_filters: optional true
CloudAzureAISearchVectorStore = object { search_service_api_key, search_service_endpoint, class_name, 8 more }

Cloud Azure AI Search Vector Store.

search_service_api_key: string
search_service_endpoint: string
class_name: optional string
client_id: optional string
client_secret: optional string
embedding_dimension: optional number
filterable_metadata_field_keys: optional map[unknown]
index_name: optional string
search_service_api_version: optional string
supports_nested_metadata_filters: optional true
tenant_id: optional string

CloudMongoDBAtlasVectorStore = object { mongodb_uri, db_name, collection_name, 2 more }

Cloud MongoDB Atlas Vector Store.

This class is used to store the configuration for a MongoDB Atlas vector store, so that it can be created and used in LlamaCloud.

Args:
mongodb_uri (str): URI for connecting to MongoDB Atlas
db_name (str): name of the MongoDB database
collection_name (str): name of the MongoDB collection
vector_index_name (str): name of the MongoDB Atlas vector index
fulltext_index_name (str): name of the MongoDB Atlas full-text index

CloudMilvusVectorStore = object { uri, token, class_name, 3 more }

Cloud Milvus Vector Store.

uri: string
token: optional string
class_name: optional string
collection_name: optional string
embedding_dimension: optional number
supports_nested_metadata_filters: optional boolean
CloudAstraDBVectorStore = object { token, api_endpoint, collection_name, 4 more }

Cloud AstraDB Vector Store.

This class is used to store the configuration for an AstraDB vector store, so that it can be created and used in LlamaCloud.

Args:
token (str): The Astra DB Application Token to use.
api_endpoint (str): The Astra DB JSON API endpoint for your database.
collection_name (str): Collection name to use. If not existing, it will be created.
embedding_dimension (int): Length of the embedding vectors in use.
keyspace (optional[str]): The keyspace to use. If not provided, 'default_keyspace'

token: string

The Astra DB Application Token to use

format: password
api_endpoint: string

The Astra DB JSON API endpoint for your database

collection_name: string

Collection name to use. If not existing, it will be created

embedding_dimension: number

Length of the embedding vectors in use

class_name: optional string
keyspace: optional string

The keyspace to use. If not provided, 'default_keyspace'

supports_nested_metadata_filters: optional true
name: string

The name of the data sink.

sink_type: "PINECONE" or "POSTGRES" or "QDRANT" or 4 more
Accepts one of the following:
"PINECONE"
"POSTGRES"
"QDRANT"
"AZUREAI_SEARCH"
"MONGODB_ATLAS"
"MILVUS"
"ASTRA_DB"
GeminiEmbedding = object { api_base, api_key, class_name, 6 more }
api_base: optional string

API base to access the model. Defaults to None.

api_key: optional string

API key to access the model. Defaults to None.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
model_name: optional string

The modelId of the Gemini model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

task_type: optional string

The task for embedding model.

title: optional string

Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.

transport: optional string

Transport to access the model. Defaults to None.

GeminiEmbeddingConfig = object { component, type }
component: optional GeminiEmbedding { api_base, api_key, class_name, 6 more }

Configuration for the Gemini embedding model.

api_base: optional string

API base to access the model. Defaults to None.

api_key: optional string

API key to access the model. Defaults to None.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
model_name: optional string

The modelId of the Gemini model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

task_type: optional string

The task for embedding model.

title: optional string

Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.

transport: optional string

Transport to access the model. Defaults to None.

type: optional "GEMINI_EMBEDDING"

Type of the embedding model.

HuggingFaceInferenceAPIEmbedding = object { token, class_name, cookies, 9 more }
token: optional string or boolean

Hugging Face token. Will default to the locally saved token. Pass token=False if you don’t want to send your token to the server.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = boolean
class_name: optional string
cookies: optional map[string]

Additional cookies to send to the server.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
headers: optional map[string]

Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.

model_name: optional string

Hugging Face model name. If None, the task will be used.

num_workers: optional number

The number of workers to use for async embedding calls.

pooling: optional "cls" or "mean" or "last"

Enum of possible pooling choices with pooling behaviors.

Accepts one of the following:
"cls"
"mean"
"last"
query_instruction: optional string

Instruction to prepend during query embedding.

task: optional string

Optional task to pick Hugging Face's recommended model, used when model_name is left as default of None.

text_instruction: optional string

Instruction to prepend during text embedding.

timeout: optional number

The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.

HuggingFaceInferenceAPIEmbeddingConfig = object { component, type }
component: optional HuggingFaceInferenceAPIEmbedding { token, class_name, cookies, 9 more }

Configuration for the HuggingFace Inference API embedding model.

token: optional string or boolean

Hugging Face token. Will default to the locally saved token. Pass token=False if you don’t want to send your token to the server.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = boolean
class_name: optional string
cookies: optional map[string]

Additional cookies to send to the server.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
headers: optional map[string]

Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.

model_name: optional string

Hugging Face model name. If None, the task will be used.

num_workers: optional number

The number of workers to use for async embedding calls.

pooling: optional "cls" or "mean" or "last"

Enum of possible pooling choices with pooling behaviors.

Accepts one of the following:
"cls"
"mean"
"last"
query_instruction: optional string

Instruction to prepend during query embedding.

task: optional string

Optional task to pick Hugging Face's recommended model, used when model_name is left as default of None.

text_instruction: optional string

Instruction to prepend during text embedding.

timeout: optional number

The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.

type: optional "HUGGINGFACE_API_EMBEDDING"

Type of the embedding model.

LlamaParseParameters = object { adaptive_long_table, aggressive_table_extraction, annotate_links, 115 more }

Settings that can be configured for how to use LlamaParse to parse files within a LlamaCloud pipeline.

adaptive_long_table: optional boolean
aggressive_table_extraction: optional boolean
auto_mode: optional boolean
auto_mode_configuration_json: optional string
auto_mode_trigger_on_image_in_page: optional boolean
auto_mode_trigger_on_regexp_in_page: optional string
auto_mode_trigger_on_table_in_page: optional boolean
auto_mode_trigger_on_text_in_page: optional string
azure_openai_api_version: optional string
azure_openai_deployment_name: optional string
azure_openai_endpoint: optional string
azure_openai_key: optional string
bbox_bottom: optional number
bbox_left: optional number
bbox_right: optional number
bbox_top: optional number
bounding_box: optional string
compact_markdown_table: optional boolean
complemental_formatting_instruction: optional string
content_guideline_instruction: optional string
continuous_mode: optional boolean
disable_image_extraction: optional boolean
disable_ocr: optional boolean
disable_reconstruction: optional boolean
do_not_cache: optional boolean
do_not_unroll_columns: optional boolean
enable_cost_optimizer: optional boolean
extract_charts: optional boolean
extract_layout: optional boolean
extract_printed_page_number: optional boolean
fast_mode: optional boolean
formatting_instruction: optional string
gpt4o_api_key: optional string
gpt4o_mode: optional boolean
guess_xlsx_sheet_name: optional boolean
hide_footers: optional boolean
hide_headers: optional boolean
high_res_ocr: optional boolean
html_make_all_elements_visible: optional boolean
html_remove_fixed_elements: optional boolean
html_remove_navigation_elements: optional boolean
http_proxy: optional string
ignore_document_elements_for_layout_detection: optional boolean
images_to_save: optional array of "screenshot" or "embedded" or "layout"
Accepts one of the following:
"screenshot"
"embedded"
"layout"
inline_images_in_markdown: optional boolean
input_s3_path: optional string
input_s3_region: optional string
input_url: optional string
internal_is_screenshot_job: optional boolean
invalidate_cache: optional boolean
is_formatting_instruction: optional boolean
job_timeout_extra_time_per_page_in_seconds: optional number
job_timeout_in_seconds: optional number
keep_page_separator_when_merging_tables: optional boolean
languages: optional array of ParsingLanguages
Accepts one of the following:
"af"
"az"
"bs"
"cs"
"cy"
"da"
"de"
"en"
"es"
"et"
"fr"
"ga"
"hr"
"hu"
"id"
"is"
"it"
"ku"
"la"
"lt"
"lv"
"mi"
"ms"
"mt"
"nl"
"no"
"oc"
"pi"
"pl"
"pt"
"ro"
"rs_latin"
"sk"
"sl"
"sq"
"sv"
"sw"
"tl"
"tr"
"uz"
"vi"
"ar"
"fa"
"ug"
"ur"
"bn"
"as"
"mni"
"ru"
"rs_cyrillic"
"be"
"bg"
"uk"
"mn"
"abq"
"ady"
"kbd"
"ava"
"dar"
"inh"
"che"
"lbe"
"lez"
"tab"
"tjk"
"hi"
"mr"
"ne"
"bh"
"mai"
"ang"
"bho"
"mah"
"sck"
"new"
"gom"
"sa"
"bgc"
"th"
"ch_sim"
"ch_tra"
"ja"
"ko"
"ta"
"te"
"kn"
layout_aware: optional boolean
line_level_bounding_box: optional boolean
markdown_table_multiline_header_separator: optional string
max_pages: optional number
max_pages_enforced: optional number
merge_tables_across_pages_in_markdown: optional boolean
model: optional string
outlined_table_extraction: optional boolean
output_pdf_of_document: optional boolean
output_s3_path_prefix: optional string
output_s3_region: optional string
output_tables_as_HTML: optional boolean
page_error_tolerance: optional number
page_header_prefix: optional string
page_header_suffix: optional string
page_prefix: optional string
page_separator: optional string
page_suffix: optional string
parse_mode: optional ParsingMode

Enum for representing the mode of parsing to be used.

Accepts one of the following:
"parse_page_without_llm"
"parse_page_with_llm"
"parse_page_with_lvm"
"parse_page_with_agent"
"parse_page_with_layout_agent"
"parse_document_with_llm"
"parse_document_with_lvm"
"parse_document_with_agent"
parsing_instruction: optional string
precise_bounding_box: optional boolean
premium_mode: optional boolean
presentation_out_of_bounds_content: optional boolean
presentation_skip_embedded_data: optional boolean
preserve_layout_alignment_across_pages: optional boolean
preserve_very_small_text: optional boolean
preset: optional string
priority: optional "low" or "medium" or "high" or "critical"

The priority for the request. This field may be ignored or overwritten depending on the organization tier.

Accepts one of the following:
"low"
"medium"
"high"
"critical"
project_id: optional string
remove_hidden_text: optional boolean
replace_failed_page_mode: optional FailPageMode

Enum for representing the different available page error handling modes.

Accepts one of the following:
"raw_text"
"blank_page"
"error_message"
replace_failed_page_with_error_message_prefix: optional string
replace_failed_page_with_error_message_suffix: optional string
save_images: optional boolean
skip_diagonal_text: optional boolean
specialized_chart_parsing_agentic: optional boolean
specialized_chart_parsing_efficient: optional boolean
specialized_chart_parsing_plus: optional boolean
specialized_image_parsing: optional boolean
spreadsheet_extract_sub_tables: optional boolean
spreadsheet_force_formula_computation: optional boolean
strict_mode_buggy_font: optional boolean
strict_mode_image_extraction: optional boolean
strict_mode_image_ocr: optional boolean
strict_mode_reconstruction: optional boolean
structured_output: optional boolean
structured_output_json_schema: optional string
structured_output_json_schema_name: optional string
system_prompt: optional string
system_prompt_append: optional string
take_screenshot: optional boolean
target_pages: optional string
tier: optional string
use_vendor_multimodal_model: optional boolean
user_prompt: optional string
vendor_multimodal_api_key: optional string
vendor_multimodal_model_name: optional string
version: optional string
webhook_configurations: optional array of WebhookConfiguration { webhook_events, webhook_headers, webhook_output_format, webhook_url }

The outbound webhook configurations

webhook_events: optional array of "extract.pending" or "extract.success" or "extract.error" or 13 more

List of event names to subscribe to

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers: optional map[string]

Custom HTTP headers to include with webhook requests.

webhook_output_format: optional string

The output format to use for the webhook. Defaults to string if none supplied. Currently supported values: string, json

webhook_url: optional string

The URL to send webhook notifications to.

webhook_url: optional string
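
LlamaParseParameters is large, but in practice only a handful of fields are usually set. The sketch below enables agentic page parsing, restricts OCR languages, and registers a webhook; every value is illustrative and the webhook URL is a placeholder.

# Hypothetical LlamaParse settings for a pipeline; values are illustrative only.
llama_parse_parameters = {
    "parse_mode": "parse_page_with_agent",
    "languages": ["en", "de"],
    "high_res_ocr": True,
    "extract_charts": True,
    "page_separator": "\n---\n",
    "max_pages": 200,
    "priority": "high",  # may be ignored or overwritten depending on the organization tier
    "webhook_configurations": [
        {
            "webhook_url": "https://example.com/hooks/llamacloud",
            "webhook_events": ["parse.success", "parse.error"],
            "webhook_output_format": "json",
            "webhook_headers": {"X-Webhook-Secret": "change-me"},
        }
    ],
}
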
LlmParameters = object { class_name, model_name, system_prompt, 3 more }
class_name: optional string
model_name: optional "GPT_4O" or "GPT_4O_MINI" or "GPT_4_1" or 11 more

The name of the model to use for LLM completions.

Accepts one of the following:
"GPT_4O"
"GPT_4O_MINI"
"GPT_4_1"
"GPT_4_1_NANO"
"GPT_4_1_MINI"
"AZURE_OPENAI_GPT_4O"
"AZURE_OPENAI_GPT_4O_MINI"
"AZURE_OPENAI_GPT_4_1"
"AZURE_OPENAI_GPT_4_1_MINI"
"AZURE_OPENAI_GPT_4_1_NANO"
"CLAUDE_4_5_SONNET"
"BEDROCK_CLAUDE_3_5_SONNET_V1"
"BEDROCK_CLAUDE_3_5_SONNET_V2"
"VERTEX_AI_CLAUDE_3_5_SONNET_V2"
system_prompt: optional string

The system prompt to use for the completion.

maxLength: 3000
temperature: optional number

The temperature value for the model.

use_chain_of_thought_reasoning: optional boolean

Whether to use chain of thought reasoning.

use_citation: optional boolean

Whether to show citations in the response.
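
For instance, an LlmParameters block tuned for citation-aware answers might look like this sketch (the system prompt is illustrative):

# Hypothetical LLM parameters using only the fields documented above.
llm_parameters = {
    "model_name": "GPT_4_1_MINI",
    "temperature": 0.1,
    "system_prompt": "Answer strictly from the retrieved context.",  # maxLength: 3000
    "use_citation": True,
    "use_chain_of_thought_reasoning": False,
}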

ManagedIngestionStatusResponse = object { status, deployment_date, effective_at, 2 more }
status: "NOT_STARTED" or "IN_PROGRESS" or "SUCCESS" or 3 more

Status of the ingestion.

Accepts one of the following:
"NOT_STARTED"
"IN_PROGRESS"
"SUCCESS"
"ERROR"
"PARTIAL_SUCCESS"
"CANCELLED"
deployment_date: optional string

Date of the deployment.

format: date-time
effective_at: optional string

When the status is effective

format: date-time
error: optional array of object { job_id, message, step }

List of errors that occurred during ingestion.

job_id: string

ID of the job that failed.

format: uuid
message: string

Message describing the error that occurred.

step: "MANAGED_INGESTION" or "DATA_SOURCE" or "FILE_UPDATER" or 4 more

The step of the ingestion job that failed.

Accepts one of the following:
"MANAGED_INGESTION"
"DATA_SOURCE"
"FILE_UPDATER"
"PARSE"
"TRANSFORM"
"INGESTION"
"METADATA_UPDATE"
job_id: optional string

ID of the latest job.

format: uuid
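
Combined with the Get Pipeline Status endpoint listed at the top, this response can be polled until ingestion reaches a terminal state. A minimal sketch, reusing the same assumed base URL and auth header as the earlier example:

import os
import time
import requests

# Same assumptions as the earlier sketch: base URL and bearer-token auth.
BASE_URL = "https://api.cloud.llamaindex.ai"
HEADERS = {"Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}"}

TERMINAL_STATUSES = {"SUCCESS", "ERROR", "PARTIAL_SUCCESS", "CANCELLED"}

def wait_for_ingestion(pipeline_id: str, poll_seconds: float = 5.0) -> dict:
    # Poll GET /api/v1/pipelines/{pipeline_id}/status until a terminal status is reached.
    while True:
        resp = requests.get(
            f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/status", headers=HEADERS
        )
        resp.raise_for_status()
        body = resp.json()
        if body["status"] in TERMINAL_STATUSES:
            return body
        time.sleep(poll_seconds)
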
MessageRole = "system" or "developer" or "user" or 5 more

Message role.

Accepts one of the following:
"system"
"developer"
"user"
"assistant"
"function"
"tool"
"chatbot"
"model"
MetadataFilters = object { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters = object { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters { filters, condition }
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
OpenAIEmbedding = object { additional_kwargs, api_base, api_key, 10 more }
additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for OpenAI API.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for OpenAI API.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
OpenAIEmbeddingConfig = object { component, type }
component: optional OpenAIEmbedding { additional_kwargs, api_base, api_key, 10 more }

Configuration for the OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for OpenAI API.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for OpenAI API.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
type: optional "OPENAI_EMBEDDING"

Type of the embedding model.

PageFigureNodeWithScore = object { node, score, class_name }

Page figure metadata with score

node: object { confidence, figure_name, figure_size, 4 more }
confidence: number

The confidence of the figure

maximum: 1
minimum: 0
figure_name: string

The name of the figure

figure_size: number

The size of the figure in bytes

minimum: 0
file_id: string

The ID of the file that the figure was taken from

format: uuid
page_index: number

The index of the page for which the figure is taken (0-indexed)

minimum: 0
is_likely_noise: optional boolean

Whether the figure is likely to be noise

metadata: optional map[unknown]

Metadata for the figure

score: number

The score of the figure node

class_name: optional string
PageScreenshotNodeWithScore = object { node, score, class_name }

Page screenshot metadata with score

node: object { file_id, image_size, page_index, metadata }
file_id: string

The ID of the file that the page screenshot was taken from

format: uuid
image_size: number

The size of the image in bytes

minimum: 0
page_index: number

The index of the page for which the screenshot is taken (0-indexed)

minimum: 0
metadata: optional map[unknown]

Metadata for the screenshot

score: number

The score of the screenshot node

class_name: optional string
Pipeline = object { id, embedding_config, name, 15 more }

Schema for a pipeline.

id: string

Unique identifier

format: uuid
embedding_config: object { component, type } or AzureOpenAIEmbeddingConfig { component, type } or CohereEmbeddingConfig { component, type } or 5 more
Accepts one of the following:
ManagedOpenAIEmbedding = object { component, type }
component: optional object { class_name, embed_batch_size, model_name, num_workers }

Configuration for the Managed OpenAI embedding model.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
model_name: optional "openai-text-embedding-3-small"

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

type: optional "MANAGED_OPENAI_EMBEDDING"

Type of the embedding model.

AzureOpenAIEmbeddingConfig = object { component, type }
component: optional AzureOpenAIEmbedding { additional_kwargs, api_base, api_key, 12 more }

Configuration for the Azure OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for Azure deployment.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for Azure OpenAI API.

azure_deployment: optional string

The Azure deployment to use.

azure_endpoint: optional string

The Azure endpoint to use.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
type: optional "AZURE_EMBEDDING"

Type of the embedding model.

CohereEmbeddingConfig = object { component, type }
component: optional CohereEmbedding { api_key, class_name, embed_batch_size, 5 more }

Configuration for the Cohere embedding model.

api_key: string

The Cohere API key.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
embedding_type: optional string

Embedding type. If not provided float embedding_type is used when needed.

input_type: optional string

Model Input type. If not provided, search_document and search_query are used when needed.

model_name: optional string

The modelId of the Cohere model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

truncate: optional string

Truncation type - START/ END/ NONE

type: optional "COHERE_EMBEDDING"

Type of the embedding model.

GeminiEmbeddingConfig = object { component, type }
component: optional GeminiEmbedding { api_base, api_key, class_name, 6 more }

Configuration for the Gemini embedding model.

api_base: optional string

API base to access the model. Defaults to None.

api_key: optional string

API key to access the model. Defaults to None.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
model_name: optional string

The modelId of the Gemini model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

task_type: optional string

The task for embedding model.

title: optional string

Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.

transport: optional string

Transport to access the model. Defaults to None.

type: optional "GEMINI_EMBEDDING"

Type of the embedding model.

HuggingFaceInferenceAPIEmbeddingConfig = object { component, type }
component: optional HuggingFaceInferenceAPIEmbedding { token, class_name, cookies, 9 more }

Configuration for the HuggingFace Inference API embedding model.

token: optional string or boolean

Hugging Face token. Will default to the locally saved token. Pass token=False if you don’t want to send your token to the server.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = boolean
class_name: optional string
cookies: optional map[string]

Additional cookies to send to the server.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
headers: optional map[string]

Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.

model_name: optional string

Hugging Face model name. If None, the task will be used.

num_workers: optional number

The number of workers to use for async embedding calls.

pooling: optional "cls" or "mean" or "last"

Enum of possible pooling choices with pooling behaviors.

Accepts one of the following:
"cls"
"mean"
"last"
query_instruction: optional string

Instruction to prepend during query embedding.

task: optional string

Optional task to pick Hugging Face's recommended model, used when model_name is left as default of None.

text_instruction: optional string

Instruction to prepend during text embedding.

timeout: optional number

The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.

type: optional "HUGGINGFACE_API_EMBEDDING"

Type of the embedding model.

OpenAIEmbeddingConfig = object { component, type }
component: optional OpenAIEmbedding { additional_kwargs, api_base, api_key, 10 more }

Configuration for the OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for OpenAI API.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for OpenAI API.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
type: optional "OPENAI_EMBEDDING"

Type of the embedding model.

VertexAIEmbeddingConfig = object { component, type }
component: optional VertexTextEmbedding { client_email, location, private_key, 9 more }

Configuration for the VertexAI embedding model.

client_email: string

The client email for the VertexAI credentials.

location: string

The default location to use when making API calls.

private_key: string

The private key for the VertexAI credentials.

private_key_id: string

The private key ID for the VertexAI credentials.

project: string

The default GCP project to use when making Vertex API calls.

token_uri: string

The token URI for the VertexAI credentials.

additional_kwargs: optional map[unknown]

Additional kwargs for the Vertex.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
embed_mode: optional "default" or "classification" or "clustering" or 2 more

The embedding mode to use.

Accepts one of the following:
"default"
"classification"
"clustering"
"similarity"
"retrieval"
model_name: optional string

The modelId of the VertexAI model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

type: optional "VERTEXAI_EMBEDDING"

Type of the embedding model.

BedrockEmbeddingConfig = object { component, type }
component: optional BedrockEmbedding { additional_kwargs, aws_access_key_id, aws_secret_access_key, 9 more }

Configuration for the Bedrock embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the bedrock client.

aws_access_key_id: optional string

AWS Access Key ID to use

aws_secret_access_key: optional string

AWS Secret Access Key to use

aws_session_token: optional string

AWS Session Token to use

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

The maximum number of API retries.

exclusiveMinimum: 0
model_name: optional string

The modelId of the Bedrock model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

profile_name: optional string

The name of aws profile to use. If not given, then the default profile is used.

region_name: optional string

AWS region name to use. Uses region configured in AWS CLI if not passed

timeout: optional number

The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts.

type: optional "BEDROCK_EMBEDDING"

Type of the embedding model.

name: string
project_id: string
config_hash: optional object { embedding_config_hash, parsing_config_hash, transform_config_hash }

Hashes for the configuration of a pipeline.

embedding_config_hash: optional string

Hash of the embedding config.

parsing_config_hash: optional string

Hash of the llama parse parameters.

transform_config_hash: optional string

Hash of the transform config.

created_at: optional string

Creation datetime

format: date-time
data_sink: optional DataSink { id, component, name, 4 more }

Schema for a data sink.

id: string

Unique identifier

format: uuid
component: map[unknown] or CloudPineconeVectorStore { api_key, index_name, class_name, 3 more } or CloudPostgresVectorStore { database, embed_dim, host, 10 more } or 5 more

Component that implements the data sink

Accepts one of the following:
UnionMember0 = map[unknown]
CloudPineconeVectorStore = object { api_key, index_name, class_name, 3 more }

Cloud Pinecone Vector Store.

This class is used to store the configuration for a Pinecone vector store, so that it can be created and used in LlamaCloud.

Args:
api_key (str): API key for authenticating with Pinecone
index_name (str): name of the Pinecone index
namespace (optional[str]): namespace to use in the Pinecone index
insert_kwargs (optional[dict]): additional kwargs to pass during insertion

api_key: string

The API key for authenticating with Pinecone

format: password
index_name: string
class_name: optional string
insert_kwargs: optional map[unknown]
namespace: optional string
supports_nested_metadata_filters: optional true
CloudPostgresVectorStore = object { database, embed_dim, host, 10 more }
database: string
embed_dim: number
host: string
password: string
port: number
schema_name: string
table_name: string
user: string
class_name: optional string
hnsw_settings: optional PgVectorHnswSettings { distance_method, ef_construction, ef_search, 2 more }

HNSW settings for PGVector.

distance_method: optional "l2" or "ip" or "cosine" or 3 more

The distance method to use.

Accepts one of the following:
"l2"
"ip"
"cosine"
"l1"
"hamming"
"jaccard"
ef_construction: optional number

The number of edges to use during the construction phase.

minimum: 1
ef_search: optional number

The number of edges to use during the search phase.

minimum: 1
m: optional number

The number of bi-directional links created for each new element.

minimum: 1
vector_type: optional "vector" or "half_vec" or "bit" or "sparse_vec"

The type of vector to use.

Accepts one of the following:
"vector"
"half_vec"
"bit"
"sparse_vec"
perform_setup: optional boolean
supports_nested_metadata_filters: optional boolean
CloudQdrantVectorStore = object { api_key, collection_name, url, 4 more }

Cloud Qdrant Vector Store.

This class is used to store the configuration for a Qdrant vector store, so that it can be created and used in LlamaCloud.

Args:
collection_name (str): name of the Qdrant collection
url (str): url of the Qdrant instance
api_key (str): API key for authenticating with Qdrant
max_retries (int): maximum number of retries in case of a failure. Defaults to 3
client_kwargs (dict): additional kwargs to pass to the Qdrant client

api_key: string
collection_name: string
url: string
class_name: optional string
client_kwargs: optional map[unknown]
max_retries: optional number
supports_nested_metadata_filters: optional true
CloudAzureAISearchVectorStore = object { search_service_api_key, search_service_endpoint, class_name, 8 more }

Cloud Azure AI Search Vector Store.

search_service_api_key: string
search_service_endpoint: string
class_name: optional string
client_id: optional string
client_secret: optional string
embedding_dimension: optional number
filterable_metadata_field_keys: optional map[unknown]
index_name: optional string
search_service_api_version: optional string
supports_nested_metadata_filters: optional true
tenant_id: optional string

CloudMongoDBAtlasVectorStore = object { mongodb_uri, db_name, collection_name, 2 more }

Cloud MongoDB Atlas Vector Store.

This class is used to store the configuration for a MongoDB Atlas vector store, so that it can be created and used in LlamaCloud.

Args:
mongodb_uri (str): URI for connecting to MongoDB Atlas
db_name (str): name of the MongoDB database
collection_name (str): name of the MongoDB collection
vector_index_name (str): name of the MongoDB Atlas vector index
fulltext_index_name (str): name of the MongoDB Atlas full-text index

CloudMilvusVectorStore = object { uri, token, class_name, 3 more }

Cloud Milvus Vector Store.

uri: string
token: optional string
class_name: optional string
collection_name: optional string
embedding_dimension: optional number
supports_nested_metadata_filters: optional boolean
CloudAstraDBVectorStore = object { token, api_endpoint, collection_name, 4 more }

Cloud AstraDB Vector Store.

This class is used to store the configuration for an AstraDB vector store, so that it can be created and used in LlamaCloud.

Args:
token (str): The Astra DB Application Token to use.
api_endpoint (str): The Astra DB JSON API endpoint for your database.
collection_name (str): Collection name to use. If not existing, it will be created.
embedding_dimension (int): Length of the embedding vectors in use.
keyspace (optional[str]): The keyspace to use. If not provided, 'default_keyspace'

token: string

The Astra DB Application Token to use

format: password
api_endpoint: string

The Astra DB JSON API endpoint for your database

collection_name: string

Collection name to use. If not existing, it will be created

embedding_dimension: number

Length of the embedding vectors in use

class_name: optional string
keyspace: optional string

The keyspace to use. If not provided, 'default_keyspace'

supports_nested_metadata_filters: optional true
name: string

The name of the data sink.

project_id: string
sink_type: "PINECONE" or "POSTGRES" or "QDRANT" or 4 more
Accepts one of the following:
"PINECONE"
"POSTGRES"
"QDRANT"
"AZUREAI_SEARCH"
"MONGODB_ATLAS"
"MILVUS"
"ASTRA_DB"
created_at: optional string

Creation datetime

format: date-time
updated_at: optional string

Update datetime

format: date-time
embedding_model_config: optional object { id, embedding_config, name, 3 more }

Schema for an embedding model config.

id: string

Unique identifier

format: uuid
embedding_config: AzureOpenAIEmbeddingConfig { component, type } or CohereEmbeddingConfig { component, type } or GeminiEmbeddingConfig { component, type } or 4 more

The embedding configuration for the embedding model config.

Accepts one of the following:
AzureOpenAIEmbeddingConfig = object { component, type }
component: optional AzureOpenAIEmbedding { additional_kwargs, api_base, api_key, 12 more }

Configuration for the Azure OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for Azure deployment.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for Azure OpenAI API.

azure_deployment: optional string

The Azure deployment to use.

azure_endpoint: optional string

The Azure endpoint to use.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum: 0
type: optional "AZURE_EMBEDDING"

Type of the embedding model.

CohereEmbeddingConfig = object { component, type }
component: optional CohereEmbedding { api_key, class_name, embed_batch_size, 5 more }

Configuration for the Cohere embedding model.

api_key: string

The Cohere API key.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
embedding_type: optional string

Embedding type. If not provided float embedding_type is used when needed.

input_type: optional string

Model Input type. If not provided, search_document and search_query are used when needed.

model_name: optional string

The modelId of the Cohere model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

truncate: optional string

Truncation type - START/ END/ NONE

type: optional "COHERE_EMBEDDING"

Type of the embedding model.

GeminiEmbeddingConfig = object { component, type }
component: optional GeminiEmbedding { api_base, api_key, class_name, 6 more }

Configuration for the Gemini embedding model.

api_base: optional string

API base to access the model. Defaults to None.

api_key: optional string

API key to access the model. Defaults to None.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
model_name: optional string

The modelId of the Gemini model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

task_type: optional string

The task for embedding model.

title: optional string

Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.

transport: optional string

Transport to access the model. Defaults to None.

type: optional "GEMINI_EMBEDDING"

Type of the embedding model.

HuggingFaceInferenceAPIEmbeddingConfig = object { component, type }
component: optional HuggingFaceInferenceAPIEmbedding { token, class_name, cookies, 9 more }

Configuration for the HuggingFace Inference API embedding model.

token: optional string or boolean

Hugging Face token. Will default to the locally saved token. Pass token=False if you don’t want to send your token to the server.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = boolean
class_name: optional string
cookies: optional map[string]

Additional cookies to send to the server.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
headers: optional map[string]

Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.

model_name: optional string

Hugging Face model name. If None, the task will be used.

num_workers: optional number

The number of workers to use for async embedding calls.

pooling: optional "cls" or "mean" or "last"

Enum of possible pooling choices with pooling behaviors.

Accepts one of the following:
"cls"
"mean"
"last"
query_instruction: optional string

Instruction to prepend during query embedding.

task: optional string

Optional task to pick Hugging Face's recommended model, used when model_name is left as default of None.

text_instruction: optional string

Instruction to prepend during text embedding.

timeout: optional number

The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.

type: optional "HUGGINGFACE_API_EMBEDDING"

Type of the embedding model.

OpenAIEmbeddingConfig = object { component, type }
component: optional OpenAIEmbedding { additional_kwargs, api_base, api_key, 10 more }

Configuration for the OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for OpenAI API.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for OpenAI API.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum: 2048
exclusiveMinimum: 0
max_retries: optional number

Maximum number of retries.

minimum: 0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum0
type: optional "OPENAI_EMBEDDING"

Type of the embedding model.

VertexAIEmbeddingConfig = object { component, type }
component: optional VertexTextEmbedding { client_email, location, private_key, 9 more }

Configuration for the VertexAI embedding model.

client_email: string

The client email for the VertexAI credentials.

location: string

The default location to use when making API calls.

private_key: string

The private key for the VertexAI credentials.

private_key_id: string

The private key ID for the VertexAI credentials.

project: string

The default GCP project to use when making Vertex API calls.

token_uri: string

The token URI for the VertexAI credentials.

additional_kwargs: optional map[unknown]

Additional kwargs for the Vertex.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
embed_mode: optional "default" or "classification" or "clustering" or 2 more

The embedding mode to use.

Accepts one of the following:
"default"
"classification"
"clustering"
"similarity"
"retrieval"
model_name: optional string

The modelId of the VertexAI model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

type: optional "VERTEXAI_EMBEDDING"

Type of the embedding model.

BedrockEmbeddingConfig = object { component, type }
component: optional BedrockEmbedding { additional_kwargs, aws_access_key_id, aws_secret_access_key, 9 more }

Configuration for the Bedrock embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the bedrock client.

aws_access_key_id: optional string

AWS Access Key ID to use

aws_secret_access_key: optional string

AWS Secret Access Key to use

aws_session_token: optional string

AWS Session Token to use

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
max_retries: optional number

The maximum number of API retries.

exclusiveMinimum0
model_name: optional string

The modelId of the Bedrock model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

profile_name: optional string

The name of the AWS profile to use. If not given, the default profile is used.

region_name: optional string

AWS region name to use. If not passed, the region configured in the AWS CLI is used.

timeout: optional number

The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts.

type: optional "BEDROCK_EMBEDDING"

Type of the embedding model.
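
The Bedrock variant above, like the others, can be expressed as a plain dict. A hedged sketch follows: the model ID and region are placeholders, and credentials are read from the standard AWS environment variables.

import os

# Minimal sketch of a BedrockEmbeddingConfig payload.
# Model ID and region are placeholders; credentials come from the usual AWS env vars.
bedrock_embedding_config = {
    "type": "BEDROCK_EMBEDDING",
    "component": {
        "model_name": "amazon.titan-embed-text-v2:0",   # Bedrock modelId (placeholder)
        "region_name": "us-east-1",
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
        "embed_batch_size": 10,                         # must be > 0 and <= 2048
    },
}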

name: string

The name of the embedding model config.

project_id: string
created_at: optional string

Creation datetime

formatdate-time
updated_at: optional string

Update datetime

formatdate-time
embedding_model_config_id: optional string

The ID of the EmbeddingModelConfig this pipeline is using.

formatuuid
llama_parse_parameters: optional LlamaParseParameters { adaptive_long_table, aggressive_table_extraction, annotate_links, 115 more }

Settings that can be configured for how to use LlamaParse to parse files within a LlamaCloud pipeline.

adaptive_long_table: optional boolean
aggressive_table_extraction: optional boolean
auto_mode: optional boolean
auto_mode_configuration_json: optional string
auto_mode_trigger_on_image_in_page: optional boolean
auto_mode_trigger_on_regexp_in_page: optional string
auto_mode_trigger_on_table_in_page: optional boolean
auto_mode_trigger_on_text_in_page: optional string
azure_openai_api_version: optional string
azure_openai_deployment_name: optional string
azure_openai_endpoint: optional string
azure_openai_key: optional string
bbox_bottom: optional number
bbox_left: optional number
bbox_right: optional number
bbox_top: optional number
bounding_box: optional string
compact_markdown_table: optional boolean
complemental_formatting_instruction: optional string
content_guideline_instruction: optional string
continuous_mode: optional boolean
disable_image_extraction: optional boolean
disable_ocr: optional boolean
disable_reconstruction: optional boolean
do_not_cache: optional boolean
do_not_unroll_columns: optional boolean
enable_cost_optimizer: optional boolean
extract_charts: optional boolean
extract_layout: optional boolean
extract_printed_page_number: optional boolean
fast_mode: optional boolean
formatting_instruction: optional string
gpt4o_api_key: optional string
gpt4o_mode: optional boolean
guess_xlsx_sheet_name: optional boolean
hide_footers: optional boolean
hide_headers: optional boolean
high_res_ocr: optional boolean
html_make_all_elements_visible: optional boolean
html_remove_fixed_elements: optional boolean
html_remove_navigation_elements: optional boolean
http_proxy: optional string
ignore_document_elements_for_layout_detection: optional boolean
images_to_save: optional array of "screenshot" or "embedded" or "layout"
Accepts one of the following:
"screenshot"
"embedded"
"layout"
inline_images_in_markdown: optional boolean
input_s3_path: optional string
input_s3_region: optional string
input_url: optional string
internal_is_screenshot_job: optional boolean
invalidate_cache: optional boolean
is_formatting_instruction: optional boolean
job_timeout_extra_time_per_page_in_seconds: optional number
job_timeout_in_seconds: optional number
keep_page_separator_when_merging_tables: optional boolean
languages: optional array of ParsingLanguages
Accepts one of the following:
"af"
"az"
"bs"
"cs"
"cy"
"da"
"de"
"en"
"es"
"et"
"fr"
"ga"
"hr"
"hu"
"id"
"is"
"it"
"ku"
"la"
"lt"
"lv"
"mi"
"ms"
"mt"
"nl"
"no"
"oc"
"pi"
"pl"
"pt"
"ro"
"rs_latin"
"sk"
"sl"
"sq"
"sv"
"sw"
"tl"
"tr"
"uz"
"vi"
"ar"
"fa"
"ug"
"ur"
"bn"
"as"
"mni"
"ru"
"rs_cyrillic"
"be"
"bg"
"uk"
"mn"
"abq"
"ady"
"kbd"
"ava"
"dar"
"inh"
"che"
"lbe"
"lez"
"tab"
"tjk"
"hi"
"mr"
"ne"
"bh"
"mai"
"ang"
"bho"
"mah"
"sck"
"new"
"gom"
"sa"
"bgc"
"th"
"ch_sim"
"ch_tra"
"ja"
"ko"
"ta"
"te"
"kn"
layout_aware: optional boolean
line_level_bounding_box: optional boolean
markdown_table_multiline_header_separator: optional string
max_pages: optional number
max_pages_enforced: optional number
merge_tables_across_pages_in_markdown: optional boolean
model: optional string
outlined_table_extraction: optional boolean
output_pdf_of_document: optional boolean
output_s3_path_prefix: optional string
output_s3_region: optional string
output_tables_as_HTML: optional boolean
page_error_tolerance: optional number
page_header_prefix: optional string
page_header_suffix: optional string
page_prefix: optional string
page_separator: optional string
page_suffix: optional string
parse_mode: optional ParsingMode

Enum for representing the mode of parsing to be used.

Accepts one of the following:
"parse_page_without_llm"
"parse_page_with_llm"
"parse_page_with_lvm"
"parse_page_with_agent"
"parse_page_with_layout_agent"
"parse_document_with_llm"
"parse_document_with_lvm"
"parse_document_with_agent"
parsing_instruction: optional string
precise_bounding_box: optional boolean
premium_mode: optional boolean
presentation_out_of_bounds_content: optional boolean
presentation_skip_embedded_data: optional boolean
preserve_layout_alignment_across_pages: optional boolean
preserve_very_small_text: optional boolean
preset: optional string
priority: optional "low" or "medium" or "high" or "critical"

The priority for the request. This field may be ignored or overwritten depending on the organization tier.

Accepts one of the following:
"low"
"medium"
"high"
"critical"
project_id: optional string
remove_hidden_text: optional boolean
replace_failed_page_mode: optional FailPageMode

Enum for representing the different available page error handling modes.

Accepts one of the following:
"raw_text"
"blank_page"
"error_message"
replace_failed_page_with_error_message_prefix: optional string
replace_failed_page_with_error_message_suffix: optional string
save_images: optional boolean
skip_diagonal_text: optional boolean
specialized_chart_parsing_agentic: optional boolean
specialized_chart_parsing_efficient: optional boolean
specialized_chart_parsing_plus: optional boolean
specialized_image_parsing: optional boolean
spreadsheet_extract_sub_tables: optional boolean
spreadsheet_force_formula_computation: optional boolean
strict_mode_buggy_font: optional boolean
strict_mode_image_extraction: optional boolean
strict_mode_image_ocr: optional boolean
strict_mode_reconstruction: optional boolean
structured_output: optional boolean
structured_output_json_schema: optional string
structured_output_json_schema_name: optional string
system_prompt: optional string
system_prompt_append: optional string
take_screenshot: optional boolean
target_pages: optional string
tier: optional string
use_vendor_multimodal_model: optional boolean
user_prompt: optional string
vendor_multimodal_api_key: optional string
vendor_multimodal_model_name: optional string
version: optional string
webhook_configurations: optional array of WebhookConfiguration { webhook_events, webhook_headers, webhook_output_format, webhook_url }

The outbound webhook configurations

webhook_events: optional array of "extract.pending" or "extract.success" or "extract.error" or 13 more

List of event names to subscribe to

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers: optional map[string]

Custom HTTP headers to include with webhook requests.

webhook_output_format: optional string

The output format to use for the webhook. Defaults to string if none is supplied. Currently supported values: string, json

webhook_url: optional string

The URL to send webhook notifications to.

webhook_url: optional string
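
To make the long parameter list above concrete, the sketch below sets a parse mode, a language list, and one outbound webhook subscribed to parse events. Every value is an illustrative placeholder, not a recommendation.

# Sketch of a llama_parse_parameters object with one webhook configuration.
# All values are illustrative; consult the field list above for the full set.
llama_parse_parameters = {
    "parse_mode": "parse_page_with_agent",
    "languages": ["en", "de"],
    "extract_charts": True,
    "page_separator": "\n---\n",
    "priority": "high",   # may be ignored or overwritten depending on organization tier
    "webhook_configurations": [
        {
            "webhook_url": "https://example.com/llamacloud/webhook",
            "webhook_events": ["parse.success", "parse.error"],
            "webhook_headers": {"X-Signature": "replace-me"},
            "webhook_output_format": "json",
        }
    ],
}
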
managed_pipeline_id: optional string

The ID of the ManagedPipeline this playground pipeline is linked to.

formatuuid
metadata_config: optional PipelineMetadataConfig { excluded_embed_metadata_keys, excluded_llm_metadata_keys }

Metadata configuration for the pipeline.

excluded_embed_metadata_keys: optional array of string

List of metadata keys to exclude from embeddings

excluded_llm_metadata_keys: optional array of string

List of metadata keys to exclude from LLM during retrieval

pipeline_type: optional PipelineType

Type of pipeline. Either PLAYGROUND or MANAGED.

Accepts one of the following:
"PLAYGROUND"
"MANAGED"
preset_retrieval_parameters: optional PresetRetrievalParams { alpha, class_name, dense_similarity_cutoff, 11 more }

Preset retrieval parameters for the pipeline.

alpha: optional number

Alpha value for hybrid retrieval to determine the weights between dense and sparse retrieval. 0 is sparse retrieval and 1 is dense retrieval.

maximum1
minimum0
class_name: optional string
dense_similarity_cutoff: optional number

Minimum similarity score with respect to the query for retrieval.

maximum1
minimum0
dense_similarity_top_k: optional number

Number of nodes for dense retrieval.

maximum100
minimum1
enable_reranking: optional boolean

Enable reranking for retrieval

files_top_k: optional number

Number of files to retrieve (only for retrieval mode files_via_metadata and files_via_content).

maximum5
minimum1
rerank_top_n: optional number

Number of reranked nodes for returning.

maximum100
minimum1
retrieval_mode: optional RetrievalMode

The retrieval mode for the query.

Accepts one of the following:
"chunks"
"files_via_metadata"
"files_via_content"
"auto_routed"
retrieve_image_nodes: optional boolean (Deprecated)

Whether to retrieve image nodes.

retrieve_page_figure_nodes: optional boolean

Whether to retrieve page figure nodes.

retrieve_page_screenshot_nodes: optional boolean

Whether to retrieve page screenshot nodes.

search_filters: optional MetadataFilters { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters = object { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters { filters, condition }
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
search_filters_inference_schema: optional map[map[unknown] or array of unknown or string or 2 more]

JSON Schema that will be used to infer search_filters. Omit or leave as null to skip inference.

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
sparse_similarity_top_k: optional number

Number of nodes for sparse retrieval.

maximum100
minimum1
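
The retrieval parameters and the nested metadata filters above compose as in the following sketch: hybrid retrieval weighted toward dense results, reranking enabled, and a filter group requiring a hypothetical author key to match and a year key of at least 2020.

# Sketch of preset_retrieval_parameters with nested MetadataFilters.
# The metadata keys ("author", "year") and all values are hypothetical examples.
preset_retrieval_parameters = {
    "retrieval_mode": "chunks",
    "alpha": 0.7,                   # 0 = sparse only, 1 = dense only
    "dense_similarity_top_k": 10,
    "sparse_similarity_top_k": 10,
    "enable_reranking": True,
    "rerank_top_n": 5,
    "search_filters": {
        "condition": "and",
        "filters": [
            {"key": "author", "value": "Jane Doe", "operator": "=="},
            {"key": "year", "value": 2020, "operator": ">="},
        ],
    },
}
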
sparse_model_config: optional SparseModelConfig { class_name, model_type }

Configuration for sparse embedding models used in hybrid search.

This allows users to choose between Splade and BM25 models for sparse retrieval in managed data sinks.

class_name: optional string
model_type: optional "splade" or "bm25" or "auto"

The sparse model type to use. 'bm25' uses Qdrant's FastEmbed BM25 model (default for new pipelines), 'splade' uses HuggingFace Splade model, 'auto' selects based on deployment mode (BYOC uses term frequency, Cloud uses Splade).

Accepts one of the following:
"splade"
"bm25"
"auto"
status: optional "CREATED" or "DELETING"

Status of the pipeline.

Accepts one of the following:
"CREATED"
"DELETING"
transform_config: optional AutoTransformConfig { chunk_overlap, chunk_size, mode } or AdvancedModeTransformConfig { chunking_config, mode, segmentation_config }

Configuration for the transformation.

Accepts one of the following:
AutoTransformConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number

Chunk overlap for the transformation.

chunk_size: optional number

Chunk size for the transformation.

exclusiveMinimum0
mode: optional "auto"
AdvancedModeTransformConfig = object { chunking_config, mode, segmentation_config }
chunking_config: optional object { mode } or object { chunk_overlap, chunk_size, mode } or object { chunk_overlap, chunk_size, mode, separator } or 2 more

Configuration for the chunking.

Accepts one of the following:
NoneChunkingConfig = object { mode }
mode: optional "none"
CharacterChunkingConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "character"
TokenChunkingConfig = object { chunk_overlap, chunk_size, mode, separator }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "token"
separator: optional string
SentenceChunkingConfig = object { chunk_overlap, chunk_size, mode, 2 more }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "sentence"
paragraph_separator: optional string
separator: optional string
SemanticChunkingConfig = object { breakpoint_percentile_threshold, buffer_size, mode }
breakpoint_percentile_threshold: optional number
buffer_size: optional number
mode: optional "semantic"
mode: optional "advanced"
segmentation_config: optional object { mode } or object { mode, page_separator } or object { mode }

Configuration for the segmentation.

Accepts one of the following:
NoneSegmentationConfig = object { mode }
mode: optional "none"
PageSegmentationConfig = object { mode, page_separator }
mode: optional "page"
page_separator: optional string
ElementSegmentationConfig = object { mode }
mode: optional "element"
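
The two transform_config variants above trade simplicity for control. The sketch below shows both: the auto mode with only size and overlap, and the advanced mode combining sentence chunking with page segmentation. All numbers and separators are placeholders.

# Advanced transform_config: sentence chunking plus page segmentation.
advanced_transform_config = {
    "mode": "advanced",
    "chunking_config": {
        "mode": "sentence",
        "chunk_size": 1024,
        "chunk_overlap": 200,
    },
    "segmentation_config": {
        "mode": "page",
        "page_separator": "\n---\n",
    },
}

# Auto transform_config: only chunk size and overlap are tunable.
auto_transform_config = {"mode": "auto", "chunk_size": 1024, "chunk_overlap": 100}
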
updated_at: optional string

Update datetime

formatdate-time
PipelineCreate = object { name, data_sink, data_sink_id, 10 more }

Schema for creating a pipeline.

name: string
data_sink: optional DataSinkCreate { component, name, sink_type }

Schema for creating a data sink.

component: map[unknown] or CloudPineconeVectorStore { api_key, index_name, class_name, 3 more } or CloudPostgresVectorStore { database, embed_dim, host, 10 more } or 5 more

Component that implements the data sink

Accepts one of the following:
UnionMember0 = map[unknown]
CloudPineconeVectorStore = object { api_key, index_name, class_name, 3 more }

Cloud Pinecone Vector Store.

This class is used to store the configuration for a Pinecone vector store, so that it can be created and used in LlamaCloud.

Args:
api_key (str): API key for authenticating with Pinecone
index_name (str): name of the Pinecone index
namespace (optional[str]): namespace to use in the Pinecone index
insert_kwargs (optional[dict]): additional kwargs to pass during insertion

api_key: string

The API key for authenticating with Pinecone

formatpassword
index_name: string
class_name: optional string
insert_kwargs: optional map[unknown]
namespace: optional string
supports_nested_metadata_filters: optional true
CloudPostgresVectorStore = object { database, embed_dim, host, 10 more }
database: string
embed_dim: number
host: string
password: string
port: number
schema_name: string
table_name: string
user: string
class_name: optional string
hnsw_settings: optional PgVectorHnswSettings { distance_method, ef_construction, ef_search, 2 more }

HNSW settings for PGVector.

distance_method: optional "l2" or "ip" or "cosine" or 3 more

The distance method to use.

Accepts one of the following:
"l2"
"ip"
"cosine"
"l1"
"hamming"
"jaccard"
ef_construction: optional number

The number of edges to use during the construction phase.

minimum1
ef_search: optional number

The number of edges to use during the search phase.

minimum1
m: optional number

The number of bi-directional links created for each new element.

minimum1
vector_type: optional "vector" or "half_vec" or "bit" or "sparse_vec"

The type of vector to use.

Accepts one of the following:
"vector"
"half_vec"
"bit"
"sparse_vec"
perform_setup: optional boolean
supports_nested_metadata_filters: optional boolean
CloudQdrantVectorStore = object { api_key, collection_name, url, 4 more }

Cloud Qdrant Vector Store.

This class is used to store the configuration for a Qdrant vector store, so that it can be created and used in LlamaCloud.

Args:
collection_name (str): name of the Qdrant collection
url (str): url of the Qdrant instance
api_key (str): API key for authenticating with Qdrant
max_retries (int): maximum number of retries in case of a failure. Defaults to 3
client_kwargs (dict): additional kwargs to pass to the Qdrant client

api_key: string
collection_name: string
url: string
class_name: optional string
client_kwargs: optional map[unknown]
max_retries: optional number
supports_nested_metadata_filters: optional true
CloudAzureAISearchVectorStore = object { search_service_api_key, search_service_endpoint, class_name, 8 more }

Cloud Azure AI Search Vector Store.

search_service_api_key: string
search_service_endpoint: string
class_name: optional string
client_id: optional string
client_secret: optional string
embedding_dimension: optional number
filterable_metadata_field_keys: optional map[unknown]
index_name: optional string
search_service_api_version: optional string
supports_nested_metadata_filters: optional true
tenant_id: optional string

Cloud MongoDB Atlas Vector Store.

This class is used to store the configuration for a MongoDB Atlas vector store, so that it can be created and used in LlamaCloud.

Args:
mongodb_uri (str): URI for connecting to MongoDB Atlas
db_name (str): name of the MongoDB database
collection_name (str): name of the MongoDB collection
vector_index_name (str): name of the MongoDB Atlas vector index
fulltext_index_name (str): name of the MongoDB Atlas full-text index

CloudMilvusVectorStore = object { uri, token, class_name, 3 more }

Cloud Milvus Vector Store.

uri: string
token: optional string
class_name: optional string
collection_name: optional string
embedding_dimension: optional number
supports_nested_metadata_filters: optional boolean
CloudAstraDBVectorStore = object { token, api_endpoint, collection_name, 4 more }

Cloud AstraDB Vector Store.

This class is used to store the configuration for an AstraDB vector store, so that it can be created and used in LlamaCloud.

Args:
token (str): The Astra DB Application Token to use.
api_endpoint (str): The Astra DB JSON API endpoint for your database.
collection_name (str): Collection name to use. If not existing, it will be created.
embedding_dimension (int): Length of the embedding vectors in use.
keyspace (optional[str]): The keyspace to use. If not provided, 'default_keyspace'

token: string

The Astra DB Application Token to use

formatpassword
api_endpoint: string

The Astra DB JSON API endpoint for your database

collection_name: string

Collection name to use. If not existing, it will be created

embedding_dimension: number

Length of the embedding vectors in use

class_name: optional string
keyspace: optional string

The keyspace to use. If not provided, 'default_keyspace'

supports_nested_metadata_filters: optional true
name: string

The name of the data sink.

sink_type: "PINECONE" or "POSTGRES" or "QDRANT" or 4 more
Accepts one of the following:
"PINECONE"
"POSTGRES"
"QDRANT"
"AZUREAI_SEARCH"
"MONGODB_ATLAS"
"MILVUS"
"ASTRA_DB"
data_sink_id: optional string

Data sink ID. When provided instead of data_sink, the data sink will be looked up by ID.

formatuuid
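
A pipeline can either inline the full data sink definition or point at an existing sink by ID. The sketch below shows a DataSinkCreate with a Qdrant component; the sink name, URL, collection name, and key are placeholders, as is the commented-out data_sink_id alternative.

import os

# Sketch of a DataSinkCreate payload using a Qdrant vector store component.
data_sink = {
    "name": "my-qdrant-sink",                          # placeholder sink name
    "sink_type": "QDRANT",
    "component": {
        "collection_name": "pipeline-chunks",
        "url": "https://my-qdrant-instance.example.com",
        "api_key": os.environ.get("QDRANT_API_KEY"),   # assumed env var name
    },
}

# Alternatively, reference an existing sink by ID instead of inlining the component:
# {"data_sink_id": "00000000-0000-0000-0000-000000000000"}
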
embedding_config: optional AzureOpenAIEmbeddingConfig { component, type } or CohereEmbeddingConfig { component, type } or GeminiEmbeddingConfig { component, type } or 4 more
Accepts one of the following:
AzureOpenAIEmbeddingConfig = object { component, type }
component: optional AzureOpenAIEmbedding { additional_kwargs, api_base, api_key, 12 more }

Configuration for the Azure OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for Azure deployment.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for Azure OpenAI API.

azure_deployment: optional string

The Azure deployment to use.

azure_endpoint: optional string

The Azure endpoint to use.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
max_retries: optional number

Maximum number of retries.

minimum0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum0
type: optional "AZURE_EMBEDDING"

Type of the embedding model.

CohereEmbeddingConfig = object { component, type }
component: optional CohereEmbedding { api_key, class_name, embed_batch_size, 5 more }

Configuration for the Cohere embedding model.

api_key: string

The Cohere API key.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
embedding_type: optional string

Embedding type. If not provided, the float embedding type is used when needed.

input_type: optional string

Model Input type. If not provided, search_document and search_query are used when needed.

model_name: optional string

The modelId of the Cohere model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

truncate: optional string

Truncation type: START, END, or NONE.

type: optional "COHERE_EMBEDDING"

Type of the embedding model.

GeminiEmbeddingConfig = object { component, type }
component: optional GeminiEmbedding { api_base, api_key, class_name, 6 more }

Configuration for the Gemini embedding model.

api_base: optional string

API base to access the model. Defaults to None.

api_key: optional string

API key to access the model. Defaults to None.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
model_name: optional string

The modelId of the Gemini model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

task_type: optional string

The task for the embedding model.

title: optional string

Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.

transport: optional string

Transport to access the model. Defaults to None.

type: optional "GEMINI_EMBEDDING"

Type of the embedding model.

HuggingFaceInferenceAPIEmbeddingConfig = object { component, type }
component: optional HuggingFaceInferenceAPIEmbedding { token, class_name, cookies, 9 more }

Configuration for the HuggingFace Inference API embedding model.

token: optional string or boolean

Hugging Face token. Will default to the locally saved token. Pass token=False if you don’t want to send your token to the server.

Accepts one of the following:
UnionMember0 = string
UnionMember1 = boolean
class_name: optional string
cookies: optional map[string]

Additional cookies to send to the server.

embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
headers: optional map[string]

Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.

model_name: optional string

Hugging Face model name. If None, the task will be used.

num_workers: optional number

The number of workers to use for async embedding calls.

pooling: optional "cls" or "mean" or "last"

Enum of possible pooling choices with pooling behaviors.

Accepts one of the following:
"cls"
"mean"
"last"
query_instruction: optional string

Instruction to prepend during query embedding.

task: optional string

Optional task to pick Hugging Face's recommended model, used when model_name is left as default of None.

text_instruction: optional string

Instruction to prepend during text embedding.

timeout: optional number

The maximum number of seconds to wait for a response from the server. Loading a new model in Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.

type: optional "HUGGINGFACE_API_EMBEDDING"

Type of the embedding model.

OpenAIEmbeddingConfig = object { component, type }
component: optional OpenAIEmbedding { additional_kwargs, api_base, api_key, 10 more }

Configuration for the OpenAI embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the OpenAI API.

api_base: optional string

The base URL for OpenAI API.

api_key: optional string

The OpenAI API key.

api_version: optional string

The version for OpenAI API.

class_name: optional string
default_headers: optional map[string]

The default headers for API requests.

dimensions: optional number

The number of dimensions on the output embedding vectors. Works only with v3 embedding models.

embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
max_retries: optional number

Maximum number of retries.

minimum0
model_name: optional string

The name of the OpenAI embedding model.

num_workers: optional number

The number of workers to use for async embedding calls.

reuse_client: optional boolean

Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability.

timeout: optional number

Timeout for each request.

minimum0
type: optional "OPENAI_EMBEDDING"

Type of the embedding model.

VertexAIEmbeddingConfig = object { component, type }
component: optional VertexTextEmbedding { client_email, location, private_key, 9 more }

Configuration for the VertexAI embedding model.

client_email: string

The client email for the VertexAI credentials.

location: string

The default location to use when making API calls.

private_key: string

The private key for the VertexAI credentials.

private_key_id: string

The private key ID for the VertexAI credentials.

project: string

The default GCP project to use when making Vertex API calls.

token_uri: string

The token URI for the VertexAI credentials.

additional_kwargs: optional map[unknown]

Additional kwargs for the Vertex.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
embed_mode: optional "default" or "classification" or "clustering" or 2 more

The embedding mode to use.

Accepts one of the following:
"default"
"classification"
"clustering"
"similarity"
"retrieval"
model_name: optional string

The modelId of the VertexAI model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

type: optional "VERTEXAI_EMBEDDING"

Type of the embedding model.

BedrockEmbeddingConfig = object { component, type }
component: optional BedrockEmbedding { additional_kwargs, aws_access_key_id, aws_secret_access_key, 9 more }

Configuration for the Bedrock embedding model.

additional_kwargs: optional map[unknown]

Additional kwargs for the bedrock client.

aws_access_key_id: optional string

AWS Access Key ID to use

aws_secret_access_key: optional string

AWS Secret Access Key to use

aws_session_token: optional string

AWS Session Token to use

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
max_retries: optional number

The maximum number of API retries.

exclusiveMinimum0
model_name: optional string

The modelId of the Bedrock model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

profile_name: optional string

The name of the AWS profile to use. If not given, the default profile is used.

region_name: optional string

AWS region name to use. If not passed, the region configured in the AWS CLI is used.

timeout: optional number

The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts.

type: optional "BEDROCK_EMBEDDING"

Type of the embedding model.

embedding_model_config_id: optional string

Embedding model config ID. When provided instead of embedding_config, the embedding model config will be looked up by ID.

formatuuid
llama_parse_parameters: optional LlamaParseParameters { adaptive_long_table, aggressive_table_extraction, annotate_links, 115 more }

Settings that can be configured for how to use LlamaParse to parse files within a LlamaCloud pipeline.

adaptive_long_table: optional boolean
aggressive_table_extraction: optional boolean
auto_mode: optional boolean
auto_mode_configuration_json: optional string
auto_mode_trigger_on_image_in_page: optional boolean
auto_mode_trigger_on_regexp_in_page: optional string
auto_mode_trigger_on_table_in_page: optional boolean
auto_mode_trigger_on_text_in_page: optional string
azure_openai_api_version: optional string
azure_openai_deployment_name: optional string
azure_openai_endpoint: optional string
azure_openai_key: optional string
bbox_bottom: optional number
bbox_left: optional number
bbox_right: optional number
bbox_top: optional number
bounding_box: optional string
compact_markdown_table: optional boolean
complemental_formatting_instruction: optional string
content_guideline_instruction: optional string
continuous_mode: optional boolean
disable_image_extraction: optional boolean
disable_ocr: optional boolean
disable_reconstruction: optional boolean
do_not_cache: optional boolean
do_not_unroll_columns: optional boolean
enable_cost_optimizer: optional boolean
extract_charts: optional boolean
extract_layout: optional boolean
extract_printed_page_number: optional boolean
fast_mode: optional boolean
formatting_instruction: optional string
gpt4o_api_key: optional string
gpt4o_mode: optional boolean
guess_xlsx_sheet_name: optional boolean
hide_footers: optional boolean
hide_headers: optional boolean
high_res_ocr: optional boolean
html_make_all_elements_visible: optional boolean
html_remove_fixed_elements: optional boolean
html_remove_navigation_elements: optional boolean
http_proxy: optional string
ignore_document_elements_for_layout_detection: optional boolean
images_to_save: optional array of "screenshot" or "embedded" or "layout"
Accepts one of the following:
"screenshot"
"embedded"
"layout"
inline_images_in_markdown: optional boolean
input_s3_path: optional string
input_s3_region: optional string
input_url: optional string
internal_is_screenshot_job: optional boolean
invalidate_cache: optional boolean
is_formatting_instruction: optional boolean
job_timeout_extra_time_per_page_in_seconds: optional number
job_timeout_in_seconds: optional number
keep_page_separator_when_merging_tables: optional boolean
languages: optional array of ParsingLanguages
Accepts one of the following:
"af"
"az"
"bs"
"cs"
"cy"
"da"
"de"
"en"
"es"
"et"
"fr"
"ga"
"hr"
"hu"
"id"
"is"
"it"
"ku"
"la"
"lt"
"lv"
"mi"
"ms"
"mt"
"nl"
"no"
"oc"
"pi"
"pl"
"pt"
"ro"
"rs_latin"
"sk"
"sl"
"sq"
"sv"
"sw"
"tl"
"tr"
"uz"
"vi"
"ar"
"fa"
"ug"
"ur"
"bn"
"as"
"mni"
"ru"
"rs_cyrillic"
"be"
"bg"
"uk"
"mn"
"abq"
"ady"
"kbd"
"ava"
"dar"
"inh"
"che"
"lbe"
"lez"
"tab"
"tjk"
"hi"
"mr"
"ne"
"bh"
"mai"
"ang"
"bho"
"mah"
"sck"
"new"
"gom"
"sa"
"bgc"
"th"
"ch_sim"
"ch_tra"
"ja"
"ko"
"ta"
"te"
"kn"
layout_aware: optional boolean
line_level_bounding_box: optional boolean
markdown_table_multiline_header_separator: optional string
max_pages: optional number
max_pages_enforced: optional number
merge_tables_across_pages_in_markdown: optional boolean
model: optional string
outlined_table_extraction: optional boolean
output_pdf_of_document: optional boolean
output_s3_path_prefix: optional string
output_s3_region: optional string
output_tables_as_HTML: optional boolean
page_error_tolerance: optional number
page_header_prefix: optional string
page_header_suffix: optional string
page_prefix: optional string
page_separator: optional string
page_suffix: optional string
parse_mode: optional ParsingMode

Enum for representing the mode of parsing to be used.

Accepts one of the following:
"parse_page_without_llm"
"parse_page_with_llm"
"parse_page_with_lvm"
"parse_page_with_agent"
"parse_page_with_layout_agent"
"parse_document_with_llm"
"parse_document_with_lvm"
"parse_document_with_agent"
parsing_instruction: optional string
precise_bounding_box: optional boolean
premium_mode: optional boolean
presentation_out_of_bounds_content: optional boolean
presentation_skip_embedded_data: optional boolean
preserve_layout_alignment_across_pages: optional boolean
preserve_very_small_text: optional boolean
preset: optional string
priority: optional "low" or "medium" or "high" or "critical"

The priority for the request. This field may be ignored or overwritten depending on the organization tier.

Accepts one of the following:
"low"
"medium"
"high"
"critical"
project_id: optional string
remove_hidden_text: optional boolean
replace_failed_page_mode: optional FailPageMode

Enum for representing the different available page error handling modes.

Accepts one of the following:
"raw_text"
"blank_page"
"error_message"
replace_failed_page_with_error_message_prefix: optional string
replace_failed_page_with_error_message_suffix: optional string
save_images: optional boolean
skip_diagonal_text: optional boolean
specialized_chart_parsing_agentic: optional boolean
specialized_chart_parsing_efficient: optional boolean
specialized_chart_parsing_plus: optional boolean
specialized_image_parsing: optional boolean
spreadsheet_extract_sub_tables: optional boolean
spreadsheet_force_formula_computation: optional boolean
strict_mode_buggy_font: optional boolean
strict_mode_image_extraction: optional boolean
strict_mode_image_ocr: optional boolean
strict_mode_reconstruction: optional boolean
structured_output: optional boolean
structured_output_json_schema: optional string
structured_output_json_schema_name: optional string
system_prompt: optional string
system_prompt_append: optional string
take_screenshot: optional boolean
target_pages: optional string
tier: optional string
use_vendor_multimodal_model: optional boolean
user_prompt: optional string
vendor_multimodal_api_key: optional string
vendor_multimodal_model_name: optional string
version: optional string
webhook_configurations: optional array of WebhookConfiguration { webhook_events, webhook_headers, webhook_output_format, webhook_url }

The outbound webhook configurations

webhook_events: optional array of "extract.pending" or "extract.success" or "extract.error" or 13 more

List of event names to subscribe to

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers: optional map[string]

Custom HTTP headers to include with webhook requests.

webhook_output_format: optional string

The output format to use for the webhook. Defaults to string if none is supplied. Currently supported values: string, json

webhook_url: optional string

The URL to send webhook notifications to.

webhook_url: optional string
managed_pipeline_id: optional string

The ID of the ManagedPipeline this playground pipeline is linked to.

formatuuid
metadata_config: optional PipelineMetadataConfig { excluded_embed_metadata_keys, excluded_llm_metadata_keys }

Metadata configuration for the pipeline.

excluded_embed_metadata_keys: optional array of string

List of metadata keys to exclude from embeddings

excluded_llm_metadata_keys: optional array of string

List of metadata keys to exclude from LLM during retrieval

pipeline_type: optional PipelineType

Type of pipeline. Either PLAYGROUND or MANAGED.

Accepts one of the following:
"PLAYGROUND"
"MANAGED"
preset_retrieval_parameters: optional PresetRetrievalParams { alpha, class_name, dense_similarity_cutoff, 11 more }

Preset retrieval parameters for the pipeline.

alpha: optional number

Alpha value for hybrid retrieval to determine the weights between dense and sparse retrieval. 0 is sparse retrieval and 1 is dense retrieval.

maximum1
minimum0
class_name: optional string
dense_similarity_cutoff: optional number

Minimum similarity score with respect to the query for retrieval.

maximum1
minimum0
dense_similarity_top_k: optional number

Number of nodes for dense retrieval.

maximum100
minimum1
enable_reranking: optional boolean

Enable reranking for retrieval

files_top_k: optional number

Number of files to retrieve (only for retrieval mode files_via_metadata and files_via_content).

maximum5
minimum1
rerank_top_n: optional number

Number of reranked nodes for returning.

maximum100
minimum1
retrieval_mode: optional RetrievalMode

The retrieval mode for the query.

Accepts one of the following:
"chunks"
"files_via_metadata"
"files_via_content"
"auto_routed"
retrieve_image_nodes: optional boolean (Deprecated)

Whether to retrieve image nodes.

retrieve_page_figure_nodes: optional boolean

Whether to retrieve page figure nodes.

retrieve_page_screenshot_nodes: optional boolean

Whether to retrieve page screenshot nodes.

search_filters: optional MetadataFilters { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters = object { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters { filters, condition }
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
search_filters_inference_schema: optional map[map[unknown] or array of unknown or string or 2 more]

JSON Schema that will be used to infer search_filters. Omit or leave as null to skip inference.

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
sparse_similarity_top_k: optional number

Number of nodes for sparse retrieval.

maximum100
minimum1
sparse_model_config: optional SparseModelConfig { class_name, model_type }

Configuration for sparse embedding models used in hybrid search.

This allows users to choose between Splade and BM25 models for sparse retrieval in managed data sinks.

class_name: optional string
model_type: optional "splade" or "bm25" or "auto"

The sparse model type to use. 'bm25' uses Qdrant's FastEmbed BM25 model (default for new pipelines), 'splade' uses HuggingFace Splade model, 'auto' selects based on deployment mode (BYOC uses term frequency, Cloud uses Splade).

Accepts one of the following:
"splade"
"bm25"
"auto"
status: optional string

Status of the pipeline deployment.

transform_config: optional AutoTransformConfig { chunk_overlap, chunk_size, mode } or AdvancedModeTransformConfig { chunking_config, mode, segmentation_config }

Configuration for the transformation.

Accepts one of the following:
AutoTransformConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number

Chunk overlap for the transformation.

chunk_size: optional number

Chunk size for the transformation.

exclusiveMinimum0
mode: optional "auto"
AdvancedModeTransformConfig = object { chunking_config, mode, segmentation_config }
chunking_config: optional object { mode } or object { chunk_overlap, chunk_size, mode } or object { chunk_overlap, chunk_size, mode, separator } or 2 more

Configuration for the chunking.

Accepts one of the following:
NoneChunkingConfig = object { mode }
mode: optional "none"
CharacterChunkingConfig = object { chunk_overlap, chunk_size, mode }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "character"
TokenChunkingConfig = object { chunk_overlap, chunk_size, mode, separator }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "token"
separator: optional string
SentenceChunkingConfig = object { chunk_overlap, chunk_size, mode, 2 more }
chunk_overlap: optional number
chunk_size: optional number
mode: optional "sentence"
paragraph_separator: optional string
separator: optional string
SemanticChunkingConfig = object { breakpoint_percentile_threshold, buffer_size, mode }
breakpoint_percentile_threshold: optional number
buffer_size: optional number
mode: optional "semantic"
mode: optional "advanced"
segmentation_config: optional object { mode } or object { mode, page_separator } or object { mode }

Configuration for the segmentation.

Accepts one of the following:
NoneSegmentationConfig = object { mode }
mode: optional "none"
PageSegmentationConfig = object { mode, page_separator }
mode: optional "page"
page_separator: optional string
ElementSegmentationConfig = object { mode }
mode: optional "element"
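
Tying the PipelineCreate fields together, the sketch below builds a small request body and submits it with the requests library. The base URL, the bearer-token header, the LLAMA_CLOUD_API_KEY environment variable name, and the create-pipeline path are assumptions here; the payload values are placeholders.

import os
import requests

BASE_URL = "https://api.cloud.llamaindex.ai"         # assumed base URL
API_KEY = os.environ["LLAMA_CLOUD_API_KEY"]           # assumed env var name

# Minimal PipelineCreate body: a name plus a few of the optional fields above.
pipeline_create = {
    "name": "docs-pipeline",
    "embedding_config": {
        "type": "OPENAI_EMBEDDING",
        "component": {
            "model_name": "text-embedding-3-small",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        },
    },
    "transform_config": {"mode": "auto", "chunk_size": 1024, "chunk_overlap": 100},
    "llama_parse_parameters": {"parse_mode": "parse_page_with_llm"},
}

response = requests.post(
    f"{BASE_URL}/api/v1/pipelines",                   # assumed create-pipeline path
    headers={"Authorization": f"Bearer {API_KEY}"},   # assumed auth scheme
    json=pipeline_create,
)
response.raise_for_status()
print(response.json())
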
PipelineMetadataConfig = object { excluded_embed_metadata_keys, excluded_llm_metadata_keys }
excluded_embed_metadata_keys: optional array of string

List of metadata keys to exclude from embeddings

excluded_llm_metadata_keys: optional array of string

List of metadata keys to exclude from LLM during retrieval

PipelineType = "PLAYGROUND" or "MANAGED"

Enum for representing the type of a pipeline

Accepts one of the following:
"PLAYGROUND"
"MANAGED"
PresetRetrievalParams = object { alpha, class_name, dense_similarity_cutoff, 11 more }

Schema for the search params for a retrieval execution that can be preset for a pipeline.

alpha: optional number

Alpha value for hybrid retrieval to determine the weights between dense and sparse retrieval. 0 is sparse retrieval and 1 is dense retrieval.

maximum1
minimum0
class_name: optional string
dense_similarity_cutoff: optional number

Minimum similarity score with respect to the query for retrieval.

maximum1
minimum0
dense_similarity_top_k: optional number

Number of nodes for dense retrieval.

maximum100
minimum1
enable_reranking: optional boolean

Enable reranking for retrieval

files_top_k: optional number

Number of files to retrieve (only for retrieval mode files_via_metadata and files_via_content).

maximum5
minimum1
rerank_top_n: optional number

Number of reranked nodes for returning.

maximum100
minimum1
retrieval_mode: optional RetrievalMode

The retrieval mode for the query.

Accepts one of the following:
"chunks"
"files_via_metadata"
"files_via_content"
"auto_routed"
retrieve_image_nodes: optional boolean (Deprecated)

Whether to retrieve image nodes.

retrieve_page_figure_nodes: optional boolean

Whether to retrieve page figure nodes.

retrieve_page_screenshot_nodes: optional boolean

Whether to retrieve page screenshot nodes.

search_filters: optional MetadataFilters { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters = object { filters, condition }

Metadata filters for vector stores.

filters: array of object { key, value, operator } or MetadataFilters { filters, condition }
Accepts one of the following:
MetadataFilter = object { key, value, operator }

Comprehensive metadata filter for vector stores to support more operators.

Value uses Strict types, as int, float and str are compatible types and were all converted to string before.

See: https://docs.pydantic.dev/latest/usage/types/#strict-types

key: string
value: number or string or array of string or 2 more
Accepts one of the following:
UnionMember0 = number
UnionMember1 = string
UnionMember2 = array of string
UnionMember3 = array of number
UnionMember4 = array of number
operator: optional "==" or ">" or "<" or 11 more

Vector store filter operator.

Accepts one of the following:
"=="
">"
"<"
"!="
">="
"<="
"in"
"nin"
"any"
"all"
"text_match"
"text_match_insensitive"
"contains"
"is_empty"
MetadataFilters { filters, condition }
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
condition: optional "and" or "or" or "not"

Vector store filter conditions to combine different filters.

Accepts one of the following:
"and"
"or"
"not"
search_filters_inference_schema: optional map[map[unknown] or array of unknown or string or 2 more]

JSON Schema that will be used to infer search_filters. Omit or leave as null to skip inference.

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
sparse_similarity_top_k: optional number

Number of nodes for sparse retrieval.

maximum100
minimum1
RetrievalMode = "chunks" or "files_via_metadata" or "files_via_content" or "auto_routed"
Accepts one of the following:
"chunks"
"files_via_metadata"
"files_via_content"
"auto_routed"
SparseModelConfig = object { class_name, model_type }

Configuration for sparse embedding models used in hybrid search.

This allows users to choose between Splade and BM25 models for sparse retrieval in managed data sinks.

class_name: optional string
model_type: optional "splade" or "bm25" or "auto"

The sparse model type to use. 'bm25' uses Qdrant's FastEmbed BM25 model (default for new pipelines), 'splade' uses HuggingFace Splade model, 'auto' selects based on deployment mode (BYOC uses term frequency, Cloud uses Splade).

Accepts one of the following:
"splade"
"bm25"
"auto"
VertexAIEmbeddingConfig = object { component, type }
component: optional VertexTextEmbedding { client_email, location, private_key, 9 more }

Configuration for the VertexAI embedding model.

client_email: string

The client email for the VertexAI credentials.

location: string

The default location to use when making API calls.

private_key: string

The private key for the VertexAI credentials.

private_key_id: string

The private key ID for the VertexAI credentials.

project: string

The default GCP project to use when making Vertex API calls.

token_uri: string

The token URI for the VertexAI credentials.

additional_kwargs: optional map[unknown]

Additional kwargs for the Vertex.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
embed_mode: optional "default" or "classification" or "clustering" or 2 more

The embedding mode to use.

Accepts one of the following:
"default"
"classification"
"clustering"
"similarity"
"retrieval"
model_name: optional string

The modelId of the VertexAI model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

type: optional "VERTEXAI_EMBEDDING"

Type of the embedding model.

VertexTextEmbedding = object { client_email, location, private_key, 9 more }
client_email: string

The client email for the VertexAI credentials.

location: string

The default location to use when making API calls.

private_key: string

The private key for the VertexAI credentials.

private_key_id: string

The private key ID for the VertexAI credentials.

project: string

The default GCP project to use when making Vertex API calls.

token_uri: string

The token URI for the VertexAI credentials.

additional_kwargs: optional map[unknown]

Additional kwargs for the Vertex.

class_name: optional string
embed_batch_size: optional number

The batch size for embedding calls.

maximum2048
exclusiveMinimum0
embed_mode: optional "default" or "classification" or "clustering" or 2 more

The embedding mode to use.

Accepts one of the following:
"default"
"classification"
"clustering"
"similarity"
"retrieval"
model_name: optional string

The modelId of the VertexAI model to use.

num_workers: optional number

The number of workers to use for async embedding calls.

Pipelines: Sync

Sync Pipeline
POST/api/v1/pipelines/{pipeline_id}/sync
Cancel Pipeline Sync
POST/api/v1/pipelines/{pipeline_id}/sync/cancel
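
Both sync endpoints above take the pipeline ID in the path; the sketch below sends them with no request body. The base URL, auth header, and environment variable name are assumptions, and the pipeline ID is a placeholder.

import os
import requests

BASE_URL = "https://api.cloud.llamaindex.ai"   # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}"}  # assumed auth scheme
pipeline_id = "00000000-0000-0000-0000-000000000000"  # placeholder

# Start a sync of the pipeline.
requests.post(f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/sync", headers=HEADERS).raise_for_status()

# Cancel an in-flight sync if needed.
requests.post(f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/sync/cancel", headers=HEADERS).raise_for_status()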

Pipelines: Data Sources

List Pipeline Data Sources
GET/api/v1/pipelines/{pipeline_id}/data-sources
Add Data Sources To Pipeline
PUT/api/v1/pipelines/{pipeline_id}/data-sources
Update Pipeline Data Source
PUT/api/v1/pipelines/{pipeline_id}/data-sources/{data_source_id}
Get Pipeline Data Source Status
GET/api/v1/pipelines/{pipeline_id}/data-sources/{data_source_id}/status
Sync Pipeline Data Source
POST/api/v1/pipelines/{pipeline_id}/data-sources/{data_source_id}/sync
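
The per-data-source endpoints above follow the same pattern; the sketch below checks one data source's status and then triggers a targeted sync. The base URL and auth header are assumptions, and both IDs are placeholders.

import os
import requests

BASE_URL = "https://api.cloud.llamaindex.ai"   # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}"}  # assumed auth scheme
pipeline_id = "00000000-0000-0000-0000-000000000000"      # placeholder
data_source_id = "11111111-1111-1111-1111-111111111111"   # placeholder

# Check the ingestion status of one data source attached to the pipeline.
status = requests.get(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/data-sources/{data_source_id}/status",
    headers=HEADERS,
)
status.raise_for_status()
print(status.json())

# Trigger a sync for just that data source.
requests.post(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/data-sources/{data_source_id}/sync",
    headers=HEADERS,
).raise_for_status()
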
Models
PipelineDataSource = object { id, component, data_source_id, 13 more }

Schema for a data source in a pipeline.

id: string

Unique identifier

formatuuid
component: map[unknown] or CloudS3DataSource { bucket, aws_access_id, aws_access_secret, 5 more } or CloudAzStorageBlobDataSource { account_url, container_name, account_key, 8 more } or 8 more

Component that implements the data source

Accepts one of the following:
UnionMember0 = map[unknown]
CloudS3DataSource = object { bucket, aws_access_id, aws_access_secret, 5 more }
bucket: string

The name of the S3 bucket to read from.

aws_access_id: optional string

The AWS access ID to use for authentication.

aws_access_secret: optional string

The AWS access secret to use for authentication.

formatpassword
class_name: optional string
prefix: optional string

The prefix of the S3 objects to read from.

regex_pattern: optional string

The regex pattern to filter S3 objects. Must be a valid regex pattern.

s3_endpoint_url: optional string

The S3 endpoint URL to use for authentication.

supports_access_control: optional boolean
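
As a concrete example of the S3 fields above, the component below reads only PDF objects under a single prefix. The bucket, prefix, and pattern are placeholders, and the standard AWS environment variable names are assumed.

import os

# Sketch of a CloudS3DataSource component for a pipeline data source.
s3_component = {
    "bucket": "my-documents-bucket",             # placeholder bucket name
    "prefix": "reports/2024/",                   # only read objects under this prefix
    "regex_pattern": r".*\.pdf$",                # only read PDFs
    "aws_access_id": os.environ.get("AWS_ACCESS_KEY_ID"),
    "aws_access_secret": os.environ.get("AWS_SECRET_ACCESS_KEY"),
}
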
CloudAzStorageBlobDataSource = object { account_url, container_name, account_key, 8 more }
account_url: string

The Azure Storage Blob account URL to use for authentication.

container_name: string

The name of the Azure Storage Blob container to read from.

account_key: optional string

The Azure Storage Blob account key to use for authentication.

formatpassword
account_name: optional string

The Azure Storage Blob account name to use for authentication.

blob: optional string

The blob name to read from.

class_name: optional string
client_id: optional string

The Azure AD client ID to use for authentication.

client_secret: optional string

The Azure AD client secret to use for authentication.

formatpassword
prefix: optional string

The prefix of the Azure Storage Blob objects to read from.

supports_access_control: optional boolean
tenant_id: optional string

The Azure AD tenant ID to use for authentication.

CloudOneDriveDataSource = object { client_id, client_secret, tenant_id, 6 more }
client_id: string

The client ID to use for authentication.

client_secret: string

The client secret to use for authentication.

formatpassword
tenant_id: string

The tenant ID to use for authentication.

user_principal_name: string

The user principal name to use for authentication.

class_name: optional string
folder_id: optional string

The ID of the OneDrive folder to read from.

folder_path: optional string

The path of the OneDrive folder to read from.

required_exts: optional array of string

The list of required file extensions.

supports_access_control: optional true
CloudSharepointDataSource = object { client_id, client_secret, tenant_id, 11 more }
client_id: string

The client ID to use for authentication.

client_secret: string

The client secret to use for authentication.

formatpassword
tenant_id: string

The tenant ID to use for authentication.

class_name: optional string
drive_name: optional string

The name of the Sharepoint drive to read from.

exclude_path_patterns: optional array of string

List of regex patterns for file paths to exclude. Files whose paths (including filename) match any pattern will be excluded. Example: ['/temp/', '/backup/', '.git/', '.tmp$', '^~']

folder_id: optional string

The ID of the Sharepoint folder to read from.

folder_path: optional string

The path of the Sharepoint folder to read from.

get_permissions: optional boolean

Whether to get permissions for the SharePoint site.

include_path_patterns: optional array of string

List of regex patterns for file paths to include. Full paths (including filename) must match at least one pattern to be included. Example: ['/reports/', '/docs/.*\.pdf$', '^Report.*\.pdf$']

required_exts: optional array of string

The list of required file extensions.

site_id: optional string

The ID of the SharePoint site to download from.

site_name: optional string

The name of the SharePoint site to download from.

supports_access_control: optional true
CloudSlackDataSource = object { slack_token, channel_ids, channel_patterns, 6 more }
slack_token: string

Slack Bot Token.

formatpassword
channel_ids: optional string

Slack channel IDs.

channel_patterns: optional string

Slack channel name pattern.

class_name: optional string
earliest_date: optional string

Earliest date.

earliest_date_timestamp: optional number

Earliest date timestamp.

latest_date: optional string

Latest date.

latest_date_timestamp: optional number

Latest date timestamp.

supports_access_control: optional boolean
CloudNotionPageDataSource = object { integration_token, class_name, database_ids, 2 more }
integration_token: string

The integration token to use for authentication.

formatpassword
class_name: optional string
database_ids: optional string

The Notion database ID to read content from.

page_ids: optional string

The IDs of the Notion pages to read from.

supports_access_control: optional boolean
CloudConfluenceDataSource = object { authentication_mechanism, server_url, api_token, 10 more }
authentication_mechanism: string

Type of Authentication for connecting to Confluence APIs.

server_url: string

The server URL of the Confluence instance.

api_token: optional string

The API token to use for authentication.

formatpassword
class_name: optional string
cql: optional string

The CQL query to use for fetching pages.

failure_handling: optional FailureHandlingConfig { skip_list_failures }

Configuration for handling failures during processing. Key-value object controlling failure handling behaviors.

Example: { "skip_list_failures": true }

Currently supports:

  • skip_list_failures: Skip failed batches/lists and continue processing
skip_list_failures: optional boolean

Whether to skip failed batches/lists and continue processing

index_restricted_pages: optional boolean

Whether to index restricted pages.

keep_markdown_format: optional boolean

Whether to keep the markdown format.

label: optional string

The label to use for fetching pages.

page_ids: optional string

The IDs of the Confluence pages to read from.

space_key: optional string

The space key to read from.

supports_access_control: optional boolean
user_name: optional string

The username to use for authentication.

CloudJiraDataSource = object { authentication_mechanism, query, api_token, 5 more }

Cloud Jira Data Source integrating JiraReader.

authentication_mechanism: string

Type of Authentication for connecting to Jira APIs.

query: string

JQL (Jira Query Language) query to search.

api_token: optional string

The API/access token used for Basic, PAT, and OAuth2 authentication.

formatpassword
class_name: optional string
cloud_id: optional string

The cloud ID, used in case of OAuth2.

email: optional string

The email address to use for authentication.

server_url: optional string

The server URL for Jira Cloud.

supports_access_control: optional boolean
CloudJiraDataSourceV2 = object { authentication_mechanism, query, server_url, 10 more }

Cloud Jira Data Source integrating JiraReaderV2.

authentication_mechanism: string

Type of Authentication for connecting to Jira APIs.

query: string

JQL (Jira Query Language) query to search.

server_url: string

The server URL for Jira Cloud.

api_token: optional string

The API access token used for Basic, PAT, and OAuth2 authentication.

formatpassword
api_version: optional "2" or "3"

Jira REST API version to use (2 or 3). Version 3 supports Atlassian Document Format (ADF).

Accepts one of the following:
"2"
"3"
class_name: optional string
cloud_id: optional string

The cloud ID, used in case of OAuth2.

email: optional string

The email address to use for authentication.

expand: optional string

Fields to expand in the response.

fields: optional array of string

List of fields to retrieve from Jira. If None, retrieves all fields.

get_permissions: optional boolean

Whether to fetch project role permissions and issue-level security.

requests_per_minute: optional number

Rate limit for Jira API requests per minute.

supports_access_control: optional boolean
CloudBoxDataSource = object { authentication_mechanism, class_name, client_id, 6 more }
authentication_mechanism: "developer_token" or "ccg"

The type of authentication to use (Developer Token or CCG).

Accepts one of the following:
"developer_token"
"ccg"
class_name: optional string
client_id: optional string

Box API key used to identify the application the user is authenticating with.

client_secret: optional string

Box API secret used for making auth requests.

formatpassword
developer_token: optional string

Developer token for authentication if authentication_mechanism is 'developer_token'.

formatpassword
enterprise_id: optional string

Box Enterprise ID; if provided, authenticates as a service account.

folder_id: optional string

The ID of the Box folder to read from.

supports_access_control: optional boolean
user_id: optional string

Box User ID; if provided, authenticates as that user.

data_source_id: string

The ID of the data source.

formatuuid
last_synced_at: string

The last time the data source was automatically synced.

formatdate-time
name: string

The name of the data source.

pipeline_id: string

The ID of the pipeline.

formatuuid
project_id: string
source_type: "S3" or "AZURE_STORAGE_BLOB" or "GOOGLE_DRIVE" or 8 more
Accepts one of the following:
"S3"
"AZURE_STORAGE_BLOB"
"GOOGLE_DRIVE"
"MICROSOFT_ONEDRIVE"
"MICROSOFT_SHAREPOINT"
"SLACK"
"NOTION_PAGE"
"CONFLUENCE"
"JIRA"
"JIRA_V2"
"BOX"
created_at: optional string

Creation datetime

formatdate-time
custom_metadata: optional map[map[unknown] or array of unknown or string or 2 more]

Custom metadata that will be present on all data loaded from the data source

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
status: optional "NOT_STARTED" or "IN_PROGRESS" or "SUCCESS" or 2 more

The status of the data source in the pipeline.

Accepts one of the following:
"NOT_STARTED"
"IN_PROGRESS"
"SUCCESS"
"ERROR"
"CANCELLED"
status_updated_at: optional string

The last time the status was updated.

formatdate-time
sync_interval: optional number

The interval at which the data source should be synced.

sync_schedule_set_by: optional string

The ID of the user who set the sync schedule.

updated_at: optional string

Update datetime

formatdate-time
version_metadata: optional DataSourceReaderVersionMetadata { reader_version }

Version metadata for the data source

reader_version: optional "1.0" or "2.0" or "2.1"

The version of the reader to use for this data source.

Accepts one of the following:
"1.0"
"2.0"
"2.1"

Pipelines Images

List File Page Screenshots
GET/api/v1/files/{id}/page_screenshots
Get File Page Screenshot
GET/api/v1/files/{id}/page_screenshots/{page_index}
Get File Page Figure
GET/api/v1/files/{id}/page-figures/{page_index}/{figure_name}
List File Pages Figures
GET/api/v1/files/{id}/page-figures
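
A sketch of pulling page screenshots for a file, reusing BASE_URL and HEADERS from the sync example. That the screenshot endpoint returns raw image bytes (and the PNG extension used below) is an assumption.

file_id = "11111111-1111-1111-1111-111111111111"  # placeholder file UUID

# List the available page screenshots for the file.
pages = requests.get(
    f"{BASE_URL}/api/v1/files/{file_id}/page_screenshots", headers=HEADERS
).json()

# Fetch the first page's screenshot and save it (assumed to be image bytes).
img = requests.get(
    f"{BASE_URL}/api/v1/files/{file_id}/page_screenshots/0", headers=HEADERS
)
img.raise_for_status()
with open("page_0.png", "wb") as f:
    f.write(img.content)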

Pipelines Files

Get Pipeline File Status Counts
GET/api/v1/pipelines/{pipeline_id}/files/status-counts
Get Pipeline File Status
GET/api/v1/pipelines/{pipeline_id}/files/{file_id}/status
Add Files To Pipeline Api
PUT/api/v1/pipelines/{pipeline_id}/files
Update Pipeline File
PUT/api/v1/pipelines/{pipeline_id}/files/{file_id}
Delete Pipeline File
DELETE/api/v1/pipelines/{pipeline_id}/files/{file_id}
List Pipeline Files2
Deprecated
GET/api/v1/pipelines/{pipeline_id}/files2
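
A sketch of the two status calls, with the same BASE_URL/HEADERS assumptions as earlier; beyond the field names in the PipelineFile schema below, the response shapes are assumptions.

# Aggregate status counts for all files in the pipeline.
counts = requests.get(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/files/status-counts", headers=HEADERS
).json()
print(counts)

# Status of a single file that was added to the pipeline.
file_status = requests.get(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/files/{file_id}/status", headers=HEADERS
).json()
print(file_status)
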
Models
PipelineFile = object { id, pipeline_id, config_hash, 16 more }

Schema for a file that is associated with a pipeline.

id: string

Unique identifier

formatuuid
pipeline_id: string

The ID of the pipeline that the file is associated with

formatuuid
config_hash: optional map[map[unknown] or array of unknown or string or 2 more]

Hashes for the configuration of the pipeline.

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
created_at: optional string

Creation datetime

formatdate-time
custom_metadata: optional map[map[unknown] or array of unknown or string or 2 more]

Custom metadata for the file

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
data_source_id: optional string

The ID of the data source that the file belongs to

formatuuid
external_file_id: optional string

The ID of the file in the external system

file_id: optional string

The ID of the file

formatuuid
file_size: optional number

Size of the file in bytes

minimum0
file_type: optional string

File type (e.g. pdf, docx, etc.)

maxLength3000
minLength1
indexed_page_count: optional number

The number of pages that have been indexed for this file

last_modified_at: optional string

The last modified time of the file

formatdate-time
name: optional string

Name of the file

maxLength3000
minLength1
permission_info: optional map[map[unknown] or array of unknown or string or 2 more]

Permission information for the file

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
project_id: optional string

The ID of the project that the file belongs to

formatuuid
resource_info: optional map[map[unknown] or array of unknown or string or 2 more]

Resource information for the file

Accepts one of the following:
UnionMember0 = map[unknown]
UnionMember1 = array of unknown
UnionMember2 = string
UnionMember3 = number
UnionMember4 = boolean
status: optional "NOT_STARTED" or "IN_PROGRESS" or "SUCCESS" or 2 more

Status of the pipeline file

Accepts one of the following:
"NOT_STARTED"
"IN_PROGRESS"
"SUCCESS"
"ERROR"
"CANCELLED"
status_updated_at: optional string

The last time the status was updated

formatdate-time
updated_at: optional string

Update datetime

formatdate-time
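
Because PipelineFile.status is one of NOT_STARTED, IN_PROGRESS, SUCCESS, ERROR, or CANCELLED, a common pattern is to poll the per-file status endpoint until a terminal state is reached. A minimal sketch, assuming the status response carries a "status" key mirroring the enum above and reusing the earlier BASE_URL/HEADERS setup:

import time

TERMINAL_STATES = {"SUCCESS", "ERROR", "CANCELLED"}

def wait_for_file(pipeline_id: str, file_id: str, poll_seconds: float = 5.0) -> str:
    """Poll the pipeline file status endpoint until a terminal state is reached."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/files/{file_id}/status",
            headers=HEADERS,
        )
        resp.raise_for_status()
        status = resp.json().get("status")  # assumed key, mirroring PipelineFile.status
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_seconds)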

Pipelines Metadata

Import Pipeline Metadata
PUT/api/v1/pipelines/{pipeline_id}/metadata
Delete Pipeline Files Metadata
DELETE/api/v1/pipelines/{pipeline_id}/metadata
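
No request schema is documented for these two endpoints in this section, so only the deletion call, which needs no body, is sketched here (same BASE_URL/HEADERS assumptions as before).

# Remove previously imported file metadata from the pipeline.
requests.delete(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/metadata", headers=HEADERS
).raise_for_status()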

Pipelines Documents

Create Batch Pipeline Documents
POST/api/v1/pipelines/{pipeline_id}/documents
Paginated List Pipeline Documents
GET/api/v1/pipelines/{pipeline_id}/documents/paginated
Get Pipeline Document
GET/api/v1/pipelines/{pipeline_id}/documents/{document_id}
Delete Pipeline Document
DELETE/api/v1/pipelines/{pipeline_id}/documents/{document_id}
Get Pipeline Document Status
GET/api/v1/pipelines/{pipeline_id}/documents/{document_id}/status
Sync Pipeline Document
POST/api/v1/pipelines/{pipeline_id}/documents/{document_id}/sync
List Pipeline Document Chunks
GET/api/v1/pipelines/{pipeline_id}/documents/{document_id}/chunks
Upsert Batch Pipeline Documents
PUT/api/v1/pipelines/{pipeline_id}/documents
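
A sketch of upserting a batch of documents, assuming the request body is a JSON array of CloudDocumentCreate-shaped objects (text and metadata are the required fields in the schema below); that array shape is an assumption, not something stated in this reference. Same BASE_URL/HEADERS assumptions as before.

docs = [
    {
        "text": "Quarterly revenue grew 12% year over year.",
        "metadata": {"source": "q3-report", "author": "finance"},
        # "id" is optional on create; setting one makes repeated upserts deterministic.
        "id": "q3-report-0001",
    },
]

resp = requests.put(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/documents",
    headers=HEADERS,
    json=docs,
)
resp.raise_for_status()
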
Models
CloudDocument = object { id, metadata, text, 4 more }

Cloud document stored in S3.

id: string
metadata: map[unknown]
text: string
excluded_embed_metadata_keys: optional array of string
excluded_llm_metadata_keys: optional array of string
page_positions: optional array of number

Indices in CloudDocument.text where a new page begins; e.g. the second page starts at the index given by page_positions[1].

status_metadata: optional map[unknown]
CloudDocumentCreate = object { metadata, text, id, 3 more }

Create a new cloud document.

metadata: map[unknown]
text: string
id: optional string
excluded_embed_metadata_keys: optional array of string
excluded_llm_metadata_keys: optional array of string
page_positions: optional array of number

Indices in CloudDocument.text where a new page begins; e.g. the second page starts at the index given by page_positions[1].

TextNode = object { class_name, embedding, end_char_idx, 11 more }

Provided for backward compatibility.

Note: we keep the field with the typo "seperator" to maintain backward compatibility for serialized objects.

class_name: optional string
embedding: optional array of number

Embedding of the node.

end_char_idx: optional number

End char index of the node.

excluded_embed_metadata_keys: optional array of string

Metadata keys that are excluded from text for the embed model.

excluded_llm_metadata_keys: optional array of string

Metadata keys that are excluded from text for the LLM.

extra_info: optional map[unknown]

A flat dictionary of metadata fields

id_: optional string

Unique ID of the node.

metadata_seperator: optional string

Separator between metadata fields when converting to string.

metadata_template: optional string

Template for how metadata is formatted, with {key} and {value} placeholders.

mimetype: optional string

MIME type of the node content.

relationships: optional map[object { node_id, class_name, hash, 2 more } or array of object { node_id, class_name, hash, 2 more } ]

A mapping of relationships to other node information.

Accepts one of the following:
RelatedNodeInfo = object { node_id, class_name, hash, 2 more }
node_id: string
class_name: optional string
hash: optional string
metadata: optional map[unknown]
node_type: optional "1" or "2" or "3" or 2 more or string
Accepts one of the following:
ObjectType = "1" or "2" or "3" or 2 more
Accepts one of the following:
"1"
"2"
"3"
"4"
"5"
UnionMember1 = string
UnionMember1 = array of object { node_id, class_name, hash, 2 more }
node_id: string
class_name: optional string
hash: optional string
metadata: optional map[unknown]
node_type: optional "1" or "2" or "3" or 2 more or string
Accepts one of the following:
ObjectType = "1" or "2" or "3" or 2 more
Accepts one of the following:
"1"
"2"
"3"
"4"
"5"
UnionMember1 = string
start_char_idx: optional number

Start char index of the node.

text: optional string

Text content of the node.

text_template: optional string

Template for how text is formatted, with {content} and {metadata_str} placeholders.
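
Finally, a sketch of listing the chunks of a document; treating the response as a list of TextNode-shaped objects is an assumption, and the keys read below come from the TextNode schema above (same BASE_URL/HEADERS setup as earlier).

document_id = "q3-report-0001"  # placeholder; matches the id used in the upsert sketch

chunks = requests.get(
    f"{BASE_URL}/api/v1/pipelines/{pipeline_id}/documents/{document_id}/chunks",
    headers=HEADERS,
).json()

for node in chunks:
    # TextNode fields: text content plus optional character offsets.
    print(node.get("id_"), node.get("start_char_idx"), node.get("end_char_idx"))
    print((node.get("text") or "")[:80])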