## Create Retriever

`retrievers.create(**kwargs: RetrieverCreateParams) -> Retriever`

**post** `/api/v1/retrievers`

Create a new Retriever.

### Parameters

- `name: str` A name for the retriever tool. Defaults to the pipeline name if not provided.
- `organization_id: Optional[str]`
- `project_id: Optional[str]`
- `pipelines: Optional[Iterable[RetrieverPipelineParam]]` The pipelines this retriever uses. Each pipeline has:
  - `description: Optional[str]` A description of the retriever tool.
  - `name: Optional[str]` A name for the retriever tool. Defaults to the pipeline name if not provided.
  - `pipeline_id: str` The ID of the pipeline this tool uses.
  - `preset_retrieval_parameters: Optional[PresetRetrievalParams]` Parameters for retrieval configuration.
    - `alpha: Optional[float]` Alpha value for hybrid retrieval, weighting dense against sparse retrieval: 0 is fully sparse, 1 is fully dense.
    - `class_name: Optional[str]`
    - `dense_similarity_cutoff: Optional[float]` Minimum similarity score with respect to the query for retrieval.
    - `dense_similarity_top_k: Optional[int]` Number of nodes for dense retrieval.
    - `enable_reranking: Optional[bool]` Enable reranking for retrieval.
    - `files_top_k: Optional[int]` Number of files to retrieve (only for retrieval modes `files_via_metadata` and `files_via_content`).
    - `rerank_top_n: Optional[int]` Number of reranked nodes to return.
    - `retrieval_mode: Optional[RetrievalMode]` The retrieval mode for the query. One of `"chunks"`, `"files_via_metadata"`, `"files_via_content"`, `"auto_routed"`.
    - `retrieve_image_nodes: Optional[bool]` Whether to retrieve image nodes.
    - `retrieve_page_figure_nodes: Optional[bool]` Whether to retrieve page figure nodes.
    - `retrieve_page_screenshot_nodes: Optional[bool]` Whether to retrieve page screenshot nodes.
    - `search_filters: Optional[MetadataFilters]` Metadata filters for vector stores.
      - `filters: List[Filter]` Each filter is one of:
        - `class FilterMetadataFilter: …` Comprehensive metadata filter for vector stores, supporting additional operators. `value` uses strict types, since `int`, `float`, and `str` are compatible types and were previously all converted to strings. See: https://docs.pydantic.dev/latest/usage/types/#strict-types
          - `key: str`
          - `value: Union[float, str, List[str], 3 more]`
            - `float`
            - `str`
            - `List[str]`
            - `List[float]`
            - `List[int]`
          - `operator: Optional[Literal["==", ">", "<", 11 more]]` Vector store filter operator. One of `"=="`, `">"`, `"<"`, `"!="`, `">="`, `"<="`, `"in"`, `"nin"`, `"any"`, `"all"`, `"text_match"`, `"text_match_insensitive"`, `"contains"`, `"is_empty"`.
        - `class MetadataFilters: …` Metadata filters for vector stores.
      - `condition: Optional[Literal["and", "or", "not"]]` Vector store filter condition used to combine the filters: `"and"`, `"or"`, or `"not"`.
    - `search_filters_inference_schema: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]` JSON Schema used to infer `search_filters`. Omit or leave as `null` to skip inference. Values may be:
      - `Dict[str, object]`
      - `List[object]`
      - `str`
      - `float`
      - `bool`
    - `sparse_similarity_top_k: Optional[int]` Number of nodes for sparse retrieval.

### Returns

- `class Retriever: …` An entity that retrieves context nodes from several sub RetrieverTools.
  - `id: str` Unique identifier.
  - `name: str` A name for the retriever tool. Defaults to the pipeline name if not provided.
  - `project_id: str` The ID of the project this retriever resides in.
  - `created_at: Optional[datetime]` Creation datetime.
  - `pipelines: Optional[List[RetrieverPipeline]]` The pipelines this retriever uses. Each pipeline has:
    - `description: Optional[str]` A description of the retriever tool.
    - `name: Optional[str]` A name for the retriever tool. Defaults to the pipeline name if not provided.
    - `pipeline_id: str` The ID of the pipeline this tool uses.
    - `preset_retrieval_parameters: Optional[PresetRetrievalParams]` Parameters for retrieval configuration.
      - `alpha: Optional[float]` Alpha value for hybrid retrieval, weighting dense against sparse retrieval: 0 is fully sparse, 1 is fully dense.
      - `class_name: Optional[str]`
      - `dense_similarity_cutoff: Optional[float]` Minimum similarity score with respect to the query for retrieval.
      - `dense_similarity_top_k: Optional[int]` Number of nodes for dense retrieval.
      - `enable_reranking: Optional[bool]` Enable reranking for retrieval.
      - `files_top_k: Optional[int]` Number of files to retrieve (only for retrieval modes `files_via_metadata` and `files_via_content`).
      - `rerank_top_n: Optional[int]` Number of reranked nodes to return.
      - `retrieval_mode: Optional[RetrievalMode]` The retrieval mode for the query. One of `"chunks"`, `"files_via_metadata"`, `"files_via_content"`, `"auto_routed"`.
      - `retrieve_image_nodes: Optional[bool]` Whether to retrieve image nodes.
      - `retrieve_page_figure_nodes: Optional[bool]` Whether to retrieve page figure nodes.
      - `retrieve_page_screenshot_nodes: Optional[bool]` Whether to retrieve page screenshot nodes.
      - `search_filters: Optional[MetadataFilters]` Metadata filters for vector stores.
        - `filters: List[Filter]` Each filter is one of:
          - `class FilterMetadataFilter: …` Comprehensive metadata filter for vector stores, supporting additional operators. `value` uses strict types, since `int`, `float`, and `str` are compatible types and were previously all converted to strings. See: https://docs.pydantic.dev/latest/usage/types/#strict-types
            - `key: str`
            - `value: Union[float, str, List[str], 3 more]`
              - `float`
              - `str`
              - `List[str]`
              - `List[float]`
              - `List[int]`
            - `operator: Optional[Literal["==", ">", "<", 11 more]]` Vector store filter operator. One of `"=="`, `">"`, `"<"`, `"!="`, `">="`, `"<="`, `"in"`, `"nin"`, `"any"`, `"all"`, `"text_match"`, `"text_match_insensitive"`, `"contains"`, `"is_empty"`.
          - `class MetadataFilters: …` Metadata filters for vector stores.
        - `condition: Optional[Literal["and", "or", "not"]]` Vector store filter condition used to combine the filters: `"and"`, `"or"`, or `"not"`.
      - `search_filters_inference_schema: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]` JSON Schema used to infer `search_filters`. Omit or leave as `null` to skip inference. Values may be:
        - `Dict[str, object]`
        - `List[object]`
        - `str`
        - `float`
        - `bool`
      - `sparse_similarity_top_k: Optional[int]` Number of nodes for sparse retrieval.
  - `updated_at: Optional[datetime]` Update datetime.

### Example

```python
import os

from llama_cloud import LlamaCloud

client = LlamaCloud(
    api_key=os.environ.get("LLAMA_CLOUD_API_KEY"),  # This is the default and can be omitted
)
retriever = client.retrievers.create(
    name="x",
)
print(retriever.id)
```

#### Response

```json
{
  "id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  "name": "x",
  "project_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
  "created_at": "2019-12-27T18:11:19.117Z",
  "pipelines": [
    {
      "description": "description",
      "name": "x",
      "pipeline_id": "182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
      "preset_retrieval_parameters": {
        "alpha": 0,
        "class_name": "class_name",
        "dense_similarity_cutoff": 0,
        "dense_similarity_top_k": 1,
        "enable_reranking": true,
        "files_top_k": 1,
        "rerank_top_n": 1,
        "retrieval_mode": "chunks",
        "retrieve_image_nodes": true,
        "retrieve_page_figure_nodes": true,
        "retrieve_page_screenshot_nodes": true,
        "search_filters": {
          "filters": [
            {
              "key": "key",
              "value": 0,
              "operator": "=="
            }
          ],
          "condition": "and"
        },
        "search_filters_inference_schema": {
          "foo": {
            "foo": "bar"
          }
        },
        "sparse_similarity_top_k": 1
      }
    }
  ],
  "updated_at": "2019-12-27T18:11:19.117Z"
}
```
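Beyond the minimal call above, the nested parameters can be supplied together. The sketch below only assembles a full request payload as plain dictionaries, mirroring the field names documented in this section; the pipeline ID, filter key, and filter value are placeholders (not real values), and whether dicts are accepted in place of typed `RetrieverPipelineParam` objects is an assumption here. The actual API call is left commented out since it requires a configured client and credentials.

```python
# Sketch of a full create payload. All IDs and filter values below are
# placeholders; substitute your own before calling the API.
preset_retrieval_parameters = {
    "alpha": 0.5,                  # 0 = fully sparse, 1 = fully dense
    "dense_similarity_top_k": 10,  # nodes returned by dense retrieval
    "sparse_similarity_top_k": 10, # nodes returned by sparse retrieval
    "enable_reranking": True,
    "rerank_top_n": 5,             # nodes kept after reranking
    "retrieval_mode": "chunks",
    "search_filters": {
        "filters": [
            # Placeholder metadata filter: keep only nodes whose
            # "author" metadata equals "jane".
            {"key": "author", "value": "jane", "operator": "=="},
        ],
        "condition": "and",
    },
}

pipeline = {
    "name": "docs-pipeline",
    "description": "Retrieves from the docs index",
    "pipeline_id": "00000000-0000-0000-0000-000000000000",  # placeholder
    "preset_retrieval_parameters": preset_retrieval_parameters,
}

# With a configured client, the payload would be passed as:
# retriever = client.retrievers.create(name="docs-retriever", pipelines=[pipeline])

print(pipeline["name"], preset_retrieval_parameters["retrieval_mode"])
```

Keeping the retrieval parameters in a separate dict makes it easy to reuse the same configuration across several pipelines in one `create` call.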