Client for interacting with the LangSmith API.
Client(
self,
api_url: Optional[str] = None,
*,
api_key: Optional[str] = None,
retry_config: Optional[Retry] = None,
timeout_ms: Optional[Union[int, tuple[int, int]]] = None,
web_url: Optional[str] = None,
session: Optional[requests.Session] = None,
auto_batch_tracing: bool = True,
anonymizer: Optional[Callable[[dict], dict]] = None,
hide_inputs: Optional[Union[Callable[[dict], dict], bool]] = None,
hide_outputs: Optional[Union[Callable[[dict], dict], bool]] = None,
hide_metadata: Optional[Union[Callable[[dict], dict], bool]] = None,
omit_traced_runtime_info: bool = False,
process_buffered_run_ops: Optional[Callable[[Sequence[dict]], Sequence[dict]]] = None,
run_ops_buffer_size: Optional[int] = None,
run_ops_buffer_timeout_ms: Optional[float] = None,
info: Optional[Union[dict, ls_schemas.LangSmithInfo]] = None,
api_urls: Optional[dict[str, str]] = None,
otel_tracer_provider: Optional[TracerProvider] = None,
otel_enabled: Optional[bool] = None,
tracing_sampling_rate: Optional[float] = None,
workspace_id: Optional[str] = None,
max_batch_size_bytes: Optional[int] = None,
headers: Optional[dict[str, str]] = None,
tracing_error_callback: Optional[Callable[[Exception], None]] = None,
disable_prompt_cache: bool = False,
cache: Optional[Union[bool, PromptCache]] = None
)

| Name | Type | Description |
|---|---|---|
| api_url | Optional[str] | Default: None. URL for the LangSmith API. Defaults to the `LANGSMITH_ENDPOINT` environment variable, or `https://api.smith.langchain.com` if not set. |
| api_key | Optional[str] | Default: None. API key for the LangSmith API. Defaults to the `LANGSMITH_API_KEY` environment variable. |
| retry_config | Optional[Retry] | Default: None. Retry configuration for the HTTPAdapter. |
| timeout_ms | Optional[Union[int, tuple[int, int]]] | Default: None. Timeout in milliseconds for requests. Can also be a 2-tuple of (connect timeout, read timeout). |
| web_url | Optional[str] | Default: None. URL for the LangSmith web app. Default is auto-inferred from the API URL. |
| session | Optional[requests.Session] | Default: None. The session to use for requests. If None, a new session is created. |
| auto_batch_tracing | bool | Default: True. Whether to automatically batch tracing. |
| anonymizer | Optional[Callable[[dict], dict]] | Default: None. A function applied for masking serialized run inputs and outputs before sending to the API. |
| hide_inputs | Optional[Union[Callable[[dict], dict], bool]] | Default: None. Whether to hide run inputs when tracing with this client. If True, hides run inputs entirely. If a function, applied to all run inputs when creating runs. |
| hide_outputs | Optional[Union[Callable[[dict], dict], bool]] | Default: None. Whether to hide run outputs when tracing with this client. If True, hides run outputs entirely. If a function, applied to all run outputs when creating runs. |
| hide_metadata | Optional[Union[Callable[[dict], dict], bool]] | Default: None. Whether to hide run metadata when tracing with this client. If True, hides run metadata entirely. If a function, applied to all run metadata when creating runs. |
| omit_traced_runtime_info | bool | Default: False. Whether to omit runtime information from traced runs. If True, runtime information is not attached to runs. Defaults to False. |
| process_buffered_run_ops | Optional[Callable[[Sequence[dict]], Sequence[dict]]] | Default: None. A function applied to buffered run operations that allows for modification of the raw run dicts before they are converted to multipart and compressed. Useful specifically for high-throughput tracing where you need to apply a rate-limited API or other costly process to the runs before they are sent to the API. Note that the buffer will only flush automatically when `run_ops_buffer_size` is reached or `run_ops_buffer_timeout_ms` elapses. |
| run_ops_buffer_size | Optional[int] | Default: None. Maximum number of run operations to collect in the buffer before applying `process_buffered_run_ops` and flushing. Required when `process_buffered_run_ops` is specified. |
| run_ops_buffer_timeout_ms | Optional[float] | Default: None. Maximum time in milliseconds to wait before flushing the run ops buffer when new runs are added. Only used when `process_buffered_run_ops` is specified. |
| info | Optional[Union[dict, ls_schemas.LangSmithInfo]] | Default: None. The information about the LangSmith API. If not provided, it will be fetched from the API. |
| api_urls | Optional[dict[str, str]] | Default: None. A dictionary of write API URLs and their corresponding API keys. Useful for multi-tenant setups. Data is only read from the first URL in the dictionary. However, ONLY runs are written (POSTed) to all URLs. |
| otel_tracer_provider | Optional[TracerProvider] | Default: None. Optional tracer provider for OpenTelemetry integration. If not provided, a LangSmith-specific tracer provider will be used. |
| otel_enabled | Optional[bool] | Default: None. Whether to enable OpenTelemetry export for this client. If not provided, falls back to environment configuration. |
| tracing_sampling_rate | Optional[float] | Default: None. The sampling rate for tracing. If provided, overrides the `LANGSMITH_TRACING_SAMPLING_RATE` environment variable. Should be a float between 0 and 1. |
| workspace_id | Optional[str] | Default: None. The workspace ID. Required for org-scoped API keys. |
| max_batch_size_bytes | Optional[int] | Default: None. The maximum size of a batch of runs in bytes. If not provided, the default is set by the server. |
| headers | Optional[dict[str, str]] | Default: None. Additional HTTP headers to include in all requests. These headers will be merged with the default headers (User-Agent, Accept, x-api-key, etc.). Custom headers will not override the default required headers. |
| tracing_error_callback | Optional[Callable[[Exception], None]] | Default: None. Optional callback function to handle errors. Called when exceptions occur during tracing operations. |
| disable_prompt_cache | bool | Default: False. Disable prompt caching for this client. By default, prompt caching is enabled globally using a singleton cache. Set this to True to opt this client out of caching. |
| cache | Optional[Union[bool, PromptCache]] | Default: None. [Deprecated] Control prompt caching behavior. This parameter is deprecated; use `disable_prompt_cache` instead. |
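As a sketch of how the masking parameters fit together, the helper below redacts selected fields before runs leave the process. The key names in `SENSITIVE_KEYS` are illustrative assumptions; adapt them to your own payloads.

```python
# Hypothetical masking helper for hide_inputs / hide_outputs.
SENSITIVE_KEYS = {"password", "api_key", "ssn"}  # assumption: fields to redact

def mask_sensitive(payload: dict) -> dict:
    """Return a copy of the payload with sensitive values replaced."""
    return {
        k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else v)
        for k, v in payload.items()
    }

# Passing the helper applies it to every run this client creates:
# from langsmith import Client
# client = Client(hide_inputs=mask_sensitive, hide_outputs=mask_sensitive)

masked = mask_sensitive({"password": "hunter2", "query": "What is 2 + 2?"})
```

Passing `True` instead of a function hides the payload entirely; a function gives field-level control as shown.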
Return the API key used for authentication.
Return the workspace ID used for API requests.
Get the information about the LangSmith API.
Send a request with retries.
Upload a dataframe as individual examples to the LangSmith API.
Upload a CSV file to the LangSmith API.
Persist a run to the LangSmith API.
Batch ingest/upsert multiple runs in the LangSmith system.
Batch ingest/upsert multiple runs in the LangSmith system.
Update a run in the LangSmith API.
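A minimal sketch of the manual run lifecycle (create_run followed by update_run), assuming a configured client. The run name and payloads are illustrative placeholders.

```python
import uuid
from datetime import datetime, timezone

run_id = uuid.uuid4()
create_kwargs = dict(
    id=run_id,
    name="my-chain",                      # assumption: any descriptive name
    run_type="chain",
    inputs={"question": "What is LangSmith?"},
    start_time=datetime.now(timezone.utc),
)
update_kwargs = dict(
    run_id=run_id,
    outputs={"answer": "An LLM observability platform."},
    end_time=datetime.now(timezone.utc),
)

# With a configured client:
# client.create_run(**create_kwargs)
# client.update_run(**update_kwargs)
```

With `auto_batch_tracing` enabled, both operations are queued and uploaded in the background rather than sent immediately.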
Force flush the currently buffered compressed runs.
Flush either queue or compressed buffer, depending on mode.
Read a run from the LangSmith API.
Read runs for a single thread.
List runs from the LangSmith API.
List threads and fetch the runs for each thread.
Get aggregate statistics over queried runs.
Takes in similar query parameters to list_runs and returns statistics based on the runs that match the query.
Get the URL for a run.
Not recommended for use within your agent runtime. Better suited for interacting with runs after the fact, e.g. for data analysis or ETL workloads.
Get a share link for a run.
Delete share link for a run.
Retrieve the shared link for a specific run.
Get share state for a run.
Get shared runs.
Get shared runs.
Retrieve the shared schema of a dataset.
Get a share link for a dataset.
Delete share link for a dataset.
Get shared datasets.
Get shared examples.
List shared projects.
Create a project on the LangSmith API.
Update a LangSmith project.
Read a project from the LangSmith API.
Check if a project exists.
Read the record-level information from an experiment into a pandas DataFrame.
This will fetch whatever data exists in the DB. Results are not immediately available in the DB upon evaluation run completion.
Feedback score values will be returned as an average across all runs for the experiment. Non-numeric feedback scores will be omitted.
List projects from the LangSmith API.
Delete a project from LangSmith.
Create a dataset in the LangSmith API.
Check whether a dataset exists in your tenant.
Read a dataset from the LangSmith API.
Get the difference between two versions of a dataset.
Download a dataset in OpenAI JSONL format and load it as a list of dicts.
List the datasets on the LangSmith API.
Delete a dataset from the LangSmith API.
Update the tags of a dataset.
If the tag is already assigned to a different version of this dataset, the tag will be moved to the new version. The as_of parameter is used to determine which version of the dataset to apply the new tags to. It must be an exact version of the dataset to succeed. You can use the read_dataset_version method to find the exact version to apply the tags to.
List dataset versions.
Get dataset version by as_of or exact tag.
Use this to retrieve the dataset version to a timestamp or for a given tag.
Clone a public dataset to your own LangSmith tenant.
This operation is idempotent. If you already have a dataset with the given name, this function will do nothing.
Add an example (row) to an LLM-type dataset.
Add an example (row) to a Chat-type dataset.
Add an example (row) to a dataset from a run.
Update examples using multipart.
.. deprecated:: 0.3.9
Use Client.update_examples instead. Will be removed in 0.4.0.
Upload examples using multipart.
.. deprecated:: 0.3.9
Use Client.create_examples instead. Will be removed in 0.4.0.
Upsert examples.
.. deprecated:: 0.3.9
Use Client.create_examples and Client.update_examples instead. Will be
removed in 0.4.0.
Create examples in a dataset.
Create a dataset example in the LangSmith API.
Examples are rows in a dataset, containing the inputs and expected outputs (or other reference information) for a model or chain.
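A sketch of dataset creation with a couple of examples, assuming the `create_examples` form that accepts parallel `inputs`/`outputs` lists; the dataset name is an illustrative placeholder.

```python
# Parallel lists: one input dict and one reference-output dict per example.
inputs = [{"question": "What is 2 + 2?"}, {"question": "Capital of France?"}]
outputs = [{"answer": "4"}, {"answer": "Paris"}]

# With a configured client:
# from langsmith import Client
# client = Client()
# dataset = client.create_dataset(dataset_name="qa-smoke-tests")
# client.create_examples(inputs=inputs, outputs=outputs, dataset_id=dataset.id)
```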
Read an example from the LangSmith API.
Retrieve the example rows of the specified dataset.
Enable dataset indexing. Examples are indexed by their inputs.
This enables searching for similar examples by inputs with
client.similar_examples().
Sync dataset index.
This already happens automatically every 5 minutes, but you can call this to force a sync.
Retrieve the dataset examples whose inputs best match the current inputs.
Must have few-shot indexing enabled for the dataset. See client.index_dataset().
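The few-shot retrieval flow above can be sketched as follows: index the dataset once, then query for the examples nearest the current inputs. The dataset reference and query are illustrative assumptions.

```python
# Inputs for the current request, to be matched against indexed examples.
query_inputs = {"question": "What is the capital of Germany?"}

# With a configured client and an existing dataset:
# client.index_dataset(dataset_id=dataset.id)   # one-time: enable indexing
# examples = client.similar_examples(query_inputs, dataset_id=dataset.id, limit=3)
# few_shot = [(e.inputs, e.outputs) for e in examples]  # feed into your prompt
```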
Update a specific example.
Update multiple examples.
Examples are expected to all be part of the same dataset.
Delete an example by ID.
Delete multiple examples by ID.
example_ids (Sequence[ID_TYPE]): The IDs of the examples to delete. hard_delete (bool, default False): If True, permanently delete the examples. If False, soft delete them.
Get the splits for a dataset.
Update the splits for a dataset.
Evaluate a run.
Evaluate a run asynchronously.
Create feedback for a run.
To enable feedback to be batch uploaded in the background, you must specify trace_id. We highly encourage this for latency-sensitive environments.
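A sketch of logging feedback against a traced run; passing `trace_id` lets the client batch the upload in the background. The key, score, and comment are illustrative placeholders.

```python
# Feedback fields for a single run; key names are your own convention.
feedback_kwargs = dict(
    key="correctness",
    score=1.0,                              # numeric score for this run
    comment="Answer matched the reference.",
)

# With a configured client and the run/trace IDs from your tracing context:
# client.create_feedback(run_id, trace_id=trace_id, **feedback_kwargs)
```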
Update a feedback in the LangSmith API.
Read a feedback from the LangSmith API.
List the feedback objects on the LangSmith API.
Delete a feedback by ID.
Create feedback from a presigned token or URL.
Create a pre-signed URL to send feedback data to.
This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.
Create a pre-signed URL to send feedback data to.
This is useful for giving browser-based clients a way to upload feedback data directly to LangSmith without accessing the API key.
List the feedback ingest tokens for a run.
List feedback formulas.
Get a feedback formula by ID.
Create a feedback formula.
Update a feedback formula.
Delete a feedback formula by ID.
Create a feedback configuration.
Defines how feedback with a given key should be interpreted. If an identical configuration already exists for the key, it is returned unchanged. If a different configuration already exists for the key, an error is raised.
List feedback configurations.
Update a feedback configuration.
Only the provided fields will be updated; others remain unchanged.
Delete a feedback configuration.
This performs a soft delete. The configuration can be recreated later with the same key.
List the annotation queues on the LangSmith API.
Create an annotation queue on the LangSmith API.
Read an annotation queue with the specified queue_id.
Update an annotation queue with the specified queue_id.
Delete an annotation queue with the specified queue_id.
Add runs to an annotation queue with the specified queue_id.
Delete a run from an annotation queue with the specified queue_id and run_id.
Get a run from an annotation queue at the specified index.
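As a sketch of the annotation-queue flow, the snippet below creates a queue and enqueues runs for human review; the queue name and selection logic are illustrative assumptions.

```python
queue_name = "needs-review"
flagged_run_ids: list = []  # assumption: run IDs flagged by your own heuristics

# With a configured client:
# queue = client.create_annotation_queue(name=queue_name)
# client.add_runs_to_annotation_queue(queue.id, run_ids=flagged_run_ids)
```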
Create a comparative experiment on the LangSmith API.
These experiments compare 2 or more experiment results over a shared dataset.
Like a prompt.
Unlike a prompt.
List prompts with pagination.
Get a specific prompt by its identifier.
Create a new prompt.
Does not attach prompt object, just creates an empty prompt.
Create a commit for an existing prompt.
Update a prompt's metadata.
To update the content of a prompt, use push_prompt or create_commit instead.
Delete a prompt.
Pull a prompt object from the LangSmith API.
List commits for a given prompt.
Pull a prompt and return it as a LangChain PromptTemplate.
This method requires langchain-core.
Push a prompt to the LangSmith API.
Can be used to update prompt metadata or prompt content.
If the prompt does not exist, it will be created. If the prompt exists, it will be updated.
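A sketch of the push/pull round trip, assuming langchain-core is installed; the prompt name and content are illustrative placeholders.

```python
# With langchain-core and a configured client:
# from langchain_core.prompts import ChatPromptTemplate
# prompt = ChatPromptTemplate.from_messages(
#     [("system", "You are terse."), ("user", "{question}")]
# )
# url = client.push_prompt("terse-qa", object=prompt)  # creates or updates
# pulled = client.pull_prompt("terse-qa")              # latest commit

prompt_identifier = "terse-qa"  # can also pin a version, e.g. "terse-qa:<commit_hash>"
```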
Manually trigger cleanup of background threads.
Evaluate a target system on a given dataset.
Evaluate an async target system on a given dataset.
Get results for an experiment, including experiment session aggregated stats and experiment runs for each dataset example.
Experiment results may not be available immediately after the experiment is created.
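The evaluate flow can be sketched as a target callable plus evaluator functions run over a dataset. The target, evaluator, and dataset name below are illustrative assumptions, not the only supported shapes.

```python
def target(inputs: dict) -> dict:
    # assumption: your app maps dataset inputs to outputs
    return {"answer": inputs["question"].upper()}

def exact_match(outputs: dict, reference_outputs: dict) -> bool:
    # simple evaluator: compare target output to the reference output
    return outputs["answer"] == reference_outputs["answer"]

# With a configured client and an existing dataset:
# results = client.evaluate(target, data="qa-smoke-tests",
#                           evaluators=[exact_match])

demo = target({"question": "hi"})
```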
Generate Insights over your agent chat histories.
Poll the status of an Insights report.