Optional fields: any
clientOptions: Overridable Anthropic ClientOptions
maxTokens: A maximum number of tokens to generate before stopping.
modelName: Model name to use
model: Model name to use
streaming: Whether to stream the results or not
temperature: Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple choice tasks, and closer to 1 for creative and generative tasks.
topK: Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Defaults to -1, which disables it.
topP: Does nucleus sampling, in which we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. Defaults to -1, which disables it. Note that you should alter either temperature or top_p, but not both (see the constructor sketch after this property list).
Optional anthropicApiKey: Anthropic API key
Optional apiKey: Anthropic API key
Optional apiUrl
Optional invocationKwargs: Holds any additional parameters that are valid to pass to anthropic.messages that are not explicitly specified on this class.
Optional stopSequences: A list of strings upon which to stop generating. You probably want ["\n\nHuman:"], as that's the cue for the next turn in the dialog agent.
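For illustration, a minimal construction sketch. It assumes the ChatAnthropic export from @langchain/anthropic and the property names listed above; the model ID and values are placeholders.

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Reads ANTHROPIC_API_KEY from the environment unless anthropicApiKey / apiKey
// is passed explicitly. topK and topP are left unset: alter either temperature
// or topP, not both.
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model ID
  temperature: 0, // closer to 0 for analytical / multiple choice tasks
  maxTokens: 1024, // cap on tokens generated before stopping
  stopSequences: ["\n\nHuman:"],
});

const response = await model.invoke("Why is the sky blue?");
console.log(response.content);
```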
Protected batchClient
Protected streamingClient

Optional kwargs: Partial<ChatAnthropicCallOptions>

Formats LangChain StructuredTools to AnthropicTools.
Parameters: The tools to format
Returns: The formatted tools, or undefined if none are passed.
Throws: If a mix of AnthropicTools and StructuredTools is passed.
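As a rough illustration (not the class's internal implementation), the conversion boils down to turning a tool's name, description, and Zod schema into the { name, description, input_schema } shape the Anthropic Messages API expects. StructuredToolLike and toAnthropicTool below are hypothetical names, and zod-to-json-schema is assumed for the schema translation.

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical minimal shape of a LangChain-style structured tool.
interface StructuredToolLike {
  name: string;
  description: string;
  schema: z.ZodTypeAny;
}

// Hypothetical helper: map a structured tool to Anthropic's tool format.
function toAnthropicTool(tool: StructuredToolLike) {
  return {
    name: tool.name,
    description: tool.description,
    input_schema: zodToJsonSchema(tool.schema),
  };
}

// Usage with a toy calculator tool.
const calculator: StructuredToolLike = {
  name: "calculator",
  description: "Adds two numbers",
  schema: z.object({ a: z.number(), b: z.number() }),
};
console.log(toAnthropicTool(calculator));
```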
Protected createStreamWithRetry: Creates a streaming request with retry.
Parameters: The parameters for creating a completion.
Optional options: AnthropicRequestOptions
Returns: A streaming request.
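This helper is internal; from user code, streamed output is usually consumed through the model's stream method. A sketch, with the model name as a placeholder:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const chat = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" });

// stream() yields message chunks as the API produces them.
const stream = await chat.stream("Write a haiku about the ocean.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```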
Protected get…
Wrapper around Anthropic large language models.
To use you should have the @anthropic-ai/sdk package installed, with the ANTHROPIC_API_KEY environment variable set.

Remarks
Any parameters that are valid to be passed to anthropic.messages can be passed through invocationKwargs, even if not explicitly available on this class.

Example
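A minimal sketch of such a call, assuming the @langchain/anthropic package; metadata is one parameter accepted by anthropic.messages, and the values are placeholders:

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

// Anything placed in invocationKwargs is forwarded to anthropic.messages,
// here the Messages API's metadata parameter.
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620", // placeholder model ID
  maxTokens: 256,
  invocationKwargs: {
    metadata: { user_id: "example-user-id" },
  },
});

const res = await model.invoke("Hello, how are you?");
console.log(res.content);
```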