The parameters that can be passed to the API at request time.

interface GoogleAIBaseLanguageModelCallOptions {
    convertSystemMessageToHumanContent?: boolean;
    maxOutputTokens?: number;
    model?: string;
    modelName?: string;
    responseMimeType?: "application/json" | "text/plain";
    safetyHandler?: GoogleAISafetyHandler;
    safetySettings?: GoogleAISafetySetting[];
    stopSequences?: string[];
    temperature?: number;
    tools?: StructuredToolInterface[] | GeminiTool[];
    topK?: number;
    topP?: number;
}
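
These options are typically supplied per request rather than at construction time, for example as the second argument to invoke(). A minimal sketch follows; it assumes ChatVertexAI from @langchain/google-vertexai as one model class that accepts these call options, and the option values are purely illustrative.

import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro" });

// Options passed to invoke() apply to this request only,
// overriding any defaults set on the model instance.
const response = await model.invoke("Summarize the plot of Hamlet.", {
  maxOutputTokens: 256,
  temperature: 0.2,
  stopSequences: ["THE END"],
});
console.log(response.content);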

Properties

convertSystemMessageToHumanContent?: boolean
maxOutputTokens?: number

Maximum number of tokens to generate in the completion.

model?: string

Model to use

modelName?: string

Model to use. Alias for model.

responseMimeType?: "application/json" | "text/plain"

Available for gemini-1.5-pro. The output format of the generated candidate text. Supported MIME types:

  • text/plain: Text output.
  • application/json: JSON response in the candidates.

Default: "text/plain"
safetyHandler?: GoogleAISafetyHandler
safetySettings?: GoogleAISafetySetting[]
stopSequences?: string[]
temperature?: number

Sampling temperature to use

tools?: StructuredToolInterface[] | GeminiTool[]
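
The tools option accepts either LangChain structured tools or raw Gemini tool declarations. As a hedged sketch of the structured-tool path, the example below uses the tool() helper from @langchain/core/tools with a zod schema; the get_weather tool, its behavior, and the ChatVertexAI class are assumptions for illustration.

import { ChatVertexAI } from "@langchain/google-vertexai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// A hypothetical tool; name, schema, and behavior are made up for this sketch.
const getWeather = tool(
  async ({ city }) => `It is sunny in ${city}.`,
  {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    schema: z.object({ city: z.string() }),
  }
);

const model = new ChatVertexAI({ model: "gemini-1.5-pro" });

// bind() pre-fills call options, so every later invoke() sends the
// tool declarations along with the request.
const modelWithTools = model.bind({ tools: [getWeather] });
const res = await modelWithTools.invoke("What's the weather in Paris?");
console.log(res.tool_calls);
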
topK?: number

Top-k changes how the model selects tokens for output.

A top-k of 1 means the selected token is the most probable among all tokens in the model’s vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).

topP?: number

Top-p changes how the model selects tokens for output.

Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.

For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature).
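
Since temperature, topK, and topP interact, a combined sketch may help; it again assumes ChatVertexAI, and the values are illustrative rather than recommendations.

import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({ model: "gemini-1.5-pro" });

const res = await model.invoke("Write a tagline for a coffee shop.", {
  temperature: 0.9, // higher values make token choices more varied
  topK: 40,         // consider only the 40 most probable tokens...
  topP: 0.95,       // ...then cut off at 0.95 cumulative probability
});
console.log(res.content);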
