Generate a summary based on the given text input.
Text Summarization AI Service Request.
A prompt template is a string that contains placeholders for parameters that will be replaced with parameter values before the prompt is submitted to the model.
A default prompt template is set for each model configured for the Text Summarization AI Service. Individual requests can override the default template by including the promptTemplate parameter.
The following request parameters are automatically injected into the prompt template if the associated placeholder is present:
Models with built-in support for system prompts and chat message history do not need to include system or chatContext in the prompt template.
Additional parameters can be provided in the parameters map as key-value pairs.
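The `${...}` placeholder syntax described above matches Python's `string.Template`, so the injection behavior can be sketched as follows. Only the `${system}` placeholder name appears in this documentation; the `${text}` placeholder and the template wording here are assumptions for illustration.

```python
from string import Template

# Hypothetical prompt template with ${...} placeholders. "${system}" is
# documented; "${text}" is an assumed placeholder name for the input text.
template = Template("${system}\n\nSummarize the following text:\n${text}")

# Parameters are injected only where a matching placeholder is present;
# safe_substitute leaves any unmatched placeholders in place untouched.
prompt = template.safe_substitute(
    system="You are a concise summarization assistant.",
    text="Domo AI Services expose text summarization over a simple API.",
)
print(prompt)
```

Because `safe_substitute` ignores missing keys, a model with built-in system-prompt support can simply use a template without `${system}` and pass the same parameter map unchanged.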
Text information to be summarized.
The AI session ID. If provided, this request will be associated with the specified AI Session.
The prompt template to use for the Text Summarization task. The default prompt template configured for the model will be used if not provided.
Custom parameters to inject into the prompt template if an associated placeholder is present.
The ID of the model to use for Text Summarization. The specified model must be configured for the Text Summarization AI Service by an Admin.
Additional model-specific configuration parameters as key-value pairs, e.g. temperature, max_tokens.
The system message to use for the Text Summarization task. If not provided, the default system message will be used. If the model does not include built-in support for system prompts, this parameter may be included in the prompt template using the "${system}" placeholder.
Configuration for dividing long text into smaller parts or chunks.
Determines the design, structure, and organization of the summarization output. Allowed values: bulleted, numbered, paragraph, unknown.
Defines a size boundary that limits the length of the output summary, based on the number of words.
Controls randomness in the model's output. Lower values make output more deterministic.
The maximum number of tokens to generate in the response.
Configuration for reasoning behavior and effort level.
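Putting the parameters above together, a request body might look like the sketch below. Only promptTemplate and the `${system}` placeholder are named explicitly in this documentation; every other key here (text, system, parameters, modelConfiguration) is a hypothetical guess at the schema, shown purely to illustrate how the pieces relate.

```python
import json

# Hypothetical Text Summarization request body. Field names other than
# "promptTemplate" are assumptions, not taken from the documentation.
payload = {
    "text": "Long article text to be summarized ...",
    "promptTemplate": "${system}\n\nSummarize:\n${text}",
    "system": "You are a concise summarization assistant.",
    # Custom parameters injected wherever a matching placeholder exists.
    "parameters": {"audience": "executives"},
    # Model-specific configuration passed through to the provider.
    "modelConfiguration": {"temperature": 0.2, "max_tokens": 256},
}

body = json.dumps(payload)
print(body)
```

A lower temperature, as set here, makes the summary more deterministic across repeated requests; max_tokens caps the response length at the token level, independently of the word-based output-size setting.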
TextAIResponse: The generated summary and model token usage information.
Response from a text AI Service.
The formatted prompt that was used to generate the response.
The list of choices generated by the model.
The ID of the model used to generate the response.
The ID of the AI Session associated with this request.
The output of the model.
The token usage from the model provider.
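The response fields above can be read back as sketched below. The JSON key names (prompt, choices, modelId, sessionId, output, usage) follow the field descriptions in this section but are assumptions about the exact wire format, not confirmed by the documentation.

```python
# Hypothetical TextAIResponse payload; key names are illustrative only.
response = {
    "modelId": "example-model",
    "sessionId": "example-session",
    "prompt": "Summarize:\n...",            # formatted prompt actually sent
    "choices": [{"output": "A short summary."}],
    "output": "A short summary.",           # model output
    "usage": {"prompt_tokens": 42, "completion_tokens": 9},
}

# Prefer the first choice if a choices list is present; fall back to the
# top-level output field otherwise.
choices = response.get("choices")
summary = choices[0]["output"] if choices else response["output"]
total_tokens = sum(response["usage"].values())
print(summary, total_tokens)
```

Tracking usage per request this way makes it straightforward to aggregate token consumption across an AI Session.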