POST /api/ai/v1/text/summarize
curl --request POST \
  --url https://{subdomain}.domo.com/api/ai/v1/text/summarize \
  --header 'Content-Type: application/json' \
  --header 'X-DOMO-Developer-Token: <api-key>' \
  --data '
{
  "input": "San Francisco, officially the City and County of San Francisco, is a commercial, financial, and cultural center in Northern California. With a population of 808,437 residents as of 2022, San Francisco is the fourth most populous city in the U.S. state of California. The city covers a land area of 46.9 square miles (121 square kilometers) at the end of the San Francisco Peninsula, making it the second-most densely populated large U.S. city after New York City and the fifth-most densely populated U.S. county, behind only four New York City boroughs. Among the 92 U.S. cities proper with over 250,000 residents, San Francisco is ranked first by per capita income and sixth by aggregate income as of 2022."
}
'
{
  "prompt": "Write a 5 to 10 words summary of the following text. ```...``` CONCISE SUMMARY:",
  "output": "Vibrant, densely populated commercial and cultural hub in Northern California.",
  "modelId": "domo.domo_ai.domogpt-summarize-v1:anthropic"
}
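The curl request above can be reproduced in Python. The sketch below only builds the URL, headers, and JSON body; the subdomain and token values are placeholders, and the actual network call (which requires a valid Domo instance and developer token) is left commented out.

```python
import json

# Placeholder values -- substitute your Domo instance subdomain and developer token.
SUBDOMAIN = "example"
API_KEY = "<api-key>"

url = f"https://{SUBDOMAIN}.domo.com/api/ai/v1/text/summarize"
headers = {
    "Content-Type": "application/json",
    "X-DOMO-Developer-Token": API_KEY,
}
payload = {
    "input": "San Francisco is a commercial, financial, and cultural center in Northern California."
}
body = json.dumps(payload)

# To actually send the request (requires the `requests` package and network access):
# import requests
# response = requests.post(url, headers=headers, data=body)
# summary = response.json()["output"]
```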


Authorizations

X-DOMO-Developer-Token
string
header
required

Body

application/json

Text Summarization AI Service request.

Prompt Templates

A prompt template is a string that contains placeholders for parameters that will be replaced with parameter values before the prompt is submitted to the model.

A default prompt template is set for each model configured for the Text Summarization AI Service. Individual requests can override the default template by including the promptTemplate parameter.

Prompt Template Parameters

The following request parameters are automatically injected into the prompt template if the associated placeholder is present:

  • input
  • system

Models with built-in support for system prompts and chat message history do not need to include system or chatContext in the prompt template.

Additional parameters can be provided in the parameters map as key-value pairs.

Prompt Template Examples

  • "${input}"
  • "${system}\n${input}"
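The `${...}` placeholder syntax in the examples above matches Python's `string.Template`. As an illustration of how parameter injection behaves (the service's actual templating engine is not specified in this reference), the second example template can be filled in like this:

```python
from string import Template

# Hypothetical illustration of placeholder injection; the service's
# real templating implementation is not documented here.
prompt_template = "${system}\n${input} CONCISE SUMMARY:"
values = {
    "system": "You are a helpful summarizer.",
    "input": "San Francisco is a cultural center in Northern California.",
}

# Each ${placeholder} is replaced with the matching parameter value
# before the prompt would be submitted to the model.
prompt = Template(prompt_template).substitute(values)
```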
input
string
required

Text information to be summarized.

sessionId
string<uuid>

The AI session ID. If provided, this request will be associated with the specified AI Session.

promptTemplate
object

The prompt template to use for the Text Summarization task. The default prompt template configured for the model will be used if not provided.

parameters
object

Custom parameters to inject into the prompt template if an associated placeholder is present.

model
string

The ID of the model to use for Text Summarization. The specified model must be configured for the Text Summarization AI Service by an Admin.

modelConfiguration
object

Additional model-specific configuration parameter key-value pairs. e.g. temperature, max_tokens, etc.

system
string

The system message to use for the Text Summarization task. If not provided, the default system message will be used. If the model does not include built-in support for system prompts, this parameter may be included in the prompt template using the "${system}" placeholder.

chunkingConfiguration
object

Configuration for dividing long text into smaller parts or chunks.
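To make the idea of chunking concrete, here is a minimal word-based chunker. This is purely illustrative: the service's actual chunking strategy and the keys accepted inside `chunkingConfiguration` are not enumerated in this reference.

```python
# Hypothetical word-based chunking, for illustration only; the service's
# actual chunking algorithm and configuration keys are not documented here.
def chunk_words(text: str, max_words: int) -> list[str]:
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

chunks = chunk_words("one two three four five six seven", 3)
```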

outputStyle
enum<string>

Determines the structure and organization of the summarization output.

Available options:
bulleted,
numbered,
paragraph,
unknown
outputWordLength
object

Defines a size boundary that limits the length of the output summary, based on the number of words.
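A request body combining these output controls might look like the sketch below. Note that the inner key of `outputWordLength` (`"maximum"` here) is an assumption for illustration; this reference does not enumerate that object's fields.

```python
import json

# Sketch of a request body using the documented output controls.
# The "maximum" key inside outputWordLength is an assumption; the
# object's fields are not enumerated in this reference.
payload = {
    "input": "Long text to summarize...",
    "outputStyle": "bulleted",            # one of: bulleted, numbered, paragraph, unknown
    "outputWordLength": {"maximum": 50},  # hypothetical field name
    "temperature": 0.2,
    "maxTokens": 256,
}
body = json.dumps(payload)
```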

temperature
number<double>

Controls randomness in the model's output. Lower values make output more deterministic.

maxTokens
integer<int32>

The maximum number of tokens to generate in the response.

reasoningConfig
object

Configuration for reasoning behavior and effort level.

Response

TextAIResponse: the generated summary and model token usage information.

Response from a text AI Service.

prompt
string

The formatted prompt that was used to generate the response.

choices
object[]
deprecated

The list of choices generated by the model.

modelId
string

The ID of the model used to generate the response.

sessionId
string<uuid>

The ID of the AI Session associated with this request.

output
string

The output of the model.

modelProviderUsage
object

The token usage from the model provider.