Storm OpenAPI (en)
Node Description

LLM

LLM Node

The LLM (Large Language Model) node is responsible for generating responses based on user input using a language model. It performs natural language processing tasks according to the agent’s purpose.

LLM Model

Select the LLM model that the agent will use to generate responses.
Available models may vary depending on your billing plan (for details, please contact the sales team).
Refer to the following model guides to select the LLM best suited for your service:
  • GPT
  • OpenAI
  • Azure
  • Claude
  • Anthropic
  • Vertex
  • HyperCLOVA
  • Gemini
  • Llama
  • Xionic
Xionic is Sionic’s proprietary model. (Contact the sales team if you wish to use it.)

LLM Parameters

Top P
Top P controls the randomness of the generated text via nucleus sampling: the model samples only from the smallest set of candidate tokens whose cumulative probability reaches P.
For example, with Top P = 0.9, sampling is restricted to the tokens that together account for the top 90% of the probability mass.
A higher Top P (closer to 1) results in more diverse and creative outputs.
Maximum Length
The maximum number of tokens the model may generate in a single response.
Temperature
Controls sampling randomness: lower values make the output more deterministic and focused, higher values make it more varied.
Frequency Penalty
Penalizes tokens in proportion to how often they have already appeared, reducing verbatim repetition.
Presence Penalty
Penalizes tokens that have appeared at all, encouraging the model to introduce new topics.
Stop Sequences
One or more strings that, when generated, immediately end the response.
Seed
A fixed value that makes sampling reproducible across runs (where supported).
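The parameters above can be pictured as a single request payload. A minimal sketch, assuming field names that follow common chat-completion conventions — the exact names and endpoint used by STORM may differ:

```python
# Hypothetical LLM parameter payload; field names are assumptions
# based on common chat-completion APIs, not STORM's exact schema.
llm_params = {
    "top_p": 0.9,              # nucleus sampling threshold
    "max_tokens": 512,         # maximum length of the response
    "temperature": 0.7,        # sampling randomness
    "frequency_penalty": 0.0,  # discourage repeating frequent tokens
    "presence_penalty": 0.0,   # discourage reusing tokens already present
    "stop": ["\n\n"],          # stop sequences that end generation
    "seed": 42,                # reproducible sampling (where supported)
}
```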

Prompt Configuration

Customize the instruction prompts used by the model during generation.
Prompts are divided into SYSTEM and USER roles:
SYSTEM Prompt: Defines the model’s tone, behavior, and personality.
It provides overarching guidance for how the model should respond.
USER Prompt: Contains the actual user query or task request.
The model generates responses based on this input.
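The SYSTEM/USER split above maps naturally onto a chat-style message list. A minimal sketch using common role-name conventions (the exact prompt fields in the STORM console may differ, and the prompt text is an example):

```python
# SYSTEM defines tone/behavior; USER carries the actual query.
# Role names follow common chat-completion conventions, not
# necessarily STORM's internal format.
messages = [
    {"role": "system",
     "content": "You are a friendly, concise support agent. "
                "Answer in two sentences or fewer."},
    {"role": "user",
     "content": "How do I reset my password?"},
]
```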

Additional Features


Function Call

Allows the model to call predefined functions for integration with external systems.
Each function requires parameters such as name, description, and properties.
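A sketch of what such a function definition might look like. `get_weather` is a hypothetical function, and the schema style mirrors the JSON-Schema format widely used for function calling; STORM's exact format may differ:

```python
# Hypothetical function definition showing the required fields:
# name, description, and properties (with a JSON-Schema-style body).
get_weather = {
    "name": "get_weather",
    "description": "Look up the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Seoul"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```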

Knowledge Chunk Format

Defines how retrieved knowledge is passed into the LLM.
{{Index}} — Automatically numbers retrieved items.
{{Filename}} — Displays the source file name.
{{Page}} — Shows page metadata (may not be present for all documents).
{{Content}} — Includes the main retrieved content.
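To make the substitution concrete, here is a sketch of how a chunk template using these placeholders could be rendered. `render_chunk`, the template string, and the sample chunk are all hypothetical illustrations, not STORM internals:

```python
# Hypothetical rendering of a knowledge-chunk template.
# The placeholder names come from the list above.
template = "[{{Index}}] {{Filename}} (p.{{Page}})\n{{Content}}"

def render_chunk(template: str, chunk: dict) -> str:
    """Substitute each {{Key}} placeholder with the chunk's value."""
    out = template
    for key, value in chunk.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out

chunk = {"Index": 1, "Filename": "manual.pdf", "Page": 12,
         "Content": "To reset your password, open Settings > Account."}
print(render_chunk(template, chunk))
# → [1] manual.pdf (p.12)
#   To reset your password, open Settings > Account.
```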

History Turn Count

Specifies how many previous conversation turns the model can reference.
A higher value improves contextual understanding but increases token usage.
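One way to picture the turn limit: before each request, the conversation is trimmed to the most recent N exchanges. `trim_history` is a hypothetical helper, and "one turn" is assumed here to mean one user message plus one assistant reply:

```python
# Hypothetical sketch of applying a history-turn limit;
# one turn = one user message + one assistant reply (assumption).
def trim_history(messages, max_turns):
    """Keep only the last max_turns user/assistant exchanges."""
    if max_turns <= 0:
        return []
    return messages[-(max_turns * 2):]

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello! How can I help?"},
    {"role": "user", "content": "What does the LLM node do?"},
    {"role": "assistant", "content": "It generates responses with a language model."},
]

# With a turn count of 1, only the most recent exchange is kept.
print(len(trim_history(history, 1)))  # → 2
```

A higher turn count keeps more of this list in the prompt, which improves context at the cost of more tokens per request.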
Modified at 2025-10-20 05:46:50