Storm OpenAPI(en)
  1. Core Scenario

Test

You can test your agent's learned content and provide feedback to improve its performance.

Enter user-like queries in the chat input at the bottom to simulate real interactions.
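The same user-like queries can also be sent programmatically through the Send Chat API listed under Apis, which is convenient for scripted regression tests. Below is a minimal sketch of assembling such a request body; the field names (`agent_id`, `message`, `stream`) are assumptions for illustration, not the documented Send Chat schema:

```python
import json

def build_chat_request(agent_id: str, message: str, stream: bool = False) -> dict:
    """Assemble a hypothetical Send Chat request body.

    The field names used here are assumptions for illustration;
    consult the Send Chat (non-stream/stream) API reference for
    the actual schema.
    """
    return {
        "agent_id": agent_id,   # assumed field: which agent to test
        "message": message,     # the user-like test query
        "stream": stream,       # False = non-stream, True = stream variant
    }

payload = build_chat_request("demo-agent", "What is your refund policy?")
print(json.dumps(payload, ensure_ascii=False))
```

Sending each query from a script rather than the chat input makes it easy to replay the same test set after every change to the agent.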



Dev ↔ Live Environment

You can switch between Dev and Live modes for testing.
The Live environment becomes available only after deployment and allows you to test the deployed version of your agent.
For detailed information, see Deployment Management.
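If you drive tests through the API rather than the console, the Dev/Live switch can be modeled as a request parameter. A minimal sketch, assuming a hypothetical `environment` field that is not part of the documented schema:

```python
def build_env_chat_request(agent_id: str, message: str,
                           environment: str = "dev") -> dict:
    """Hypothetical request body with an environment selector.

    'live' is only meaningful after the agent has been deployed,
    mirroring the console behavior described above. The field
    name is an assumption for illustration.
    """
    if environment not in ("dev", "live"):
        raise ValueError("environment must be 'dev' or 'live'")
    return {"agent_id": agent_id, "message": message,
            "environment": environment}

req = build_env_chat_request("demo-agent", "Hello", environment="dev")
```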



Folder-Specific Testing

You can restrict testing to the documents in a specific folder.
In that case, the system references and searches only the documents stored in the selected folder.
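In API terms, folder-specific testing amounts to restricting retrieval to a single folder. A minimal sketch, assuming a hypothetical `folder_id` filter field:

```python
from typing import Optional

def build_scoped_chat_request(agent_id: str, message: str,
                              folder_id: Optional[str] = None) -> dict:
    """Hypothetical request body; when folder_id is set, retrieval
    is limited to documents in that folder (field name assumed)."""
    body = {"agent_id": agent_id, "message": message}
    if folder_id is not None:
        body["folder_id"] = folder_id
    return body

scoped = build_scoped_chat_request("demo-agent",
                                   "Summarize the onboarding guide",
                                   folder_id="folder-123")
```

Omitting the filter falls back to searching all of the agent's documents, matching the default console behavior.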



Viewing Answer Sources

The Answer Source feature allows you to verify which data the LLM used to generate its response.
This is useful for assessing the reliability and accuracy of responses during testing and validation.
Source Chunks
Displays a labeled list of the knowledge chunks used in the response, so you can identify exactly which data segments contributed to the answer.
Each chunk is assigned a unique number; clicking the number opens the original text where that chunk appears, enabling quick context review.
Source Documents
You can view the original document from which a chunk was extracted in the built-in document viewer.
This is especially useful when the text chunks alone don't provide sufficient context.
Supported for PDF files only.
Detailed Logs
Includes the search results retrieved during response generation, the prompt sent to the model, and the composition of the final response.
Useful for debugging or analyzing issues found during testing.
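When testing is scripted, the same source information can be pulled out of a response for automated checks. A minimal sketch, assuming a hypothetical response body whose `sources` list carries numbered chunks; the actual schema may differ, so consult the Search Context API reference:

```python
def list_source_chunks(response_body: dict) -> list:
    """Render each numbered source chunk as '[n] document: text preview'.

    The 'sources' structure is an assumption for illustration,
    not the documented response schema.
    """
    chunks = response_body.get("sources", [])
    return [f'[{c["number"]}] {c["document"]}: {c["text"][:60]}'
            for c in chunks]

# Example response shaped per the assumptions above.
example = {
    "answer": "Refunds are processed within 7 days.",
    "sources": [
        {"number": 1,
         "document": "refund_policy.pdf",
         "text": "Refunds are processed within 7 days of request."},
    ],
}
lines = list_source_chunks(example)
```

A check like this lets a test suite assert not only that an answer is correct, but that it was grounded in the expected document.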
Modified at 2025-10-20 05:32:37