
Testing#

Purpose of Testing#

The testing feature is a critical step for verifying that the service operates as intended before it is made available to end users. It serves the following purposes:
1. Functionality Verification: Checks whether each feature of the platform works as designed. This is the foundation for ensuring system stability and reliability.
2. Knowledge Search Performance Check: Evaluates whether the system can accurately retrieve and provide relevant knowledge or information based on user queries. This step validates the precision and efficiency of knowledge-based features.
3. Prompt Interpretation and Execution Verification: Confirms that the system correctly interprets the intent behind prompts and performs the requested tasks properly. This ensures accurate communication and logical system responses.
4. Error Analysis and Improvement Feedback: Identifies abnormal behaviors or errors found during testing, analyzes their causes, and feeds the findings back for improvement. This supports continuous enhancement of system quality.
Testing is an essential preparatory step to ensure a stable and trustworthy experience for users.

How to Test#

Enter text in the chat window as if you were a real user, and review the agent's responses.
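This check can also be scripted. Below is a minimal sketch of sending a single test question over HTTP and printing the reply. The base URL, the /chat path, the header, and the payload field names are assumptions made here for illustration, not the documented contract; refer to Send Chat (non-stream) under Apis for the actual request format.

```python
import requests

# All values below are illustrative assumptions, not the documented API.
# See "Send Chat (non-stream)" in the API reference for the real contract.
BASE_URL = "https://storm.example.com/api"   # assumed base URL
API_KEY = "YOUR_API_KEY"                     # assumed auth scheme
AGENT_ID = "my-agent"                        # assumed agent identifier

def send_test_message(question: str) -> dict:
    """Send one test question to the agent and return the parsed response."""
    resp = requests.post(
        f"{BASE_URL}/chat",                               # assumed path
        headers={"Authorization": f"Bearer {API_KEY}"},   # assumed header
        json={"agentId": AGENT_ID, "message": question},  # assumed fields
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    answer = send_test_message("What is covered by the standard warranty?")
    print(answer)
```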

Dev ↔ Live Environment Settings#

You can test the agent separately in Dev or Live environments.
The Live environment becomes available after deployment, allowing you to test the deployed version.
For detailed information, refer to Deployment Management.
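If you drive tests from a script, the environment can simply be treated as a test parameter. The sketch below only encodes the rule stated above (Dev is always testable, Live only after deployment); the names and structure are assumptions for illustration, not part of the product API.

```python
# Illustrative only: the dictionary and helper below are made up to encode the
# rule "Live is testable only after deployment"; they are not a STORM API.
ENVIRONMENTS = {
    "dev": {"requires_deployment": False},
    "live": {"requires_deployment": True},
}

def testable_environments(agent_is_deployed: bool) -> list[str]:
    """Return the environments that can be tested right now."""
    return [
        name for name, cfg in ENVIRONMENTS.items()
        if agent_is_deployed or not cfg["requires_deployment"]
    ]

print(testable_environments(agent_is_deployed=False))  # ['dev']
print(testable_environments(agent_is_deployed=True))   # ['dev', 'live']
```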

Selecting a Folder#

You can limit testing to a specific folder.
When selected, only documents stored within that folder are searched and utilized during testing.
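Conceptually, selecting a folder narrows the search scope to that folder's documents. The sketch below illustrates this filtering idea with made-up data; the document names, folder names, and helper function are assumptions for illustration, not an actual STORM API.

```python
from typing import Optional

# Illustrative only: folder scoping behaves like filtering the candidate
# documents by folder before retrieval runs.
documents = [
    {"name": "refund_policy.pdf", "folder": "policies"},
    {"name": "onboarding_guide.pdf", "folder": "hr"},
    {"name": "warranty_terms.pdf", "folder": "policies"},
]

def documents_in_scope(folder: Optional[str]) -> list[str]:
    """With a folder selected, only that folder's documents are searchable."""
    return [d["name"] for d in documents if folder is None or d["folder"] == folder]

print(documents_in_scope(None))        # all documents
print(documents_in_scope("policies"))  # ['refund_policy.pdf', 'warranty_terms.pdf']
```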

Checking Answer Sources#

The Answer Source feature helps verify which data the LLM used to generate its response, ensuring answer credibility and accuracy during testing and validation.
Source Chunks
• Displays a labeled list of the knowledge chunks used in the answer, so you can identify which data sources contributed to the response.
• Each chunk is assigned a unique number; clicking it opens the corresponding original text for context review.
Source Documents
• Shows, in an integrated viewer, the original document from which a knowledge chunk was extracted.
• Particularly useful when a text-only chunk is difficult to interpret on its own.
• Supported only for PDF files.
Detailed Logs
• Displays the retrieval results, prompts, and response-generation flow at the time the answer was produced.
• Useful for debugging and analyzing issues identified during testing.
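If you capture chat responses outside the console, the same source information can be inspected programmatically. The response shape used below (a sources list with chunkId, documentName, and text fields) is an assumption made for illustration; the Search Context and Request RAG Source For Query APIs define what is actually returned.

```python
def print_answer_sources(chat_response: dict) -> None:
    """Print the chunks behind an answer.

    The 'sources' structure below is an assumed shape for illustration;
    consult the Search Context / RAG source APIs for the real fields.
    """
    for source in chat_response.get("sources", []):
        chunk_id = source.get("chunkId")
        doc_name = source.get("documentName")
        preview = (source.get("text") or "")[:80]
        print(f"[{chunk_id}] {doc_name}: {preview}...")

# Example with a hand-made response in the assumed shape:
example = {
    "answer": "Refunds are available within 30 days.",
    "sources": [
        {"chunkId": 3, "documentName": "refund_policy.pdf",
         "text": "Customers may request a refund within 30 days of purchase."},
    ],
}
print_answer_sources(example)
```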

Feedback#

The feedback feature evaluates the quality of an agent’s responses during testing and provides data for performance improvement.
Users can rate each response as “Good 👍” or “Needs Improvement 👎”, each serving a specific purpose:
Good 👍
• Select when the agent retrieves and generates an appropriate, accurate, and meaningful response.
• Record when the response aligns well with the user's prompt or expectations.
• Collected “Good” feedback is later used as training data for fine-tuning RAG embedding models. Model fine-tuning support is available on request through the Sales Team.
Needs Improvement 👎
• Select when the agent fails to meet expectations or provides an inaccurate or irrelevant response.
• Example cases: relevant information could not be retrieved, the knowledge base lacked the necessary data, or the LLM misinterpreted the retrieved information.
• When a specific question has a known “correct” answer, it can be registered manually to improve accuracy: administrators can add ideal answers, allowing precise control over future responses.
The feedback function goes beyond simple evaluation; it is a key mechanism for systematically improving agent performance. Actively using feedback helps build more precise and reliable responses.
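As a rough illustration of how rated responses can become training data, the sketch below keeps each rating as a small record and exports only the “Good” examples as JSONL. The record fields and the output format are assumptions for illustration; the actual fine-tuning pipeline is arranged through the Sales Team and Model Fine-Tuning.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    """One rated test exchange; field names are illustrative assumptions."""
    question: str
    answer: str
    rating: str                 # "good" or "needs_improvement"
    ideal_answer: str = ""      # filled in by an administrator when known

def export_good_examples(records: list[FeedbackRecord], path: str) -> int:
    """Write only the 'good' exchanges to a JSONL file and return the count."""
    good = [r for r in records if r.rating == "good"]
    with open(path, "w", encoding="utf-8") as f:
        for r in good:
            f.write(json.dumps(asdict(r), ensure_ascii=False) + "\n")
    return len(good)

records = [
    FeedbackRecord("What is the refund window?", "30 days.", "good"),
    FeedbackRecord("Who is the CEO?", "I don't know.", "needs_improvement",
                   ideal_answer="The CEO is listed in company_profile.pdf."),
]
print(export_good_examples(records, "good_feedback.jsonl"))
```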

When Expected Answers Are Not Returned#

When a System Error Message Appears#

Click the “Report Error” button in the top-right corner.
The issue will be directly sent to the technical support team.

When Responses Like “I don’t know” or Incorrect Answers Appear#

Contact the technical support team with the following details:
• Chat ID
• Expected answer
• Document(s) and content that should have been referenced
Alternatively, you can investigate the cause yourself:
1. Check the Answer Source panel (a scripted version of this check is sketched after this list).
2. If the expected document is listed as a source, the issue may be related to LLM reasoning: try adjusting the LLM model, parameters, or prompts.
3. If the expected document is not included:
• Confirm whether the related source document has been trained. If it has not been trained, the agent cannot retrieve the relevant knowledge → upload and train the missing document.
• The document may exist but not be included in the current agent version → try switching between Dev/Live or redeploying the Live version.
• If parsing failed during upload (e.g., via STORM Parse), verify and correct the conversion results.
4. If the document is trained but retrieval still fails, use the Feedback feature to inject the correct knowledge directly into the agent and improve accuracy.
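The first self-check above, whether the expected document appears among the answer sources, is easy to automate once you have a chat response in hand. The sketch below reuses the illustrative sources shape assumed earlier on this page; the real response fields are defined by the Chat and RAG source APIs.

```python
def expected_document_used(chat_response: dict, expected_document: str) -> bool:
    """Return True if the expected document appears among the answer sources.

    Assumes the illustrative 'sources'/'documentName' shape used earlier on
    this page; the real response fields are defined by the Chat API.
    """
    names = {
        (s.get("documentName") or "").lower()
        for s in chat_response.get("sources", [])
    }
    return expected_document.lower() in names

response = {"answer": "...", "sources": [{"documentName": "refund_policy.pdf"}]}
if expected_document_used(response, "refund_policy.pdf"):
    print("Document retrieved: the issue is likely on the LLM reasoning side.")
else:
    print("Document not retrieved: check training, agent version, or parsing.")
```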