Testing#
Purpose of Testing#
The testing feature is a critical step for verifying that the service operates as intended before it is made available to end users. It serves the following purposes:

1. Functionality Verification
Checks whether each feature of the platform works as designed. This is the foundation for ensuring system stability and reliability.

2. Knowledge Search Performance Check
Evaluates whether the system can accurately retrieve and provide relevant knowledge or information based on user queries. This step validates the precision and efficiency of knowledge-based features.

3. Prompt Interpretation and Execution Verification
Confirms that the system correctly interprets the intent behind prompts and performs the requested tasks properly. This ensures accurate communication and logical system responses.

4. Error Analysis and Improvement Feedback
Identifies abnormal behaviors or errors found during testing, analyzes their causes, and feeds the findings back into improvements. This supports continuous enhancement of system quality.
Testing is an essential preparatory step to ensure a stable and trustworthy experience for users.

How to Test#
Enter text in the chat window below as if you were a real user.
Dev ↔ Live Environment Settings#
You can test the agent separately in Dev or Live environments.
The Live environment becomes available after deployment, allowing you to test the deployed version.

Selecting a Folder#
You can limit testing to a specific folder.
When selected, only documents stored within that folder are searched and utilized during testing.
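The folder-scoping rule above can be sketched as follows. This is an illustrative sketch only, with a hypothetical data model; the platform applies the filter internally when you select a folder in the UI.

```python
# Hypothetical document records; "folder" models where each document is stored.
documents = [
    {"title": "hr-policy.pdf",   "folder": "HR"},
    {"title": "expense-faq.pdf", "folder": "Finance"},
    {"title": "onboarding.pdf",  "folder": "HR"},
]

def searchable_documents(docs, folder=None):
    """Return the documents eligible for retrieval during a test.

    With no folder selected, every document is searched; with a folder
    selected, only documents stored in that folder are used.
    """
    if folder is None:
        return docs
    return [d for d in docs if d["folder"] == folder]

# Scoped to the "HR" folder, only the two HR documents remain searchable.
print([d["title"] for d in searchable_documents(documents, "HR")])
```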
Checking Answer Sources#
The Answer Source feature helps verify which data the LLM used to generate its response, ensuring answer credibility and accuracy during testing and validation. It provides the following views:

- Displays a labeled list of the knowledge chunks used in the answer, so you can identify which data sources contributed to the response. Each chunk is assigned a unique number, and clicking it opens the corresponding original text for context review.
- Shows the original document from which a knowledge chunk was extracted in an integrated viewer. This is particularly useful when a text-only chunk is difficult to understand on its own. Supported only for PDF files.
- Displays the retrieval results, prompts, and response-generation flow at the time of answering. This is useful for debugging and analyzing issues identified during testing.
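The numbered-chunk behavior can be modeled as below. This is an illustrative sketch only; the `Chunk` structure and `open_chunk` helper are hypothetical names, not the platform's real API.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    number: int   # unique label shown next to the answer
    source: str   # document the chunk was extracted from
    text: str     # the extracted passage itself

# Hypothetical chunks cited by one answer.
chunks = [
    Chunk(1, "vacation-policy.pdf", "Employees accrue 15 days per year."),
    Chunk(2, "hr-handbook.pdf", "Unused days carry over up to 5 days."),
]

def open_chunk(chunks, number):
    """Simulate clicking a chunk number: resolve it to its original text."""
    for c in chunks:
        if c.number == number:
            return f"{c.source}: {c.text}"
    raise KeyError(number)

print(open_chunk(chunks, 2))
```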
Feedback#
The feedback feature evaluates the quality of an agent’s responses during testing and provides data for performance improvement. Users can rate each response as “Good 👍” or “Needs Improvement 👎”, each serving a specific purpose.

“Good 👍”
- Selected when the agent retrieves and generates appropriate, accurate, and meaningful responses, or when the response aligns well with user prompts or expectations.
- Collected “Good” feedback is later used as training data for fine-tuning RAG embedding models. Model fine-tuning support is available upon request through the Sales Team.

“Needs Improvement 👎”
- Selected when the agent fails to meet expectations or provides inaccurate or irrelevant responses, for example when the system could not retrieve relevant information, the knowledge base lacked the necessary data, or the LLM misinterpreted the retrieved information.
- When a specific question has a known “correct” answer, users can register it manually to improve accuracy. Administrators can add ideal answers manually, allowing precise control of future responses.
The feedback function goes beyond simple evaluation — it’s a key mechanism for systematically improving agent performance. Actively using feedback helps build more precise and reliable responses.
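The two feedback paths above can be sketched in code. This is an illustrative sketch only, assuming hypothetical names: “Good” ratings are collected as candidate fine-tuning data, and an administrator-registered ideal answer takes precedence over generation for a known question.

```python
# Hypothetical feedback log gathered during testing.
feedback_log = [
    {"question": "vacation days?", "answer": "15 days", "rating": "good"},
    {"question": "sick leave?",    "answer": "unsure",  "rating": "needs_improvement"},
]

# "Good" feedback becomes candidate training data for embedding fine-tuning.
fine_tuning_data = [f for f in feedback_log if f["rating"] == "good"]

registered_answers = {}  # question -> ideal answer added by an administrator

def register_answer(question, ideal_answer):
    registered_answers[question] = ideal_answer

def answer(question, generate):
    """Prefer a registered ideal answer; otherwise fall back to generation."""
    if question in registered_answers:
        return registered_answers[question]
    return generate(question)

register_answer("sick leave?", "10 paid sick days per year")
print(answer("sick leave?", lambda q: "model-generated answer"))
```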
When Expected Answers Are Not Returned#
When a System Error Message Appears#
Click the “Report Error” button in the top-right corner.
The issue will be directly sent to the technical support team.
When Responses Like “I don’t know” or Incorrect Answers Appear#
Contact the technical support team with the following details:
- The document(s) and content that should have been referenced
Alternatively, you can investigate the cause yourself by checking the Answer Source panel.

If the expected document is listed as a source:
- The issue may be related to LLM reasoning. Try adjusting the LLM model, parameters, or prompts.

If the expected document is not included:
- Confirm whether the related source document has been trained. If it has not been trained, the agent cannot retrieve the relevant knowledge. → Upload and train the missing document.
- The document may exist but not be included in the current agent version. → Try switching between Dev/Live or redeploy the Live version.
- If parsing failed during upload (e.g., via STORM Parse), verify and correct the conversion results.

If the document is trained but retrieval still fails:
- Use the Feedback feature to directly inject the correct knowledge into the agent for improved accuracy.
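The self-investigation steps above form a decision tree, sketched below. This is an illustrative sketch only; the inputs describe what you observe in the Answer Source panel, and the function names are hypothetical.

```python
def diagnose(doc_in_sources: bool, doc_trained: bool,
             doc_in_current_version: bool, parsing_ok: bool) -> str:
    """Map observations from the Answer Source panel to a next step."""
    if doc_in_sources:
        # The right document was retrieved, so the problem is reasoning.
        return "Likely LLM reasoning: adjust the model, parameters, or prompts."
    if not doc_trained:
        return "Upload and train the missing document."
    if not doc_in_current_version:
        return "Switch Dev/Live or redeploy the Live version."
    if not parsing_ok:
        return "Verify and correct the parsing (conversion) results."
    # Trained, deployed, parsed, yet still not retrieved.
    return "Use the Feedback feature to register the correct knowledge."

print(diagnose(doc_in_sources=False, doc_trained=False,
               doc_in_current_version=True, parsing_ok=True))
```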
Modified at 2025-10-20 05:53:21