# Data Analysis
The **Overview -- Data Analysis** section displays metrics such as usage, active users, and LLM (Large Language Model) invocation costs, helping you continuously improve the effectiveness, engagement, and cost-efficiency of your application operations. We will gradually provide more useful visualization capabilities, so please let us know what you need.
<figure><img src="/en/.gitbook/assets/guides/monitoring/analysis/image (6) (1) (1) (1).png" alt=""><figcaption><p>Overview—Data Analysis</p></figcaption></figure>
***
**Total Messages**
Reflects the total number of daily AI interactions; each answered user question counts as one message. Prompt orchestration and debugging sessions are not included.
**Active Users**
The number of unique users who have had effective interactions with the AI, defined as having more than one question-and-answer exchange. Prompt orchestration and debugging sessions are not included.
**Average Session Interactions**
Reflects the average number of consecutive interactions per user session. For example, if a user has a 10-round Q\&A with the AI, it is counted as 10. This metric reflects user engagement and is available only for conversational applications.
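
As a rough illustration of how such an average is derived (the function name and sample data below are ours, not part of the product):

```python
def average_session_interactions(rounds_per_session: list[int]) -> float:
    """Mean number of Q&A rounds across sessions (hypothetical helper)."""
    if not rounds_per_session:
        return 0.0
    return sum(rounds_per_session) / len(rounds_per_session)

# Three sessions with 10, 4, and 6 rounds average to about 6.67 interactions.
print(round(average_session_interactions([10, 4, 6]), 2))  # 6.67
```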
**Token Output Speed**
The number of tokens output per second, indirectly reflecting the model's generation rate and the application's usage frequency.
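
A minimal sketch of the underlying ratio (the helper below is hypothetical, for illustration only):

```python
def token_output_speed(tokens_generated: int, elapsed_seconds: float) -> float:
    """Tokens emitted per second of generation time (hypothetical helper)."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    return tokens_generated / elapsed_seconds

# 512 tokens generated in 8 seconds -> 64 tokens per second.
print(token_output_speed(512, 8.0))  # 64.0
```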
**User Satisfaction Rate**
The number of likes per 1,000 messages, indicating the proportion of answers that users are highly satisfied with.
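
To make the per-1,000 normalization concrete, here is a hypothetical calculation (the function and numbers are ours, not the product's):

```python
def satisfaction_rate(likes: int, total_messages: int) -> float:
    """Likes per 1,000 messages (hypothetical helper)."""
    if total_messages == 0:
        return 0.0
    return likes / total_messages * 1000

# 45 likes across 30,000 messages -> a rate of 1.5 likes per 1,000 messages.
print(satisfaction_rate(45, 30_000))  # 1.5
```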
**Token Usage**
Reflects the daily token expenditure for language model requests by the application, useful for cost control.