
When selecting a model, a name tagged CHAT on the right is a chat model. A chat model takes a list of messages as input and returns a generated message as output. Although the chat format is designed to simplify multi-turn conversations, it is also useful for single-turn tasks with no conversation at all. Chat messages serve as both input and output, and come in three types: SYSTEM, USER, and ASSISTANT:

* `SYSTEM`
  * System messages help set the behavior of the AI assistant. For example, you can modify the AI assistant's personality or provide specific instructions on how it should behave throughout the conversation. System messages are optional, and the model's behavior without system messages may be similar to using a generic message like "You are a helpful assistant."
* `USER`
  * User messages provide requests or comments for the AI assistant to respond to.
* `ASSISTANT`
  * Assistant messages store previous assistant responses but can also be written by you to provide examples of the desired behavior.
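
As a sketch, a single chat request combining all three message types might look like the following. The `role`/`content` dictionary structure here is an illustrative assumption (common to OpenAI-style chat APIs); exact field names vary by model provider:

```python
# Illustrative only: many chat model APIs accept messages in a
# role/content structure like this (field names vary by provider).
messages = [
    # SYSTEM (optional): sets the assistant's behavior for the whole conversation.
    {"role": "system", "content": "You are a helpful assistant."},
    # USER: a request for the assistant to respond to.
    {"role": "user", "content": "Translate 'good morning' into French."},
    # ASSISTANT: a previous response, or a hand-written example of desired behavior.
    {"role": "assistant", "content": "Bonjour."},
    # The latest USER turn, which the model will answer next.
    {"role": "user", "content": "Now translate it into Spanish."},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'user']
```

Note that the hand-written ASSISTANT turn acts as a few-shot example: it shows the model the response style you expect before the next user turn.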
* **Stop Sequences**

  These are specific words, phrases, or characters used to signal the LLM to stop generating text.
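The effect of a stop sequence can be sketched with a small local simulation of the truncation behavior (this is not any particular provider's API, just an illustration):

```python
def apply_stop_sequences(text: str, stops: list[str]) -> str:
    """Truncate generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# With "Q:" as a stop sequence, generation halts before the model
# starts inventing a new question in a few-shot Q&A prompt.
raw_generation = "A: Paris is the capital of France.\nQ: What is the capital of Spain?"
print(apply_stop_sequences(raw_generation, ["Q:"]))
# A: Paris is the capital of France.
```

In practice the model provider performs this truncation server-side; you only supply the stop strings as a request parameter.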
* **Content Blocks in Expert Mode Prompts**
  * `context`

    In an app configured with a dataset, the user inputs a query, and the app uses this query as a retrieval condition against the dataset. The retrieved results are organized and replace the `context` variable, allowing the LLM to reference the context content when composing its answer.
  * `query`

    The query content is only available in conversational applications using text completion models. The content the user enters in the conversation replaces this variable, triggering a new round of dialogue.
  * `conversation history`

    Conversation history is only available in conversational applications using text completion models. Across multiple turns in a conversational application, Dify assembles and concatenates the historical conversation records according to built-in rules and substitutes them for the `conversation history` variable. The Human and Assistant prefixes can be modified by clicking the `...` after `conversation history`.
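The assembly rule described above can be sketched as follows. This is a hypothetical illustration of prefixing and concatenating turns, not Dify's actual implementation; the function name and defaults are invented for the example:

```python
# Hypothetical sketch: build a conversation-history string by prefixing
# each past turn with its role and joining the turns with newlines.
def build_conversation_history(turns, human_prefix="Human", assistant_prefix="Assistant"):
    lines = []
    for role, text in turns:
        prefix = human_prefix if role == "human" else assistant_prefix
        lines.append(f"{prefix}: {text}")
    return "\n".join(lines)

history = build_conversation_history([
    ("human", "What can Expert Mode do?"),
    ("assistant", "It lets you edit the complete application prompts."),
])
print(history)
# Human: What can Expert Mode do?
# Assistant: It lets you edit the complete application prompts.
```

Changing the `human_prefix` and `assistant_prefix` arguments corresponds to editing the Human and Assistant prefixes via the `...` menu in the UI.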
* **Initial Template**

  In **Expert Mode**, before formal orchestration, the prompt box provides an initial template that you can modify directly to make more customized requests to the LLM. Note: the initial template differs depending on the application type and mode.

  For details, please refer to 👉[prompt-engineering-template.md](prompt-engineering-template.md "mention")

## Comparison of Two Modes

| Comparison Dimension | Simple Mode | Expert Mode |
|----------------------|-------------|-------------|
| Built-in Prompt Visibility | Encapsulated and invisible | Open and visible |
| Automatic Orchestration | Available | Unavailable |
| Difference in Text Completion and Chat Model Selection | None | Orchestration differs after selecting a text completion or chat model |
| Variable Insertion | Available | Available |
| Content Block Validation | None | Available |
| SYSTEM / USER / ASSISTANT Message Type Orchestration | None | Available |
| Context Parameter Settings | Configurable | Configurable |
| View PROMPT LOG | View full prompt log | View full prompt log |
| Stop Sequences Parameter Settings | None | Configurable |

## Operating Instructions

### 1. How to Enter Expert Mode

After creating an application, you can switch to **Expert Mode** on the prompt orchestration page, where you can edit the complete application prompts.

*Figure: Expert Mode Entry*

*Figure: Context Parameter Settings*

*Figure: Shortcut Key "/"*

*Figure: Debug Log Entry*

*Figure: View Prompt Log in the Debug Preview Interface*

*Figure: View Prompt Log in the Logs and Annotations Interface*