diff --git a/en/.gitbook/assets/111.png b/en/.gitbook/assets/111.png
new file mode 100644
index 0000000..c535273
Binary files /dev/null and b/en/.gitbook/assets/111.png differ
diff --git a/en/.gitbook/assets/222.png b/en/.gitbook/assets/222.png
new file mode 100644
index 0000000..b270367
Binary files /dev/null and b/en/.gitbook/assets/222.png differ
diff --git a/en/.gitbook/assets/3.png b/en/.gitbook/assets/3.png
new file mode 100644
index 0000000..35d98ff
Binary files /dev/null and b/en/.gitbook/assets/3.png differ
diff --git a/en/.gitbook/assets/333.png b/en/.gitbook/assets/333.png
new file mode 100644
index 0000000..9351032
Binary files /dev/null and b/en/.gitbook/assets/333.png differ
diff --git a/en/.gitbook/assets/33333.png b/en/.gitbook/assets/33333.png
new file mode 100644
index 0000000..dcc5503
Binary files /dev/null and b/en/.gitbook/assets/33333.png differ
diff --git a/en/.gitbook/assets/344.png b/en/.gitbook/assets/344.png
new file mode 100644
index 0000000..3403c6d
Binary files /dev/null and b/en/.gitbook/assets/344.png differ
diff --git a/en/.gitbook/assets/4.png b/en/.gitbook/assets/4.png
new file mode 100644
index 0000000..23adb4c
Binary files /dev/null and b/en/.gitbook/assets/4.png differ
diff --git a/en/.gitbook/assets/Content Block (1).png b/en/.gitbook/assets/Content Block (1).png
new file mode 100644
index 0000000..ee9c22c
Binary files /dev/null and b/en/.gitbook/assets/Content Block (1).png differ
diff --git a/en/.gitbook/assets/Context.png b/en/.gitbook/assets/Context.png
new file mode 100644
index 0000000..d12d844
Binary files /dev/null and b/en/.gitbook/assets/Context.png differ
diff --git a/en/.gitbook/assets/QUERY.png b/en/.gitbook/assets/QUERY.png
new file mode 100644
index 0000000..c8d8cf2
Binary files /dev/null and b/en/.gitbook/assets/QUERY.png differ
diff --git a/en/.gitbook/assets/history (1).png b/en/.gitbook/assets/history (1).png
new file mode 100644
index 0000000..d221108
Binary files /dev/null and b/en/.gitbook/assets/history (1).png differ
diff --git a/en/.gitbook/assets/history.png b/en/.gitbook/assets/history.png
new file mode 100644
index 0000000..d221108
Binary files /dev/null and b/en/.gitbook/assets/history.png differ
diff --git a/en/.gitbook/assets/iphone.png b/en/.gitbook/assets/iphone.png
new file mode 100644
index 0000000..93bcd7a
Binary files /dev/null and b/en/.gitbook/assets/iphone.png differ
diff --git a/en/.gitbook/assets/log.png b/en/.gitbook/assets/log.png
new file mode 100644
index 0000000..55f0a00
Binary files /dev/null and b/en/.gitbook/assets/log.png differ
diff --git a/en/.gitbook/assets/log1.png b/en/.gitbook/assets/log1.png
new file mode 100644
index 0000000..3714ead
Binary files /dev/null and b/en/.gitbook/assets/log1.png differ
diff --git a/en/.gitbook/assets/screenshot-20231017-092613 (1).png b/en/.gitbook/assets/screenshot-20231017-092613 (1).png
new file mode 100644
index 0000000..43bd4f7
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20231017-092613 (1).png differ
diff --git a/en/.gitbook/assets/screenshot-20231017-092613.png b/en/.gitbook/assets/screenshot-20231017-092613.png
new file mode 100644
index 0000000..43bd4f7
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20231017-092613.png differ
diff --git a/en/.gitbook/assets/screenshot-20231017-092957.png b/en/.gitbook/assets/screenshot-20231017-092957.png
new file mode 100644
index 0000000..454a22b
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20231017-092957.png differ
diff --git a/en/.gitbook/assets/screenshot-20231018-173202 (1).png b/en/.gitbook/assets/screenshot-20231018-173202 (1).png
new file mode 100644
index 0000000..4bf8ed7
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20231018-173202 (1).png differ
diff --git a/en/.gitbook/assets/screenshot-20231018-173202.png b/en/.gitbook/assets/screenshot-20231018-173202.png
new file mode 100644
index 0000000..4bf8ed7
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20231018-173202.png differ
diff --git a/en/.gitbook/assets/top k.png b/en/.gitbook/assets/top k.png
new file mode 100644
index 0000000..e3ce1d9
Binary files /dev/null and b/en/.gitbook/assets/top k.png differ
diff --git a/en/SUMMARY.md b/en/SUMMARY.md
index f900eed..c6a6369 100644
--- a/en/SUMMARY.md
+++ b/en/SUMMARY.md
@@ -38,6 +38,8 @@
## Advanced
+* [Advanced Mode for Prompt Engineering](advanced/advanced-mode-for-prompt-engineering/README.md)
+ * [Prompt Template](advanced/advanced-mode-for-prompt-engineering/prompt-template.md)
* [Datasets\&Index](advanced/datasets/README.md)
* [Sync from Notion](advanced/datasets/sync-from-notion.md)
* [Maintain Datasets Via Api](advanced/datasets/maintain-dataset-via-api.md)
diff --git a/en/advanced/advanced-mode-for-prompt-engineering/README.md b/en/advanced/advanced-mode-for-prompt-engineering/README.md
new file mode 100644
index 0000000..f8566a1
--- /dev/null
+++ b/en/advanced/advanced-mode-for-prompt-engineering/README.md
@@ -0,0 +1,193 @@
+# Advanced Mode for Prompt Engineering
+
+## Feature Introduction
+
+Currently, the orchestration for creating apps in Dify is set to **Simple Mode** by default. This mode is ideal for non-technical users who want to build an app quickly. For example, to create a corporate knowledge-base ChatBot or an article-summary Generator, you can use **Simple Mode** to write a `Pre-prompt`, add a `Query`, integrate `Context`, and complete a few other straightforward steps to launch a complete app. For more details, head to 👉 [text-generation-application.md](../../application/prompt-engineering/text-generation-application.md "mention") and [conversation-application.md](../../application/prompt-engineering/conversation-application.md "mention").
+
+However, if you're a developer who has studied prompts in depth and wants to design them in a more customized manner, you should opt for **Advanced Mode**. In this mode, you can author complete prompts yourself rather than relying on the prompts pre-packaged by Dify: you can modify the built-in prompts, rearrange the placement of `Context` and `History`, set necessary parameters, and more. If you're familiar with OpenAI's Playground, you will get up to speed with this mode quickly.
+
+Before trying the new mode, you should be aware of some essential elements:
+
+* **Complete**
+
+    When choosing a model, if you see "COMPLETE" on the right side of the model name, it indicates a Text completion model.
+
+ This type of model accepts a freeform text string and generates a text completion, attempting to match any context or pattern you provide. For example, if you write the prompt `As René Descartes said, "I think, therefore"`, it's highly likely that the model will return `"I am."` as the completion.
+
+
+* **Chat**
+
+    When choosing a model, if you see "CHAT" on the right side of the model name, it indicates a Chat completions model.
+
+ This type of model takes a list of messages as input and returns a message generated by the model as output. It consists of three message types: `SYSTEM`, `USER`, and `ASSISTANT`.
+
+ * `SYSTEM`
+
+      System messages help guide the behavior of the AI assistant. For example, you can alter the AI assistant's personality or give it specific instructions on how to behave throughout the conversation. System messages are optional; without one, the AI assistant may behave as if it had been given a generic message such as `"you are a helpful assistant."`
+ * `USER`
+
+ User messages provide requests or comments for the AI assistant to respond to.
+ * `ASSISTANT`
+
+ Assistant messages store previous assistant responses, but they can also be written by you to provide examples of desired behavior.
+
+
+* **Stop\_Sequences**
+
+    Stop\_Sequences refers to specific words, phrases, or characters used to signal the LLM to stop generating text.
+
+
+
+* **Blocks**
+
+
+
+ When users input a query, the app processes the query as search criteria for the dataset. The organized results from the search then replace the variable `Context`, allowing the LLM to reference the content for its response.
+
+
+
+
+
+    The query content is only available for Text completion models in conversational applications. The content entered by the user during the conversation replaces this variable, initiating a new round of dialogue.
+
+
+
+
+
+    The conversation history is only available for Text completion models in conversational applications. During multi-turn conversations in dialogue applications, Dify assembles and concatenates the historical dialogue records according to built-in rules and substitutes them for the `Conversation History` variable. The `Human` and `Assistant` prefixes can be modified by clicking the `...` after "Conversation History".
+
+
+* #### **Prompt Template**
+
+    In this mode, an initial template is provided in the prompt box before formal orchestration. You can modify this template directly to place more customized requirements on the LLM. Note that the initial template varies by application type and mode.
+
+    For more details, head to 👉 [prompt-template.md](prompt-template.md "mention")
+
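To make the `Conversation History` block described above concrete: for Text completion models, the history turns are concatenated into the prompt with `Human` and `Assistant` prefixes. Below is a minimal sketch of that assembly; the concatenation rules shown here are illustrative, not Dify's exact built-in ones.

```python
# Illustrative sketch of how a conversation-history block might be
# assembled for a Text completion model. Dify's built-in assembly
# rules may differ; the prefixes are the configurable defaults.
def assemble_history(turns, human_prefix="Human", assistant_prefix="Assistant"):
    """turns: list of (user_message, assistant_message) tuples."""
    lines = []
    for user_msg, assistant_msg in turns:
        lines.append(f"{human_prefix}: {user_msg}")
        lines.append(f"{assistant_prefix}: {assistant_msg}")
    return "\n".join(lines)

history = assemble_history([
    ("What color is the sky?", "The sky is blue."),
    ("What color is the fire?", "The fire is red."),
])
print(history)
```

Changing the `human_prefix` and `assistant_prefix` arguments corresponds to editing the prefixes via the `...` menu after "Conversation History".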
+## Comparison of the two modes
+
+
+| Comparison Dimension | Simple Mode | Advanced Mode |
+|---|---|---|
+| Visibility of Built-in Prompts | Invisible | Visible |
+| Automatic Design | Available | Disabled |
+| Variable Insertion | Available | Available |
+| Block Validation | Disabled | Available |
+| SYSTEM / USER / ASSISTANT | Invisible | Visible |
+| Context parameter settings | Available | Available |
+| PROMPT LOG | Available | Available |
+| Stop_Sequences | Disabled | Available |

+(Figure: Entrance of the Advanced Mode)
+
+{% hint style="info" %}
+Context parameters will be displayed on the prompt design page.
+{% endhint %}
+
+**TopK:** The value is an integer from 1 to 10.
+
+It filters the text fragments most similar to the user's query. The system also dynamically adjusts the number of fragments according to the context window size of the selected model. The system default is 2. We recommend a value between 2 and 5, since the goal is to get answers that closely match the embedded context.\
+
+
+**Score Threshold:** The value is a floating-point number from 0 to 1, with two decimal places.
+
+It sets the similarity threshold for selecting text segments: only segments whose score exceeds the set value are recalled. By default, this setting is off, meaning recalled text segments are not filtered by similarity score. When enabled, the default value is 0.7. We recommend keeping this setting off; if you have stricter reply requirements, you can set a higher value, though setting it excessively high is not advisable.
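Taken together, Top K and Score Threshold act as a post-filter over the retrieved segments. The sketch below is an illustrative implementation of that filtering logic, not Dify's internal code:

```python
# Illustrative sketch of Top K and Score Threshold filtering.
# `segments` is a list of (text, similarity_score) pairs, scores in [0, 1].
def filter_segments(segments, top_k=2, score_threshold=None):
    # Score Threshold: only recall segments that exceed the set score.
    if score_threshold is not None:
        segments = [(t, s) for t, s in segments if s > score_threshold]
    # Top K: keep the segments with the highest similarity scores.
    ranked = sorted(segments, key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

hits = [("segment A", 0.92), ("segment B", 0.65), ("segment C", 0.81)]
print(filter_segments(hits, top_k=2, score_threshold=0.7))
# -> [('segment A', 0.92), ('segment C', 0.81)]
```

With `score_threshold=None` (the default-off behavior), only the Top K cutoff applies, which matches the description above.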
+
+### 3. **Stop\_Sequences**
+
+We do not want the LLM to generate superfluous content, so we need to set specific words, phrases, or characters that signal the LLM to stop generating text. The default setting is `Human:`. For example, suppose you've written the _Few-Shot_ prompt below:
+
+```
+Human1: What color is the sky?
+
+Assistant1: The sky is blue.
+
+Human1: What color is the fire?
+
+Assistant1: The fire is red.
+
+Human1: What color is the soil?
+
+Assistant1:
+```
+
+Then, in the model parameters, locate `Stop_Sequences`, enter `Human1:`, and don't forget to press the `Tab` key to confirm. This way, the LLM will reply with a single sentence instead of generating extra turns of dialogue:
+
+```
+The soil is yellow.
+```
+
+This is because the LLM stops generating content just before the next `Human1:`.
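The effect of a stop sequence can be sketched locally as truncating the generated text at the first occurrence of any configured sequence. In practice the model provider's API enforces this during generation; the function below is only an illustration of the behavior:

```python
# Illustrative sketch: truncate generated text at the first stop sequence.
# Real enforcement happens server-side while the model is generating.
def apply_stop_sequences(text, stop_sequences):
    cut = len(text)
    for seq in stop_sequences:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)  # stop at the earliest matching sequence
    return text[:cut]

raw = "The soil is yellow.\n\nHuman1: What color is the grass?"
print(apply_stop_sequences(raw, ["Human1:"]).strip())
# -> The soil is yellow.
```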
+
+### 4. Use "/" to insert Variables and Blocks
+
+You can enter "/" in the text editor to quickly bring up Blocks to insert into the prompt.
+
+
+(Figure: shortcut "/")
+
+(Figure: Access to the Prompt Log)
+
+(Figure: Prompt Log)
+
+(Figure: Logs & Ann.)