diff --git a/en/.gitbook/assets/image (1) (1) (1) (1) (1).png b/en/.gitbook/assets/image (1) (1) (1) (1) (1).png
new file mode 100644
index 0000000..8c20eca
Binary files /dev/null and b/en/.gitbook/assets/image (1) (1) (1) (1) (1).png differ
diff --git a/en/.gitbook/assets/image (1) (1) (1) (1).png b/en/.gitbook/assets/image (1) (1) (1) (1).png
index 8c20eca..d11476b 100644
Binary files a/en/.gitbook/assets/image (1) (1) (1) (1).png and b/en/.gitbook/assets/image (1) (1) (1) (1).png differ
diff --git a/en/.gitbook/assets/image (1) (1) (1).png b/en/.gitbook/assets/image (1) (1) (1).png
index d11476b..abbd931 100644
Binary files a/en/.gitbook/assets/image (1) (1) (1).png and b/en/.gitbook/assets/image (1) (1) (1).png differ
diff --git a/en/.gitbook/assets/image (1) (1).png b/en/.gitbook/assets/image (1) (1).png
index abbd931..0fdfeda 100644
Binary files a/en/.gitbook/assets/image (1) (1).png and b/en/.gitbook/assets/image (1) (1).png differ
diff --git a/en/.gitbook/assets/image (1).png b/en/.gitbook/assets/image (1).png
index 0fdfeda..3db705f 100644
Binary files a/en/.gitbook/assets/image (1).png and b/en/.gitbook/assets/image (1).png differ
diff --git a/en/.gitbook/assets/image.png b/en/.gitbook/assets/image.png
index 3db705f..090c776 100644
Binary files a/en/.gitbook/assets/image.png and b/en/.gitbook/assets/image.png differ
diff --git a/en/advanced/annotation-reply.md b/en/advanced/annotation-reply.md
index afe1acf..b4200f4 100644
--- a/en/advanced/annotation-reply.md
+++ b/en/advanced/annotation-reply.md
@@ -19,6 +19,8 @@ The feature provides an alternative system for enhancing retrieval, skipping the
 4. Without a match, the query follows the standard LLM or RAG process.
 5. Deactivating Annotation Reply ceases matching replies from the annotations.
 
+[figure: Annotation Reply Process]
+
 ## Activation
 
 Navigate to “Build Apps -> Add Feature” to enable the Annotation Reply feature.
diff --git a/en/advanced/datasets/README.md b/en/advanced/datasets/README.md
index 3280249..3d60773 100644
--- a/en/advanced/datasets/README.md
+++ b/en/advanced/datasets/README.md
@@ -76,11 +76,11 @@ Modify Documents
 
 For technical reasons, if developers make the following changes
 
 1. Adjust segmentation and cleaning settings
 2. Re-upload the file
 
-Dify support customizing the segmented and cleaned text by adding, deleting, and editing paragraphs. You can dynamically adjust your segmentation to make your knowledge more accurate. Click **Document --> paragraph --> Edit** in the knowledge to modify paragraphs content and custom keywords. Click **Document --> paragraph --> Add segment --> Add a segment** to manually add new paragraph. Or click **Document --> paragraph --> Add segment --> Batch add** to batch add new paragraph.
+Dify supports customizing the segmented and cleaned text by adding, deleting, and editing paragraphs. You can dynamically adjust your segmentation to make your knowledge more accurate. Click **Document --> paragraph --> Edit** in the knowledge to modify paragraph content and custom keywords. Click **Document --> paragraph --> Add segment --> Add a segment** to manually add a new paragraph, or click **Document --> paragraph --> Add segment --> Batch add** to add new paragraphs in bulk.
 
 [figure: Edit]
 
-[figure: add]
+[figure: add]
 
 ### Disabling and Archiving of Documents
diff --git a/en/advanced/datasets/sync-from-notion.md b/en/advanced/datasets/sync-from-notion.md
index dd3f049..eb906de 100644
--- a/en/advanced/datasets/sync-from-notion.md
+++ b/en/advanced/datasets/sync-from-notion.md
@@ -43,13 +43,11 @@ Click the " **New integration** " button, the type is Internal by default (canno
 
 Once the integration is created, you can update its settings as needed under the **Capabilities** tab and click the "**Show**" button under **Secrets** and then copy the Secrets.
 
-[image]
+[image]
 
 Copy it and back to the Dify source code , in the **.env** file configuration related environment variables, environment variables as follows:
 
-**NOTION\_INTEGRATION\_TYPE** = internal
-or
-**NOTION\_INTEGRATION\_TYPE** = public
+**NOTION\_INTEGRATION\_TYPE** = internal or **NOTION\_INTEGRATION\_TYPE** = public
 
 **NOTION\_INTERNAL\_SECRET**=you-internal-secret
@@ -67,9 +65,9 @@ After your integration has been successfully made public in your [integration’
 
 Back to the Dify source code , in the **.env** file configuration related environment variables , environment variables as follows:
 
-**NOTION\_INTEGRATION\_TYPE**=public
+**NOTION\_INTEGRATION\_TYPE**=public
 
-**NOTION\_CLIENT\_SECRET**=you-client-secret
+**NOTION\_CLIENT\_SECRET**=you-client-secret
 
 **NOTION\_CLIENT\_ID**=you-client-id
diff --git a/en/advanced/model-configuration/xinference.md b/en/advanced/model-configuration/xinference.md
index ed1f0b7..4561841 100644
--- a/en/advanced/model-configuration/xinference.md
+++ b/en/advanced/model-configuration/xinference.md
@@ -47,7 +47,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
 
 Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:
 
-[image]
+[image]
 
 As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
 
 4. Obtain the model UID
diff --git a/en/advanced/retrieval-augment/README.md b/en/advanced/retrieval-augment/README.md
index f3a8d78..339c2b3 100644
--- a/en/advanced/retrieval-augment/README.md
+++ b/en/advanced/retrieval-augment/README.md
@@ -8,7 +8,7 @@ Developers can utilize this technology to cost-effectively build AI-powered cust
 
 In the diagram below, when a user asks, "Who is the President of the United States?", the system doesn't directly relay the question to the large model for an answer. Instead, it first conducts a vector search in a knowledge base (like Wikipedia, as shown in the diagram) for the user's query. It finds relevant content through semantic similarity matching (for instance, "Biden is the current 46th President of the United States…"), and then provides the user's question along with the found knowledge to the large model. This enables the model to have sufficient and complete knowledge to answer the question, thereby yielding a more reliable response.
 
-[figure: Basic Architecture of RAG]
+[figure: Basic Architecture of RAG]
 
 ## Why is this necessary?
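
Taken together, the Notion-related environment variables touched by the `sync-from-notion.md` hunks would sit in Dify's `.env` roughly as sketched below. This is only an illustrative grouping of the values quoted in the patch: the `you-...` strings are the docs' own placeholders, not real credentials, and you would keep either the internal or the public block, not both.

```shell
# Notion data source configuration for Dify's .env (placeholder values).

# Internal integration:
NOTION_INTEGRATION_TYPE=internal
NOTION_INTERNAL_SECRET=you-internal-secret

# Public integration (use instead of the internal block above):
# NOTION_INTEGRATION_TYPE=public
# NOTION_CLIENT_SECRET=you-client-secret
# NOTION_CLIENT_ID=you-client-id
```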