GITBOOK-66: No subject
@ -58,10 +58,13 @@
* [HTTP Request](features/workflow/node/http-request.md)
* [Tools](features/workflow/node/tools.md)
* [Preview\&Run](features/workflow/preview-and-run/README.md)
* [Preview\&Run](features/workflow/preview-and-run/preview-and-run.md)
* [Step Test](features/workflow/preview-and-run/step-test.md)
* [Log](features/workflow/preview-and-run/log.md)
* [Checklist](features/workflow/preview-and-run/checklist.md)
* [History](features/workflow/preview-and-run/history.md)
* [Publish](features/workflow/publish.md)
* [Export/Import](features/workflow/export-import.md)
* [RAG (Retrieval Augmented Generation)](features/retrieval-augment/README.md)
* [Hybrid Search](features/retrieval-augment/hybrid-search.md)
* [Rerank](features/retrieval-augment/rerank.md)

@ -19,7 +19,7 @@ The feature provides an alternative system for enhancing retrieval, skipping the
4. Without a match, the query follows the standard LLM or RAG process.
5. Deactivating Annotation Reply stops matching replies from the annotations.

<figure><img src="../.gitbook/assets/image (3) (1).png" alt="" width="563"><figcaption><p>Annotation Reply Process</p></figcaption></figure>
<figure><img src="../.gitbook/assets/image (3) (1) (1).png" alt="" width="563"><figcaption><p>Annotation Reply Process</p></figcaption></figure>

## Activation

@ -78,9 +78,9 @@ Modify Documents For technical reasons, if developers make the following changes
Dify supports customizing the segmented and cleaned text by adding, deleting, and editing paragraphs, so you can dynamically adjust segmentation to make your knowledge more accurate. In the knowledge base, click **Document --> paragraph --> Edit** to modify a paragraph's content and custom keywords. Click **Document --> paragraph --> Add segment --> Add a segment** to manually add a new paragraph, or click **Document --> paragraph --> Add segment --> Batch add** to add new paragraphs in bulk.

<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption><p>Edit</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt=""><figcaption><p>Edit</p></figcaption></figure>

<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>add</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>add</p></figcaption></figure>

### Disabling and Archiving of Documents

@ -39,11 +39,11 @@ Create an integration in your [integration's settings](https://www.notion.so/my-
Click the "**New integration**" button; the type is Internal by default (it cannot be modified). Select the associated space, enter the name, upload the logo, and click "**Submit**" to create the integration.

<figure><img src="../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (4) (1) (1).png" alt=""><figcaption></figcaption></figure>

Once the integration is created, you can update its settings as needed under the **Capabilities** tab. Click the "**Show**" button under **Secrets** and copy the secret.

<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

Copy the secret, then return to the Dify source code and configure the related environment variables in the **.env** file as follows:

@ -57,11 +57,11 @@ Copy it and back to the Dify source code , in the **.env** file configuration re
To toggle the switch to public settings, you need to **fill in additional information in the Organization Information** form below, including your company name, website, and Retargeting URL, then click the "Submit" button.

<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

After your integration has been successfully made public in your [integration’s settings page](https://www.notion.so/my-integrations), you will be able to access the integration’s secrets in the Secrets tab.

<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

Return to the Dify source code and configure the related environment variables in the **.env** file as follows:

@ -8,7 +8,7 @@ Developers can utilize this technology to cost-effectively build AI-powered cust
In the diagram below, when a user asks, "Who is the President of the United States?", the system doesn't directly relay the question to the large model for an answer. Instead, it first conducts a vector search in a knowledge base (like Wikipedia, as shown in the diagram) for the user's query. It finds relevant content through semantic similarity matching (for instance, "Biden is the current 46th President of the United States…"), and then provides the user's question along with the retrieved knowledge to the large model. This gives the model sufficient and complete knowledge to answer the question, yielding a more reliable response.

<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1).png" alt=""><figcaption><p>Basic Architecture of RAG</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>Basic Architecture of RAG</p></figcaption></figure>

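The retrieve-then-generate flow described above can be sketched in a few lines. This is a toy illustration, not Dify's implementation: the character-frequency "embedding" is a stand-in for a real embedding model and vector database, and the knowledge base is a two-document example.

```python
# Sketch of the RAG flow: embed the query, find the most similar
# passage by cosine similarity, then prepend it to the question.
# The embed() function below is a deliberately crude stand-in.

def embed(text: str) -> list[float]:
    # Toy "embedding": normalized character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

knowledge_base = [
    "Biden is the current 46th President of the United States.",
    "The Eiffel Tower is located in Paris.",
]

def retrieve(query: str) -> str:
    # Semantic similarity matching: pick the closest passage.
    q = embed(query)
    return max(knowledge_base, key=lambda doc: cosine(q, embed(doc)))

def build_prompt(query: str) -> str:
    # Provide the question together with the retrieved knowledge.
    context = retrieve(query)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Who is the President of the United States?"))
```

The prompt produced this way is what a real system would hand to the large model, so the model answers from the retrieved passage rather than from memory alone.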
## Why is this necessary?

@ -29,7 +29,7 @@ In most text search scenarios, it's crucial to ensure that the most relevant res
In Hybrid Search, vector and keyword indices are pre-established in the database. Upon user query input, the system searches for the most relevant text in documents using both search methods.

<figure><img src="../../.gitbook/assets/image (2) (1) (1).png" alt=""><figcaption><p>Hybrid Search</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1).png" alt=""><figcaption><p>Hybrid Search</p></figcaption></figure>

"Hybrid Search" doesn't have a definitive definition; this article exemplifies it as a combination of Vector Search and Keyword Search. However, the term can also apply to other combinations of search algorithms. For instance, we could combine knowledge graph technology, used for retrieving entity relationships, with Vector Search.

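One common way to combine the two rankings is Reciprocal Rank Fusion. The sketch below is illustrative only: both scoring functions are toy stand-ins (word overlap for keyword search, shared character trigrams for "semantic" search), not production retrieval.

```python
# Hybrid search sketch: rank documents with a keyword method and a
# (toy) vector method, then merge the rankings with Reciprocal Rank
# Fusion so results strong in either method rise to the top.

docs = [
    "Dify supports hybrid search over a knowledge base.",
    "Keyword search matches exact terms in documents.",
    "Vector search retrieves semantically similar text.",
]

def keyword_rank(query: str) -> list[int]:
    # Score by exact word overlap with the query.
    terms = set(query.lower().split())
    scores = [len(terms & set(d.lower().split())) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

def vector_rank(query: str) -> list[int]:
    # Toy "semantic" score: shared character trigrams.
    def grams(t: str) -> set[str]:
        return {t[i:i + 3] for i in range(len(t) - 2)}
    q = grams(query.lower())
    scores = [len(q & grams(d.lower())) for d in docs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

def hybrid_rank(query: str, k: int = 60) -> list[int]:
    # Reciprocal Rank Fusion: sum 1/(k + position) over both rankings.
    fused = [0.0] * len(docs)
    for ranking in (keyword_rank(query), vector_rank(query)):
        for pos, i in enumerate(ranking):
            fused[i] += 1.0 / (k + pos + 1)
    return sorted(range(len(docs)), key=lambda i: -fused[i])

print(docs[hybrid_rank("hybrid search knowledge")[0]])
```

Fusion by rank (rather than by raw score) sidesteps the problem that keyword scores and vector similarities live on incomparable scales.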
@ -0,0 +1,15 @@
# Export/Import

You can export/import application templates as YAML-format DSL (Domain Specific Language) files within the studio to share applications with your team members.

To import a DSL file in the studio application list:

<figure><img src="../../.gitbook/assets/image (12).png" alt=""><figcaption></figcaption></figure>

To export a DSL file from the studio application list:

<figure><img src="../../.gitbook/assets/image (13).png" alt=""><figcaption></figcaption></figure>

To export a DSL file from the workflow orchestration page:

<figure><img src="../../.gitbook/assets/image (14).png" alt=""><figcaption></figcaption></figure>

@ -9,11 +9,11 @@ Workflow reduces system complexity by breaking complex tasks into smaller steps
To address the complexity of user intent recognition in natural language inputs, Chatflow provides question understanding nodes such as question classification, question rewriting, and sub-question splitting. It also gives the LLM the ability to interact with the external environment through tool invocation, e.g., online search, mathematical calculation, weather queries, and drawing.

<figure><img src="../../.gitbook/assets/image (15).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (15) (1).png" alt=""><figcaption></figcaption></figure>

To handle complex business logic in automation and batch-processing scenarios, Workflow provides a wealth of logic nodes, such as code nodes, IF/ELSE nodes, merge nodes, and template conversion nodes. It will also provide time- and event-based triggers, facilitating the construction of automated processes.

<figure><img src="../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (2) (1) (1).png" alt=""><figcaption></figcaption></figure>

### Common Cases

@ -42,8 +42,8 @@ Variables are crucial for linking the input and output of nodes within a workflo
* **Chatflow Entry**:

<figure><img src="../../.gitbook/assets/image (10) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (10) (1) (1).png" alt=""><figcaption></figcaption></figure>

* **Workflow Entry**:

<figure><img src="../../.gitbook/assets/image (14) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (14) (1) (1).png" alt=""><figcaption></figcaption></figure>

@ -14,10 +14,10 @@ Answer node can be seamlessly integrated at any point to dynamically deliver con
Example 1: Output plain text.

<figure><img src="../../../.gitbook/assets/image (8).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (8) (1).png" alt=""><figcaption></figcaption></figure>

Example 2: Output image and LLM reply.

<figure><img src="../../../.gitbook/assets/image (6).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (6) (1).png" alt=""><figcaption></figcaption></figure>

<figure><img src="../../../.gitbook/assets/image (7).png" alt="" width="275"><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (7) (1).png" alt="" width="275"><figcaption></figcaption></figure>

@ -6,8 +6,8 @@ The "End" node serves as the termination point of the process, beyond which no f
Single-Path Execution Example:

<figure><img src="../../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>

Multi-Path Execution Example:

<figure><img src="../../../.gitbook/assets/image (5).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (5) (1).png" alt=""><figcaption></figcaption></figure>

@ -1,2 +1,7 @@
# HTTP Request

The HTTP Request node lets you craft and dispatch HTTP requests to specified endpoints, enabling a wide range of integrations and data exchanges with external services. The node supports all common HTTP request methods and lets you fully customize the URL, headers, query parameters, body content, and authorization details of the request.

<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=YjQ4Y2EyNjhlNWQ3NDA0ZGJiMWUzYTYyMWFkNWRlOThfek1Dd2c1Z3VwdU1jVGpqMkNrM2hzUUZmMXFEUldaOGpfVG9rZW46WGJwOGJuQ0pJb245TFN4aUtXUmNuUktFblVjXzE3MTI1OTc2ODg6MTcxMjYwMTI4OF9WNA" alt="" width="375"><figcaption></figcaption></figure>

A really handy feature of the HTTP Request node is the ability to dynamically construct the request by inserting variables in different fields. For instance, in a customer support scenario, variables such as username or customer ID can be used to personalize automated responses sent via a POST request, or to retrieve individual-specific information related to the customer. The HTTP request returns `body`, `status_code`, `headers`, and `files` as outputs. If the response includes files of [MIME](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics\_of\_HTTP/MIME\_types/Common\_types) types (currently limited to images), the node automatically saves these as `files` for downstream use.

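The variable-insertion idea can be sketched as simple template substitution before dispatch. This is an illustration of the concept, not Dify's internal implementation; the `{{customer_id}}`-style placeholders, endpoint URL, and variable names are made-up examples.

```python
# Sketch: placeholders like {{customer_id}} in the URL, headers, or
# body are replaced with workflow variable values, producing the final
# request to dispatch. No network call is made here.
import re

def render(template: str, variables: dict[str, str]) -> str:
    # Replace each {{name}} with its value; unknown names are kept as-is.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: variables.get(m.group(1), m.group(0)),
                  template)

def build_request(method, url, headers, body, variables):
    return {
        "method": method,
        "url": render(url, variables),
        "headers": {k: render(v, variables) for k, v in headers.items()},
        "body": render(body, variables),
    }

req = build_request(
    "POST",
    "https://api.example.com/customers/{{customer_id}}/replies",
    {"Authorization": "Bearer {{api_token}}"},
    '{"message": "Hello {{username}}"}',
    {"customer_id": "42", "username": "Ada", "api_token": "secret"},
)
print(req["url"])  # https://api.example.com/customers/42/replies
```

A downstream node would then read the response's `body`, `status_code`, `headers`, and `files` outputs.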
@ -2,7 +2,7 @@
The LLM node invokes a large language model for question answering or other natural language processing tasks. Within an LLM node, you can select an appropriate model, compose prompts, set the context referenced in the prompts, configure memory settings, and adjust the memory window size.

<figure><img src="../../../.gitbook/assets/image (9).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (9) (1).png" alt=""><figcaption></figcaption></figure>

Configuring an LLM node primarily involves two steps:

@ -13,7 +13,7 @@ Configuring an LLM node primarily involves two steps:
Before selecting a model suitable for your task, you must complete the model configuration in "System Settings—Model Provider". The specific configuration method can be referenced in the [model configuration instructions](https://docs.dify.ai/v/zh-hans/guides/model-configuration). After selecting a model, you can configure its parameters.

<figure><img src="../../../.gitbook/assets/image (10).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (10) (1).png" alt=""><figcaption></figcaption></figure>

**Write Prompts**

@ -21,15 +21,15 @@ Within an LLM node, you can customize the model input prompts. If you choose a c
For instance, in a knowledge base Q\&A scenario, after linking the "Result" variable from the knowledge base retrieval node in "Context", inserting the "Context" special variable in the prompts will use the text retrieved from the knowledge base as the context background information for the model input.

<figure><img src="../../../.gitbook/assets/image (12).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (12) (1).png" alt=""><figcaption></figcaption></figure>

In the prompt editor, you can bring up the variable insertion menu by typing "/" or "{" to insert special variable blocks or variables from preceding flow nodes into the prompts as context content.

<figure><img src="../../../.gitbook/assets/image (13).png" alt="" width="375"><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (13) (1).png" alt="" width="375"><figcaption></figcaption></figure>

If you opt for a completion model, the system provides preset prompt templates for conversational applications. You can customize the content of the prompts and insert special variable blocks like "Conversation History" and "Context" at appropriate positions by typing "/" or "{", enabling richer conversational functionalities.

<figure><img src="../../../.gitbook/assets/image (14).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (14) (1).png" alt=""><figcaption></figcaption></figure>

**Memory Toggle Settings** In conversational applications (Chatflow), the LLM node enables system memory by default: in multi-turn dialogues, the system stores historical dialogue messages and passes them into the model. In workflow applications (Workflow), system memory is turned off by default, and no memory setting options are provided.

@ -12,6 +12,6 @@ Configuring the Question Classifier Node involves three main components:
**Selecting the Input Variable** In conversational customer service scenarios, you can use the user input variable from the "Start Node" (sys.query) as the input for the question classifier. In automated/batch processing scenarios, customer feedback or email content can be utilized as input variables.

**Configuring the Inference Model** The question classifier relies on the natural language processing capabilities of the LLM to categorize text. You will need to configure an inference model for the classifier. Before configuring this model, you might need to complete the model setup in "System Settings - Model Provider". The specific configuration method can be found in the model configuration instructions. After selecting a suitable model, you can configure its parameters.
**Configuring the Inference Model** The question classifier relies on the natural language processing capabilities of the LLM to categorize text. You will need to configure an inference model for the classifier. Before configuring this model, you might need to complete the model setup in "System Settings - Model Provider". The specific configuration method can be found in the [model configuration instructions](https://docs.dify.ai/v/zh-hans/guides/model-configuration). After selecting a suitable model, you can configure its parameters.

**Writing Classification Conditions** You can manually add multiple classifications by composing keywords or descriptive sentences that fit each classification. Based on the descriptions of these conditions, the question classifier can route the dialogue to the appropriate process path according to the semantics of the user's input.

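The routing behavior can be sketched as follows. In the real node an LLM judges the semantics; in this toy illustration a word-overlap score plays the LLM's role, purely to show how classification descriptions drive the route. The class names and descriptions here are made-up examples.

```python
# Toy stand-in for the question classifier: pick the class whose
# description best matches the user's input, then route accordingly.
import re

classes = {
    "after_sales": "refund, return, repair, warranty questions",
    "product_info": "feature, pricing, availability questions",
}

def words(text: str) -> set[str]:
    # Normalize to lowercase word sets, ignoring punctuation.
    return set(re.findall(r"[a-z]+", text.lower()))

def classify(user_input: str) -> str:
    # Choose the class whose description shares the most words
    # with the input (a crude proxy for semantic matching).
    w = words(user_input)
    return max(classes, key=lambda c: len(w & words(classes[c])))

print(classify("How do I request a refund for my order?"))  # after_sales
```

Each classification then connects to its own downstream branch in the workflow.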
@ -1,2 +1,12 @@
# Tools

Within a workflow, Dify provides both built-in and customizable tools. Before utilizing these tools, you need to "authorize" them. If the built-in tools do not meet your requirements, you can create custom tools within "Dify—Tools".

<figure><img src="../../../.gitbook/assets/image.png" alt=""><figcaption></figcaption></figure>

Configuring a tool node generally involves two steps:

1. **Authorizing the Tool/Creating Custom Tools**
2. **Configuring Tool Inputs and Parameters**

For guidance on creating custom tools and configuring them, please refer to the tool configuration instructions.

@ -1,2 +1,11 @@
# Variable Assigner

The Variable Assigner node serves as a hub for collecting branch outputs within the workflow, ensuring that regardless of which branch is taken, the output can be referenced through a single variable. The output can subsequently be manipulated by downstream nodes.

<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=YWFlNDlhZWRmN2U5NmI1MzIyNzcwNDIxOGRjNmFmMjVfV29TVmt0SHdMdjlXRGR2TFdGdFdraGV2SFIxWmt3OFpfVG9rZW46WVJKZ2I5a1Ztb0RMRnp4ZnpQZGN5Z1Y5bnliXzE3MTI1OTc2NDg6MTcxMjYwMTI0OF9WNA" alt=""><figcaption></figcaption></figure>

Variable Assigner supports multiple output variable types, including `String`, `Number`, `Object`, and `Array`. Given the specified output type, you may add input variables to the node from the dropdown list of variables. The list of variables is derived from previous branch outputs and automatically filtered based on the specified type.

<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=MmQ5OGY3OTBiZjdkYjUzM2M4ODBmMWFmYTc0YzUwMzJfY0JJWkJBcWd1WlVXQTJ4RW56Uk11WE1pOEtZeXN6dzJfVG9rZW46TjA5TGJqWmF4b3VIb3R4eUI4ZWNkUWVkbnNmXzE3MTI1OTc0Mjg6MTcxMjYwMTAyOF9WNA" alt="" width="375"><figcaption></figcaption></figure>

Variable Assigner exposes a single `output` variable of the specified type for downstream use.

@ -4,3 +4,6 @@ description: Checklist
# Checklist

Before entering debug mode, you can check the checklist to see whether any nodes have incomplete configurations or have not been connected.

<figure><img src="../../../.gitbook/assets/image (8).png" alt=""><figcaption></figcaption></figure>

@ -4,3 +4,6 @@ description: History
# History

In the "Run History," you can view the run results and log information from the historical debugging of the current workflow.

<figure><img src="../../../.gitbook/assets/image (9).png" alt=""><figcaption></figcaption></figure>

@ -4,3 +4,10 @@ description: Log
# Log

Clicking "View Log—Details" allows you to see a comprehensive overview of the run, including information on input/output, metadata, and more, in the details section.

<figure><img src="../../../.gitbook/assets/image (6).png" alt=""><figcaption></figcaption></figure>

Clicking "View Log—Trace" enables you to review the input/output, token consumption, and runtime duration of each node throughout the complete execution of the workflow.

<figure><img src="../../../.gitbook/assets/image (7).png" alt=""><figcaption></figcaption></figure>

@ -0,0 +1,13 @@
# Preview\&Run

Dify Workflow offers a comprehensive set of execution and debugging features. In conversational applications, clicking "Preview" enters debugging mode.

<figure><img src="../../../.gitbook/assets/image (1).png" alt=""><figcaption></figcaption></figure>

In workflow applications, clicking "Run" enters debugging mode.

<figure><img src="../../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>

Once in debugging mode, you can debug the configured workflow using the interface on the right side of the screen.

<figure><img src="../../../.gitbook/assets/image (5).png" alt=""><figcaption></figcaption></figure>

@ -0,0 +1,9 @@
# Step Test

Workflow supports step-by-step debugging of nodes, so you can repeatedly test whether the execution of the current node meets expectations.

<figure><img src="../../../.gitbook/assets/image (3).png" alt=""><figcaption></figcaption></figure>

After running a step test, you can review the execution status, input/output, and metadata information.

<figure><img src="../../../.gitbook/assets/image (4).png" alt=""><figcaption></figcaption></figure>

@ -1,2 +1,19 @@
# Publish

After completing debugging, click "Publish" in the upper right corner to save the workflow and quickly release it as different types of applications.

<figure><img src="../../.gitbook/assets/image (11).png" alt=""><figcaption></figcaption></figure>

Conversational applications can be published as:

* Run App
* Embed into Site
* Access API Reference

Workflow applications can be published as:

* Run App
* Batch Run App
* Access API Reference

You can also click "Restore" to preview the last published version of the application. Confirming the restore will overwrite the current workflow version with the last published one.

@ -33,7 +33,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:

<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.

4. Obtain the model UID

@ -22,7 +22,7 @@ Click the "Create Application" button on the homepage to create an application.
After the application is successfully created, it will automatically redirect to the application overview page. Click "**Prompt Eng.**" in the left-hand menu to compose the application.

<figure><img src="../../../.gitbook/assets/image (2) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

**2.1 Fill in Prompts**

@ -44,19 +44,19 @@ To add the opening dialogue, click the "Add Feature" button in the upper left co
Then edit the opening remarks:

 (1).png>)
 (1) (1).png>)

**2.2 Adding Context**

If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../../../features/datasets/) feature. Click the "Add" button in the context section to add a knowledge base.

 (1).png>)
 (1) (1).png>)

**2.3 Debugging**

Fill in the user input on the right side and debug the input content.

 (1).png>)
 (1) (1).png>)

If the results are not satisfactory, you can adjust the prompts and model parameters. Click the model name in the upper right corner to set the model's parameters:

@ -30,13 +30,13 @@ Prompts are used to give a series of instructions and constraints to the AI resp
The prompt we are filling in here is: `Translate the content to: {{language}}. The content is as follows:`

 (1).png>)
 (1) (1).png>)

**2.2 Adding Context**

If the application wants to generate content based on private contextual conversations, our [knowledge](../../../features/datasets/) feature can be used. Click the "Add" button in the context section to add a knowledge base.

 (1).png>)
 (1) (1).png>)

**2.3 Adding Feature: Generate more like this**

@ -94,7 +94,7 @@ _I want you to act as an IT Expert in my Notion workspace, using your knowledge
It's recommended to initially enable the AI to actively furnish users with a starter sentence, providing a clue as to what they can ask. Furthermore, activating the "Speech to Text" feature allows users to interact with your AI assistant using their voice.

<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

Finally, click the "Publish" button at the top right of the page. Now you can click the public URL in the "Overview" section to converse with your personalized AI assistant!

@ -32,7 +32,7 @@ Currently we support the following plugins:
We can choose the plugins needed for this conversation before the conversation starts.

<figure><img src="../../.gitbook/assets/image (4) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (4) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

If you use the Google search plugin, you need to configure the SerpAPI key.

@ -48,7 +48,7 @@ Chat supports knowledge. After selecting the knowledge, the questions asked by t
We can select the knowledge needed for this conversation before the conversation starts.

<figure><img src="../../.gitbook/assets/image (5) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (5) (1) (1).png" alt=""><figcaption></figcaption></figure>

### The process of thinking

@ -16,7 +16,7 @@ If you have the requirement to fill in variables when you apply the layout, you
Fill in the necessary content and click the "Start Chat" button to start chatting.

<figure><img src="../../.gitbook/assets/image (8) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (8) (1) (1).png" alt=""><figcaption></figcaption></figure>

Hover over the AI's answer to copy the conversation content, or give the answer a "like" or "dislike".

@ -52,4 +52,4 @@ _Please make sure that the device environment you are using is authorized to use
If the "Quotations and Attribution" feature is enabled during application orchestration, the dialogue responses will automatically show the quoted knowledge document sources.

<figure><img src="../../.gitbook/assets/image (3) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

@ -35,7 +35,7 @@ Click the "Run Batch" tab to enter the batch run page.
Click the "Download Template" button to download the template. Edit the template, fill in the content, and save it as a `.csv` file.

<figure><img src="../../.gitbook/assets/image (13) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (13) (1) (1).png" alt=""><figcaption></figcaption></figure>

#### Step 3 Upload the file and run

@ -49,7 +49,7 @@ If you need to export the generated content, you can click the download "button"
Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.

<figure><img src="../../.gitbook/assets/image (6) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../.gitbook/assets/image (6) (1) (1).png" alt=""><figcaption></figcaption></figure>

### Generate more similar results