GITBOOK-65: No subject

doc/update-nodes-doc
vincehe 2024-04-08 17:31:01 +00:00 committed by gitbook-bot
parent 02cdbece47
commit 37db6f6ba4
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
11 changed files with 79 additions and 1 deletions

Binary files not shown (five image assets changed: 355→189 KiB, 127→213 KiB, 108→223 KiB, 18→294 KiB, and one new 225 KiB file).

View File

@@ -1,2 +1,7 @@
# Code
The Code node provides the ultimate level of flexibility, allowing developers to inject custom Python or JavaScript scripts into their workflows and manipulate variables in ways pre-defined nodes cannot. The configuration fields let you define the expected input/output variables and write the code to execute:
<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=ZDVmYjg5MjUwYzA3ZjhkODAxOWQxZWI3OGRiZjE1ZDZfTTljRjd1YmlkTFlWRjRqZ0g4QzhsQVhkWFJPcnVPOVlfVG9rZW46T1c1VGJZR1JLb1pjNDJ4ZTV2NmMxN25WbmxoXzE3MTI1OTczNDc6MTcxMjYwMDk0N19WNA" alt="" width="375"><figcaption></figcaption></figure>
The execution environment is sandboxed for both Python and JavaScript, meaning that certain functionalities that require extensive system resources or pose security risks are not available. This includes, but is not limited to, direct file system access, network calls, and operating system-level commands.
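As a hedged illustration (the exact signature is determined by the input/output variables you configure in the node), a common shape for a Python Code node body is a `main` function that takes the declared inputs and returns a dict keyed by the declared outputs:

```python
# Hypothetical Code node body: parameter names and the output keys below
# are examples, not a fixed schema — they must match the variables
# configured in the node's input/output fields.
def main(arg1: str, arg2: str) -> dict:
    # Pure string/number manipulation works fine; file system, network,
    # and OS-level access are blocked by the sandbox.
    combined = f"{arg1} {arg2}".strip()
    return {
        "result": combined,
        "word_count": len(combined.split()),
    }
```

Calling `main("hello", "world")` would return `{"result": "hello world", "word_count": 2}`, which downstream nodes can then reference by the declared output variable names.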

View File

@@ -1,2 +1,5 @@
# IF/ELSE
The IF/ELSE Node allows you to split a workflow into two branches based on if/else conditions. In this node, you can set one or more IF conditions. When the IF condition(s) are met, the workflow proceeds to the next step under the "IS TRUE" branch. If the IF condition(s) are not met, the workflow triggers the next step under the "IS FALSE" branch.
<figure><img src="../../../.gitbook/assets/image (58).png" alt=""><figcaption></figcaption></figure>
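A minimal sketch (not Dify's implementation) of how branch selection behaves, assuming AND logic across the configured conditions and a few hypothetical operator names:

```python
from typing import Any, Callable

# Hypothetical operator table — real nodes expose their own operator set.
OPERATORS: dict[str, Callable[[Any, Any], bool]] = {
    "contains": lambda value, target: target in value,
    "equals": lambda value, target: value == target,
    "not empty": lambda value, _target: bool(value),
}

def choose_branch(conditions: list[dict], variables: dict) -> str:
    # All IF conditions must hold for the workflow to take the IS TRUE branch.
    met = all(
        OPERATORS[c["operator"]](variables.get(c["variable"]), c.get("value"))
        for c in conditions
    )
    return "IS TRUE" if met else "IS FALSE"
```

For example, a condition `query contains "refund"` evaluated against the input `"I want a refund"` routes to the `IS TRUE` branch.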

View File

@@ -1,2 +1,30 @@
# Knowledge Retrieval
The Knowledge Base Retrieval Node is designed to query text content related to user questions from the Dify Knowledge Base, which can then be used as context for subsequent answers by the Large Language Model (LLM).
<figure><img src="../../../.gitbook/assets/image (44).png" alt=""><figcaption></figcaption></figure>
Configuring the Knowledge Base Retrieval Node involves three main steps:
1. **Selecting the Query Variable**
2. **Choosing the Knowledge Base for Query**
3. **Configuring the Retrieval Strategy**
**Selecting the Query Variable**
In knowledge base retrieval scenarios, the query variable typically represents the user's input question. In the "Start" node of conversational applications, the system pre-sets "sys.query" as the user input variable. This variable can be used to query the knowledge base for text segments most closely related to the user's question.
**Choosing the Knowledge Base for Query**
Within the knowledge base retrieval node, you can add an existing knowledge base from Dify. For instructions on creating a knowledge base within Dify, please refer to the knowledge base [help documentation](https://docs.dify.ai/v/zh-hans/guides/knowledge-base).
**Configuring the Retrieval Strategy**
It's possible to modify the indexing strategy and retrieval mode for an individual knowledge base within the node. For a detailed explanation of these settings, refer to the knowledge base [help documentation](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/hybrid-search).
<figure><img src="../../../.gitbook/assets/image (49).png" alt=""><figcaption></figcaption></figure>
Dify offers two recall strategies for different knowledge base retrieval scenarios: "N-choose-1 Recall" and "Multi-way Recall". In the N-choose-1 mode, knowledge base queries are executed through function calling, requiring the selection of a system reasoning model. In the multi-way recall mode, a Rerank model needs to be configured for result re-ranking. For a detailed explanation of these two recall strategies, refer to the retrieval mode explanation in the [help documentation](https://docs.dify.ai/v/zh-hans/learn-more/extended-reading/retrieval-augment/retrieval).
<figure><img src="../../../.gitbook/assets/image (51).png" alt=""><figcaption></figcaption></figure>
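The multi-way recall flow can be sketched as follows — this is an illustration only, with a naive term-overlap function standing in for a real Rerank model:

```python
def rerank_score(query: str, text: str) -> float:
    # Stand-in for a real Rerank model: fraction of query terms found in the chunk.
    q_terms = set(query.lower().split())
    t_terms = set(text.lower().split())
    return len(q_terms & t_terms) / (len(q_terms) or 1)

def multi_way_recall(query: str, knowledge_bases: dict[str, list[str]],
                     top_k: int = 3) -> list[str]:
    # Pool candidate chunks from every selected knowledge base,
    # then re-rank the combined pool and keep the top_k results.
    candidates = [chunk for chunks in knowledge_bases.values() for chunk in chunks]
    candidates.sort(key=lambda c: rerank_score(query, c), reverse=True)
    return candidates[:top_k]
```

The key design point is that candidates from all knowledge bases are merged into one pool before re-ranking, so the final context is ordered by relevance rather than by source.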

View File

@@ -11,7 +11,7 @@ Configuring an LLM node primarily involves two steps:
**Model Configuration**&#x20;
Before selecting a model suitable for your task, you must complete the model configuration in "System Settings—Model Provider". The specific configuration method is described in the [model configuration instructions](https://docs.dify.ai/v/zh-hans/guides/model-configuration). After selecting a model, you can configure its parameters.
<figure><img src="../../../.gitbook/assets/image (10).png" alt=""><figcaption></figcaption></figure>
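For illustration only (this is not Dify's actual configuration schema), the parameters exposed after selecting a model typically look like:

```python
# Hypothetical sketch of an LLM node's configuration — field names are
# illustrative, not Dify's real schema.
llm_node_config = {
    "provider": "openai",        # set up under System Settings—Model Provider
    "model": "gpt-4",
    "parameters": {
        "temperature": 0.7,      # randomness of sampling
        "top_p": 1.0,            # nucleus sampling cutoff
        "max_tokens": 512,       # cap on generated tokens
    },
}
```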

View File

@@ -1,2 +1,17 @@
# Question Classifier
The Question Classifier node defines the categorization conditions for user queries, enabling the LLM to direct the progression of the dialogue based on these categorizations. As illustrated in a typical customer service chatbot scenario, the question classifier can serve as a preliminary step before knowledge base retrieval, identifying user intent. Classifying user intent before retrieval can significantly enhance the recall efficiency of the knowledge base.
<figure><img src="../../../.gitbook/assets/image (54).png" alt=""><figcaption></figcaption></figure>
Configuring the Question Classifier Node involves three main components:
1. **Selecting the Input Variable**
2. **Configuring the Inference Model**
3. **Writing the Classification Conditions**
**Selecting the Input Variable**
In conversational customer service scenarios, you can use the user input variable from the "Start" node (sys.query) as the input for the question classifier. In automated/batch processing scenarios, customer feedback or email content can be used as input variables.
**Configuring the Inference Model**
The question classifier relies on the natural language processing capabilities of the LLM to categorize text, so you need to configure an inference model for the classifier. Before configuring this model, you might need to complete the model setup in "System Settings—Model Provider". The specific configuration method can be found in the model configuration instructions. After selecting a suitable model, you can configure its parameters.
**Writing the Classification Conditions**
You can manually add multiple classifications by composing keywords or descriptive sentences that fit each classification. Based on these descriptions, the question classifier routes the dialogue to the appropriate process path according to the semantics of the user's input.
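The classification step can be sketched as follows — a hypothetical illustration, with `call_llm` as a stub for the configured inference model and made-up category names:

```python
from typing import Callable

def build_classifier_prompt(query: str, classes: dict[str, str]) -> str:
    # List each category name with its descriptive sentence.
    lines = [f"- {name}: {description}" for name, description in classes.items()]
    return (
        "Classify the user question into exactly one category.\n"
        "Categories:\n" + "\n".join(lines) +
        f"\nQuestion: {query}\nAnswer with the category name only."
    )

def classify(query: str, classes: dict[str, str],
             call_llm: Callable[[str], str]) -> str:
    answer = call_llm(build_classifier_prompt(query, classes)).strip()
    # Fall back to the first category if the model strays from the list.
    return answer if answer in classes else next(iter(classes))
```

The routing decision then simply follows the returned category name to the matching downstream branch.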

View File

@@ -1,2 +1,29 @@
# Template
The Template node lets you dynamically format and combine variables from previous nodes into a single text-based output using Jinja2, a powerful templating syntax for Python. It's useful for combining data from multiple sources into the specific structure required by subsequent nodes. The simple example below shows how to assemble an article by piecing together various previous outputs:
<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=MzE1NTFmODViNTFmMDQzNTg5YTZhMjFiODdlYjI4ZTFfRHRJVEJpNlNWVWVxYWs0c1I3c09OTzFCUUJoWURndkpfVG9rZW46R210aGJUa3R0b3ByTVV4QVlwc2NXNTFRbnZjXzE3MTI1OTczOTM6MTcxMjYwMDk5M19WNA" alt="" width="375"><figcaption></figcaption></figure>
Beyond basic use cases, you can create more complex templates for a variety of tasks by following Jinja's [documentation](https://jinja.palletsprojects.com/en/3.1.x/templates/). Here's a template that structures retrieved chunks and their metadata from a knowledge retrieval node into formatted Markdown:
```jinja2
{% raw %}
{% for item in chunks %}
### Chunk {{ loop.index }}.
### Similarity: {{ item.metadata.score | default('N/A') }}
#### {{ item.title }}
##### Content
{{ item.content | replace('\n', '\n\n') }}
---
{% endfor %}
{% endraw %}
```
<figure><img src="https://langgenius.feishu.cn/space/api/box/stream/download/asynccode/?code=ZjFhMzg4MzVkNTU2MmY1ZDg0NTVjY2RiMWM5MDU4YmVfRUwybmxWQlZHc0pxNE5CVGx0b0JaYmVDdzlFeEJUMDBfVG9rZW46SktUN2JKaFhLb3hEemV4NVZQMmN1TGhsbmlmXzE3MTI1OTczOTM6MTcxMjYwMDk5M19WNA" alt=""><figcaption></figcaption></figure>
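You can try the same template locally with the `jinja2` package. The `{% raw %}` wrapper above is GitBook escaping, so only the inner template is rendered here; the sample `chunks` data is hypothetical:

```python
from jinja2 import Template

# The template body from above, minus the {% raw %} GitBook escaping.
TEMPLATE = (
    "{% for item in chunks %}\n"
    "### Chunk {{ loop.index }}.\n"
    "### Similarity: {{ item.metadata.score | default('N/A') }}\n"
    "#### {{ item.title }}\n"
    "##### Content\n"
    "{{ item.content | replace('\\n', '\\n\\n') }}\n"
    "---\n"
    "{% endfor %}"
)

# Hypothetical chunks, shaped like knowledge-retrieval output: the second
# has no score, so the default('N/A') filter kicks in.
chunks = [
    {"title": "Intro", "content": "First line.\nSecond line.",
     "metadata": {"score": 0.92}},
    {"title": "Details", "content": "One line only.", "metadata": {}},
]

print(Template(TEMPLATE).render(chunks=chunks))
```

Note how `replace('\n', '\n\n')` doubles the newlines inside each chunk so that single line breaks become Markdown paragraph breaks.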
This Template node can then be used within a Chatflow to return intermediate outputs to the end user before an LLM response is initiated.
> The `Answer` node in a Chatflow is non-terminal. It can be inserted anywhere to output responses at multiple points within the flow.