GITBOOK-64: No subject

doc/update-nodes-doc
vincehe 2024-04-08 17:20:12 +00:00 committed by gitbook-bot
parent 908872aed3
commit 02cdbece47
66 changed files with 109 additions and 27 deletions


@@ -19,7 +19,7 @@ The feature provides an alternative system for enhancing retrieval, skipping the
4. Without a match, the query follows the standard LLM or RAG process.
5. Deactivating Annotation Reply stops replies from being matched against the annotations.
-<figure><img src="../.gitbook/assets/image (3).png" alt="" width="563"><figcaption><p>Annotation Reply Process</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/image (3) (1).png" alt="" width="563"><figcaption><p>Annotation Reply Process</p></figcaption></figure>
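The matching flow described in steps 1–5 can be sketched as follows. This is a minimal illustration only: Dify matches queries semantically via embeddings, whereas this sketch uses a simple string-similarity score, and `ANNOTATIONS`, `annotation_reply`, and the `0.8` threshold are hypothetical names and values.

```python
from difflib import SequenceMatcher

# Hypothetical store of saved question -> annotated reply pairs.
ANNOTATIONS = {
    "how do i reset my password": "Go to Settings -> Security -> Reset Password.",
}

def annotation_reply(query: str, threshold: float = 0.8):
    """Return the saved annotation reply if the query is similar enough;
    otherwise return None so the standard LLM/RAG process handles it."""
    best_score, best_reply = 0.0, None
    for saved_query, reply in ANNOTATIONS.items():
        score = SequenceMatcher(None, query.lower(), saved_query).ratio()
        if score > best_score:
            best_score, best_reply = score, reply
    return best_reply if best_score >= threshold else None
```

When no annotation clears the threshold, the caller falls through to the normal LLM or RAG path, matching step 4 above.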
## Activation


@@ -78,9 +78,9 @@ Modify Documents For technical reasons, if developers make the following changes
Dify supports customizing the segmented and cleaned text by adding, deleting, and editing paragraphs, so you can dynamically adjust segmentation to make your knowledge base more accurate. Click **Document --> Paragraph --> Edit** in the knowledge base to modify paragraph content and custom keywords. Click **Document --> Paragraph --> Add segment --> Add a segment** to manually add a new paragraph, or click **Document --> Paragraph --> Add segment --> Batch add** to add new paragraphs in bulk.
-<figure><img src="../../.gitbook/assets/image (3) (1) (1).png" alt=""><figcaption><p>Edit</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption><p>Edit</p></figcaption></figure>
-<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1).png" alt=""><figcaption><p>add</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>add</p></figcaption></figure>
### Disabling and Archiving of Documents


@@ -39,11 +39,11 @@ Create an integration in your [integration's settings](https://www.notion.so/my-
Click the "**New integration**" button. The type is Internal by default (and cannot be modified). Select the associated space, enter the name, upload the logo, and click "**Submit**" to create the integration.
-<figure><img src="../../.gitbook/assets/image (4).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>
Once the integration is created, you can update its settings as needed under the **Capabilities** tab, then click the "**Show**" button under **Secrets** and copy the secret.
-<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
Copy it, then go back to the Dify source code and configure the related environment variables in the **.env** file as follows:
@@ -57,11 +57,11 @@ Copy it and back to the Dify source code , in the **.env** file configuration re
To toggle the switch to public settings, you need to **fill in additional information in the Organization Information** form below, including your company name, website, and Retargeting URL, and click the "Submit" button.
-<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
After your integration has been successfully made public in your [integrations settings page](https://www.notion.so/my-integrations), you will be able to access the integration's secrets in the **Secrets** tab.
-<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
Go back to the Dify source code and configure the related environment variables in the **.env** file as follows:


@@ -8,7 +8,7 @@ Developers can utilize this technology to cost-effectively build AI-powered cust
In the diagram below, when a user asks, "Who is the President of the United States?", the system doesn't directly relay the question to the large model for an answer. Instead, it first conducts a vector search in a knowledge base (like Wikipedia, as shown in the diagram) for the user's query. It finds relevant content through semantic similarity matching (for instance, "Biden is the current 46th President of the United States…"), and then provides the user's question along with the found knowledge to the large model. This enables the model to have sufficient and complete knowledge to answer the question, thereby yielding a more reliable response.
-<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption><p>Basic Architecture of RAG</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1).png" alt=""><figcaption><p>Basic Architecture of RAG</p></figcaption></figure>
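The retrieve-then-augment flow in the paragraph above can be sketched like this. It is a toy illustration under stated assumptions: the `KNOWLEDGE` entries and their three-dimensional embedding vectors are made up, and real systems use a vector store plus a learned embedding model.

```python
import math

# Toy knowledge base: (text, embedding) pairs with invented vectors.
KNOWLEDGE = [
    ("Biden is the current 46th President of the United States.", [0.9, 0.1, 0.0]),
    ("Paris is the capital of France.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def retrieve(query_embedding, top_k=1):
    """Return the top_k most semantically similar passages."""
    ranked = sorted(KNOWLEDGE, key=lambda kv: cosine(query_embedding, kv[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

def build_prompt(question, query_embedding):
    """Prepend the retrieved passages so the model answers from them."""
    context = "\n".join(retrieve(query_embedding))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

The assembled prompt, not the bare question, is what gets sent to the large model.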
## Why is this necessary?


@@ -29,7 +29,7 @@ In most text search scenarios, it's crucial to ensure that the most relevant res
In Hybrid Search, vector and keyword indices are pre-established in the database. Upon user query input, the system searches for the most relevant text in documents using both search methods.
-<figure><img src="../../.gitbook/assets/image (2) (1).png" alt=""><figcaption><p>Hybrid Search</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (2) (1) (1).png" alt=""><figcaption><p>Hybrid Search</p></figcaption></figure>
"Hybrid Search" doesn't have a definitive definition; this article exemplifies it as a combination of Vector Search and Keyword Search. However, the term can also apply to other combinations of search algorithms. For instance, we could combine knowledge graph technology, used for retrieving entity relationships, with Vector Search.


@@ -9,11 +9,11 @@ Workflow reduces system complexity by breaking complex tasks into smaller steps
To address the complexity of user intent recognition in natural language inputs, Chatflow provides problem-understanding nodes such as question classification, question rewriting, and sub-question splitting. It also gives the LLM the ability to interact with the external environment through tool invocation, e.g., online search, mathematical calculation, weather queries, and drawing.
-<figure><img src="../../.gitbook/assets/image.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (15).png" alt=""><figcaption></figcaption></figure>
To handle complex business logic in automation and batch-processing scenarios, Workflow provides a wealth of logic nodes, such as code nodes, IF/ELSE nodes, merge nodes, and template conversion nodes. It will also provide the ability to trigger workflows by time and by events, facilitating the construction of automated processes.
-<figure><img src="../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
### Common Cases


@@ -42,8 +42,8 @@ Variables are crucial for linking the input and output of nodes within a workflo
* **Chatflow Entry**:
-<figure><img src="../../.gitbook/assets/image (10).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (10) (1).png" alt=""><figcaption></figcaption></figure>
* **Workflow Entry**:
-<figure><img src="../../.gitbook/assets/image (14).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (14) (1).png" alt=""><figcaption></figcaption></figure>


@@ -4,3 +4,20 @@ description: Answer
# Answer
The Answer node defines the reply content in a Chatflow process. In a text editor, you have the flexibility to determine the reply format: a fixed block of text, output variables from preceding steps, or custom text merged with variables.
The Answer node can be inserted at any point to dynamically deliver content into the dialogue responses. It supports a live-editing configuration mode in which text and image content can be arranged together. The configurations include:
1. Outputting the reply content from a Language Model (LLM) node.
2. Outputting generated images.
3. Outputting plain text.
Example 1: Output plain text.
<figure><img src="../../../.gitbook/assets/image (8).png" alt=""><figcaption></figcaption></figure>
Example 2: Output image and LLM reply.
<figure><img src="../../../.gitbook/assets/image (6).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (7).png" alt="" width="275"><figcaption></figcaption></figure>
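Merging custom text with upstream variables, as described above, amounts to template substitution. The sketch below is hypothetical: `render_answer` and the `{{variable}}` placeholder syntax are illustrative inventions, not Dify's actual variable syntax.

```python
import re

def render_answer(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with outputs from preceding nodes.
    Unknown variables render as empty strings."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), "")),
        template,
    )
```

A reply template like `"Summary: {{llm_text}}"` would combine fixed text with the LLM node's output at response time.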


@@ -1,2 +1,13 @@
# End
Defining the Final Output Content of a Workflow Process. Every workflow needs at least one "End" node to output the final result after full execution.
The "End" node serves as the termination point of the process, beyond which no further nodes can be added. In workflow applications, execution results are only output when the process reaches the "End" node. If the process involves conditional branching, multiple "End" nodes must be defined.
Single-Path Execution Example:
<figure><img src="../../../.gitbook/assets/image (2).png" alt=""><figcaption></figcaption></figure>
Multi-Path Execution Example:
<figure><img src="../../../.gitbook/assets/image (5).png" alt=""><figcaption></figcaption></figure>
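The rule that each conditional branch must terminate in its own "End" node can be sketched as ordinary branching logic. The function name, the `60` cutoff, and the payload shape are hypothetical examples, not part of the product.

```python
def run_workflow(score: int) -> dict:
    """Each branch returns through its own 'end' payload, mirroring a
    workflow where an IF/ELSE node leads to two separate End nodes."""
    if score >= 60:
        return {"end": "pass_branch", "message": f"Passed with {score}"}
    return {"end": "fail_branch", "message": f"Failed with {score}"}
```

Whichever branch executes, the process only produces output when it reaches an End node; no nodes can follow it.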


@@ -1,2 +1,38 @@
# LLM
Invoking a Large Language Model for Question Answering or Natural Language Processing. Within an LLM node, you can select an appropriate model, compose prompts, set the context referenced in the prompts, configure memory settings, and adjust the memory window size.
<figure><img src="../../../.gitbook/assets/image (9).png" alt=""><figcaption></figcaption></figure>
Configuring an LLM node primarily involves two steps:
1. Selecting a model
2. Composing system prompts
**Model Configuration**
Before selecting a model suitable for your task, you must complete the model configuration in "System Settings—Model Provider". The specific configuration method can be referenced in the model configuration instructions. After selecting a model, you can configure its parameters.
<figure><img src="../../../.gitbook/assets/image (10).png" alt=""><figcaption></figcaption></figure>
**Write Prompts**
Within an LLM node, you can customize the model input prompts. If you choose a conversational model, you can customize the content of system prompts, user messages, and assistant messages.
For instance, in a knowledge base Q\&A scenario, after linking the "Result" variable from the knowledge base retrieval node in "Context", inserting the "Context" special variable in the prompts will use the text retrieved from the knowledge base as the context background information for the model input.
<figure><img src="../../../.gitbook/assets/image (12).png" alt=""><figcaption></figcaption></figure>
In the prompt editor, you can bring up the variable insertion menu by typing "/" or "{" to insert special variable blocks or variables from preceding flow nodes into the prompts as context content.
<figure><img src="../../../.gitbook/assets/image (13).png" alt="" width="375"><figcaption></figcaption></figure>
If you opt for a completion model, the system provides preset prompt templates for conversational applications. You can customize the content of the prompts and insert special variable blocks like "Conversation History" and "Context" at appropriate positions by typing "/" or "{", enabling richer conversational functionalities.
<figure><img src="../../../.gitbook/assets/image (14).png" alt=""><figcaption></figcaption></figure>
**Memory Toggle Settings** In conversational applications (Chatflow), the LLM node defaults to enabling system memory settings. In multi-turn dialogues, the system stores historical dialogue messages and passes them into the model. In workflow applications (Workflow), system memory is turned off by default, and no memory setting options are provided.
**Memory Window Settings** If the memory window setting is off, the system dynamically passes historical dialogue messages according to the model's context window. With the memory window setting on, you can configure the number of historical dialogue messages to pass based on your needs.
**Dialogue Role Name Settings** Due to differences in model training phases, different models adhere to role name commands to varying degrees, such as Human/Assistant, Human/AI, 人类/助手, etc. To adapt to the prompt response effects of multiple models, the system allows setting dialogue role names, modifying the role prefix in conversation history.
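The memory-window behavior above can be sketched as trimming the history before each model call. This is a simplified illustration: `build_model_input` is a hypothetical helper, and `window=None` stands in (crudely) for the dynamic, context-limit-based trimming the system performs when the window setting is off.

```python
def build_model_input(system_prompt, history, user_message, window=4):
    """Assemble the message list for one model call, keeping only the
    most recent `window` history messages when the window is enabled."""
    kept = history if window is None else history[-window:]
    return [
        {"role": "system", "content": system_prompt},
        *kept,
        {"role": "user", "content": user_message},
    ]
```

With the window enabled, older turns silently fall out of the model input once the configured count is exceeded.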


@@ -1,2 +1,20 @@
# Start
The start node defines the initial parameters of a workflow: you customize the input variables that kick off execution. Every workflow requires a start node, which acts as the entry point and foundation for the workflow's execution path.
<figure><img src="../../../.gitbook/assets/image (20).png" alt=""><figcaption></figcaption></figure>
Within the "Start" node, you can define input variables of four types:
* **Text**: For short, simple text inputs like names, identifiers, or any other concise data.
* **Paragraph**: Supports longer text entries, suitable for descriptions, detailed queries, or any extensive textual data.
* **Dropdown Options**: Allows the selection from a predefined list of options, enabling users to choose from a set of predetermined values.
* **Number**: For numeric inputs, whether integers or decimals, to be used in calculations, quantities, identifiers, etc.
<figure><img src="../../../.gitbook/assets/image (24).png" alt=""><figcaption></figcaption></figure>
Once the configuration is completed, the workflow's execution will prompt for the values of the variables defined in the start node. This step ensures that the workflow has all the necessary information to proceed with its designated processes.
<figure><img src="../../../.gitbook/assets/image (37).png" alt=""><figcaption></figcaption></figure>
**Tip:** In Chatflow, the start node provides system-built variables: `sys.query` and `sys.files`. `sys.query` is utilized for user question input in conversational applications, enabling the system to process and respond to user queries. `sys.files` is used for file uploads within the conversation, such as uploading an image to understand its content. This requires the integration of image understanding models or tools designed for processing image inputs, allowing the workflow to interpret and act upon the uploaded files effectively.
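Validating user input against the four start-node variable types can be sketched as below. This is an assumption-laden illustration: `validate_start_inputs`, the `schema` shape, the type names, and the 48-character limit on short text are all hypothetical.

```python
def validate_start_inputs(schema, inputs):
    """Check user-supplied values against the four Start-node types."""
    checks = {
        "text": lambda v: isinstance(v, str) and len(v) <= 48,  # assumed limit
        "paragraph": lambda v: isinstance(v, str),
        "number": lambda v: isinstance(v, (int, float)),
    }
    for name, spec in schema.items():
        value = inputs.get(name)
        if spec["type"] == "select":  # dropdown options
            if value not in spec["options"]:
                raise ValueError(f"{name}: {value!r} not in options")
        elif not checks[spec["type"]](value):
            raise ValueError(f"{name}: invalid {spec['type']}")
    return True
```

Execution would only proceed once every declared variable passes its type check, matching the prompt-for-values step described above.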


@@ -33,7 +33,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:
-<figure><img src="../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
4. Obtain the model UID


@@ -22,7 +22,7 @@ Click the "Create Application" button on the homepage to create an application.
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “**Prompt Eng.**” to compose the application.
-<figure><img src="../../../.gitbook/assets/image (2) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (2) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
**2.1 Fill in Prompts**
@@ -44,19 +44,19 @@ To add the opening dialogue, click the "Add Feature" button in the upper left co
And then edit the opening remarks:
-![](<../../../.gitbook/assets/image (15).png>)
+![](<../../../.gitbook/assets/image (15) (1).png>)
**2.2 Adding Context**
If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../../../features/datasets/) feature. Click the "Add" button in the context to add a knowledge base.
-![](<../../../.gitbook/assets/image (9).png>)
+![](<../../../.gitbook/assets/image (9) (1).png>)
**2.3 Debugging**
Fill in the user input on the right side and debug the output content.
-![](<../../../.gitbook/assets/image (11).png>)
+![](<../../../.gitbook/assets/image (11) (1).png>)
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:


@@ -30,13 +30,13 @@ Prompts are used to give a series of instructions and constraints to the AI resp
The prompt we are filling in here is: `Translate the content to: {{language}}. The content is as follows:`
-![](<../../../.gitbook/assets/image (7).png>)
+![](<../../../.gitbook/assets/image (7) (1).png>)
**2.2 Adding Context**
If the application wants to generate content based on private contextual conversations, our [knowledge](../../../features/datasets/) feature can be used. Click the "Add" button in the context to add a knowledge base.
-![](<../../../.gitbook/assets/image (12).png>)
+![](<../../../.gitbook/assets/image (12) (1).png>)
**2.3 Adding Feature: Generate more like this**


@@ -94,7 +94,7 @@ _I want you to act as an IT Expert in my Notion workspace, using your knowledge
It's recommended to initially enable the AI to actively furnish the users with a starter sentence, providing a clue as to what they can ask. Furthermore, activating the 'Speech to Text' feature can allow users to interact with your AI assistant using their voice.
-<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
Finally, click the "Publish" button on the top right of the page. Now you can click the public URL in the "Overview" section to converse with your personalized AI assistant!


@@ -32,7 +32,7 @@ Currently we support the following plugins:
We can choose the plugins needed for this conversation before the conversation starts.
-<figure><img src="../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (4) (1) (1).png" alt=""><figcaption></figcaption></figure>
If you use the Google search plugin, you need to configure the SerpAPI key.
@@ -48,7 +48,7 @@ Chat supports knowledge. After selecting the knowledge, the questions asked by t
We can select the knowledge needed for this conversation before the conversation starts.
-<figure><img src="../../.gitbook/assets/image (5).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (5) (1).png" alt=""><figcaption></figcaption></figure>
### The process of thinking


@@ -16,7 +16,7 @@ If you have the requirement to fill in variables when you apply the layout, you
Fill in the necessary content and click the "Start Chat" button to start chatting.
-<figure><img src="../../.gitbook/assets/image (8).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (8) (1).png" alt=""><figcaption></figcaption></figure>
Hover over the AI's answer to copy the content of the conversation and mark the answer with "like" or "dislike".
@@ -52,4 +52,4 @@ _Please make sure that the device environment you are using is authorized to use
If the "Quotations and Attribution" feature is enabled during the application arrangement, the dialogue returns will automatically show the quoted knowledge document sources.
-<figure><img src="../../.gitbook/assets/image (3) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (3) (1) (1).png" alt=""><figcaption></figcaption></figure>


@@ -35,7 +35,7 @@ Click the "Run Batch" tab to enter the batch run page.
Click the Download Template button to download the template. Edit the template, fill in the content, and save as a `.csv` file.
-<figure><img src="../../.gitbook/assets/image (13).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (13) (1).png" alt=""><figcaption></figcaption></figure>
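The edit-template-then-run loop can be sketched with the standard `csv` module. This is illustrative only: `run_batch`, the `topic` column, and the `app` callable are hypothetical, and the real batch runner consumes an uploaded file rather than an in-memory one.

```python
import csv
import io

def run_batch(template_fields, rows, app):
    """Write rows in the shape of the downloaded CSV template, then
    read them back and run the app once per row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=template_fields)
    writer.writeheader()
    writer.writerows(rows)
    buf.seek(0)
    return [app(row) for row in csv.DictReader(buf)]
```

Each CSV row supplies one set of input variables, and each produces one generated result, matching the batch-run behavior described above.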
#### Step 3 Upload the file and run
@@ -49,7 +49,7 @@ If you need to export the generated content, you can click the "Download" button
Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.
-<figure><img src="../../.gitbook/assets/image (6).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (6) (1).png" alt=""><figcaption></figcaption></figure>
### Generate more similar results