GITBOOK-84: Translated Sync Data from Website

pull/138/head
allen 2024-07-05 09:16:39 +00:00 committed by gitbook-bot
parent f8870c91b7
commit 7b3b57d03a
No known key found for this signature in database
GPG Key ID: 07D2180C7B12D0FF
83 changed files with 228 additions and 193 deletions

47 binary image files changed (previews not shown; the diff records only the before/after file sizes, ranging roughly from 55 KiB to 857 KiB).

View File

@@ -1,4 +1,4 @@
-# Welcome to Dify!
+# Welcome to Dify
Dify is an open-source large language model (LLM) application development platform. It combines the concepts of Backend-as-a-Service and LLMOps to enable developers to quickly build production-grade generative AI applications. Even non-technical personnel can participate in the definition and data operations of AI applications.
@@ -10,7 +10,7 @@ You can think of libraries like LangChain as toolboxes with hammers, nails, etc.
Importantly, Dify is **open source**, co-created by a professional full-time team and community. You can self-deploy capabilities similar to Assistants API and GPTs based on any model, maintaining full control over your data with flexible security, all on an easy-to-use interface.
> Our community users summarize their evaluation of Dify's products as simple, restrained, and rapid iteration.
>
> \- Lu Yu, Dify.AI CEO
@@ -31,5 +31,5 @@ The name Dify comes from Define + Modify, referring to defining and continuously
* Read [**Quick Start**](https://docs.dify.ai/application/creating-an-application) for an overview of Dify's application building workflow.
* Learn how to [**self-deploy Dify**](https://docs.dify.ai/getting-started/install-self-hosted) to your servers and [**integrate open source models**](https://docs.dify.ai/advanced/model-configuration)**.**
-* Understand Dify's [**specifications and roadmap**](getting-started/readme/specifications-and-technical-features.md)**.**
+* Understand Dify's [**specifications and roadmap**](getting-started/readme/features-and-specifications.md)**.**
* [**Star us on GitHub**](https://github.com/langgenius/dify) and read our **Contributor Guidelines.**

View File

@@ -1,33 +1,31 @@
-# Using Dify Cloud
+# Cloud Services
{% hint style="info" %}
Note: Dify is currently in the Beta testing phase. If there are inconsistencies between the documentation and the product, please refer to the actual product experience.
{% endhint %}
Dify offers a [cloud service](http://cloud.dify.ai) for everyone, so you can use the full functionality of Dify without deploying it yourself. Explore the flexible [Plans and Pricing](https://dify.ai/pricing) and select the plan that best suits your needs and requirements.
Get started now with the [Sandbox plan](http://cloud.dify.ai), which includes a free trial of 200 OpenAI calls, no credit card required. To use the Sandbox plan of the cloud version, you will need a GitHub or Google account, as well as an OpenAI API key. Here's how you can get started:
1. Sign up to [Dify Cloud](https://cloud.dify.ai) and create a new Workspace or join an existing one.
2. Configure your model provider or use our hosted model provider.
-3. You can [create an application](../user-guide/creating-dify-apps/creating-an-application.md) now!
+3. You can [create an application](../guides/application-orchestrate/creating-an-application.md) now!
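Once a chat application exists, it can also be reached over HTTP with the workspace's app API key. A minimal sketch of building such a request (the endpoint path and body fields follow Dify's published chat-message API at the time of writing; the key and user ID below are placeholders, and no network call is made here):

```python
def build_chat_request(api_key: str, query: str, user: str):
    """Return (url, headers, body) for a blocking call to a Dify chat app."""
    url = "https://api.dify.ai/v1/chat-messages"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "inputs": {},            # values for any prompt variables the app defines
        "query": query,          # the end user's question
        "response_mode": "blocking",
        "user": user,            # a stable identifier for the end user
    }
    return url, headers, body

url, headers, body = build_chat_request("app-placeholder-key", "Hello!", "user-42")
print(url)  # https://api.dify.ai/v1/chat-messages
```

Sending the request (for example with `requests.post(url, headers=headers, json=body)`) returns the app's answer; consult the current API reference for streaming mode and the full response schema.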
### FAQs
**Q: How is my data handled and stored when using Dify Cloud?**
A: When you use Dify Cloud, your user data is securely stored on AWS servers located in the US-East region. This includes both the data you actively input and any generated data from your applications. We prioritize your data's security and integrity, ensuring that it is managed with the highest standards of cloud storage solutions.
**Q: What measures are in place to protect my API keys and other sensitive information?**
A: At Dify, we understand the importance of protecting your API keys and other secrets. These are encrypted at rest, which means Dify cannot view them and that only you, the rightful owner, have access to your secrets.
**Q: Can you explain how application data is anonymized in Dify Cloud?**
A: In Dify Cloud, we anonymize application data to ensure privacy and reduce encryption and decryption overheads. This means that the data used by applications is not directly associated with identifiable user accounts. By anonymizing the data, we enhance privacy while maintaining the performance of our cloud services.
**Q: What is the process for deleting my account and all associated data from Dify Cloud?**
A: If you decide to delete your account and remove all associated data from Dify Cloud, you can simply send a request to our support team at support@dify.ai. We are committed to respecting your privacy and data rights, and upon request, we will erase all your data from our systems, adhering to data protection regulations.

View File

@@ -1,11 +1,11 @@
-# Model Providers
+# List of Model Providers
Dify supports the following model providers out of the box:
<table data-full-width="false"><thead><tr><th align="center">Provider</th><th align="center">LLM</th><th align="center">Embedding</th><th align="center">Rerank</th></tr></thead><tbody><tr><td align="center">OpenAI</td><td align="center">✔️(🛠️)(👓)</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Anthropic</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Azure OpenAI</td><td align="center">✔️(🛠️)(👓)</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Google</td><td align="center">✔️(👓)</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Cohere</td><td align="center">✔️</td><td align="center">✔️</td><td align="center">✔️</td></tr><tr><td align="center">Bedrock</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">together.ai</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Ollama</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Replicate</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Hugging Face</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Zhipu AI</td><td align="center">✔️(🛠️)(👓)</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Baichuan</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Spark</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Minimax</td><td align="center">✔️(🛠️)</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">Tongyi</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Wenxin</td><td align="center">✔️</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Moonshot AI</td><td align="center">✔️(🛠️)</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">deepseek</td><td align="center">✔️(🛠️)</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Jina</td><td align="center"></td><td align="center">✔️</td><td align="center">✔️</td></tr><tr><td align="center">ChatGLM</td><td align="center">✔️(🛠️)</td><td align="center"></td><td align="center"></td></tr><tr><td align="center">Xinference</td><td align="center">✔️(🛠️)(👓)</td><td align="center">✔️</td><td align="center">✔️</td></tr><tr><td align="center">OpenLLM</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">LocalAI</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr><tr><td align="center">OpenAI API-Compatible</td><td align="center">✔️</td><td align="center">✔️</td><td align="center"></td></tr></tbody></table>
where (🛠️) denotes Function Calling and (👓) denotes support for vision.
***
-This table is continuously updated. We also keep track of model providers requested by community members [here](https://github.com/langgenius/dify/discussions/categories/ideas). If you'd like to see a model provider not listed above, please consider contributing by making a PR. To learn more, check out our [contributing.md](../../community/contributing.md "mention") Guide.
+This table is continuously updated. We also keep track of model providers requested by community members [here](https://github.com/langgenius/dify/discussions/categories/ideas). If you'd like to see a model provider not listed above, please consider contributing by making a PR. To learn more, check out our [contribution.md](../../community/contribution.md "mention") Guide.

View File

@@ -1,4 +1,4 @@
-# Agent Assistant
+# Agent
## Definition
@@ -8,19 +8,19 @@ An Agent Assistant can leverage the reasoning abilities of large language models
To facilitate quick learning and use, application templates for the Agent Assistant are available in the 'Explore' section. You can integrate these templates into your workspace. The new Dify 'Studio' also allows the creation of a custom Agent Assistant to suit individual requirements. This assistant can assist in analyzing financial reports, composing reports, designing logos, and organizing travel plans.
-<figure><img src="../../../.gitbook/assets/docs-1.png" alt=""><figcaption><p>Explore-Agent Assistant Application Template</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-1.png" alt=""><figcaption><p>Explore-Agent Assistant Application Template</p></figcaption></figure>
After entering 'Studio-Assistant', you can begin orchestrating by choosing the Agent Assistant.
-<figure><img src="../../../.gitbook/assets/docs-2.png" alt=""><figcaption><p>Studio-Create Agent Assistant</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-2.png" alt=""><figcaption><p>Studio-Create Agent Assistant</p></figcaption></figure>
The task completion ability of the Agent Assistant depends on the inference capabilities of the model selected. We recommend using a more powerful model series like GPT-4 when employing Agent Assistant to achieve more stable task completion results.
-<figure><img src="../../../.gitbook/assets/docs-3.png" alt=""><figcaption><p>Selecting the Reasoning Model for Agent Assistant</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-3.png" alt=""><figcaption><p>Selecting the Reasoning Model for Agent Assistant</p></figcaption></figure>
You can write prompts for the Agent Assistant in 'Instructions'. To achieve optimal results, you can clearly define its task objectives, workflow, resources, and limitations in the instructions.
-<figure><img src="../../../.gitbook/assets/docs-4.png" alt=""><figcaption><p>Orchestrating Prompts for Agent Assistant</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-4.png" alt=""><figcaption><p>Orchestrating Prompts for Agent Assistant</p></figcaption></figure>
## Adding Tools for the Agent Assistant
@@ -30,7 +30,7 @@ In the "Tools" section, you are able to add tools that are required for use. The
You have the option to directly use built-in tools in Dify, or you can easily import custom API tools (currently supporting OpenAPI/Swagger and OpenAI Plugin standards).
-<figure><img src="../../../.gitbook/assets/docs-5.png" alt=""><figcaption><p>Adding Tools for the Assistant</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-5.png" alt=""><figcaption><p>Adding Tools for the Assistant</p></figcaption></figure>
The tool allows you to create more powerful AI applications on Dify. For example, you can orchestrate suitable tools for Agent Assistant, enabling it to complete complex tasks through reasoning, step decomposition, and tool invocation. Additionally, the tool facilitates the integration of your application with other systems or services, allowing interaction with the external environment, such as code execution and access to exclusive information sources.
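For the custom-tool import path mentioned above, a minimal OpenAPI 3 description is enough to define one tool operation. The service URL, path, and operation below are hypothetical placeholders, not a real API:

```yaml
# Hypothetical minimal tool schema; replace the server and path with your service.
openapi: 3.1.0
info:
  title: Weather Lookup
  version: "1.0"
servers:
  - url: https://api.example.com
paths:
  /weather:
    get:
      operationId: getWeather
      summary: Current weather for a city
      parameters:
        - name: city
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Weather description text
```

The `operationId` and parameter descriptions are what the agent sees when deciding whether and how to call the tool, so keeping them descriptive matters more than the schema's size.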
@@ -40,22 +40,22 @@ On Dify, two inference modes are provided for Agent Assistant: Function Calling
In the Agent settings, you can modify the iteration limit of the Agent.
-<figure><img src="../../../.gitbook/assets/docs-6.png" alt=""><figcaption><p>Function Calling Mode</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-6.png" alt=""><figcaption><p>Function Calling Mode</p></figcaption></figure>
-<figure><img src="../../../.gitbook/assets/sec-7.png" alt=""><figcaption><p>ReAct Mode</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/sec-7.png" alt=""><figcaption><p>ReAct Mode</p></figcaption></figure>
## Configuring the Conversation Opener
You can set up a conversation opener and initial questions for your Agent Assistant. The configured conversation opener will be displayed at the beginning of each user's first interaction, showcasing the types of tasks the Agent can perform, along with examples of questions that can be asked.
-<figure><img src="../../../.gitbook/assets/docs-8.png" alt=""><figcaption><p>Configuring the Conversation Opener and Initial Questions</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-8.png" alt=""><figcaption><p>Configuring the Conversation Opener and Initial Questions</p></figcaption></figure>
## Debugging and Preview
After orchestrating your Agent Assistant, you have the option to debug and preview it before publishing it as an application. This allows you to assess the effectiveness of the agent in completing tasks.
-<figure><img src="../../../.gitbook/assets/docs-9.png" alt=""><figcaption><p>Debugging and Preview</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-9.png" alt=""><figcaption><p>Debugging and Preview</p></figcaption></figure>
## Application Publish
-<figure><img src="../../../.gitbook/assets/docs-10.png" alt=""><figcaption><p>Publishing the Application as a Webapp</p></figcaption></figure>
+<figure><img src="../../.gitbook/assets/docs-10.png" alt=""><figcaption><p>Publishing the Application as a Webapp</p></figcaption></figure>

View File

@@ -1,8 +1,8 @@
-# Moderation
+# Moderation Tool
In our interactions with AI applications, we often have stringent requirements in terms of content security, user experience, and legal regulations. At this point, we need the "Sensitive Word Review" feature to create a better interactive environment for end-users. On the prompt orchestration page, click "Add Function" and locate the "Content Review" toolbox at the bottom:
-<figure><img src="../.gitbook/assets/content_moderation.png" alt=""><figcaption><p>Content moderation</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/content_moderation.png" alt=""><figcaption><p>Content moderation</p></figcaption></figure>
## Call the OpenAI Moderation API
@@ -10,16 +10,16 @@ OpenAI, along with most companies providing LLMs, includes content moderation fe
Now you can also directly call the OpenAI Moderation API on Dify; you can review either input or output content simply by entering the corresponding "preset reply."
-<figure><img src="../.gitbook/assets/moderation2.png" alt=""><figcaption><p>OpenAI Moderation</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/moderation2.png" alt=""><figcaption><p>OpenAI Moderation</p></figcaption></figure>
## Keywords
Developers can customize the sensitive words they need to review, such as using "kill" as a keyword to perform an audit action when users input. The preset reply content should be "The content is violating usage policies." It can be anticipated that when a user inputs a text snippet containing "kill" at the terminal, it will trigger the sensitive word review tool and return the preset reply content.
-<figure><img src="../.gitbook/assets/moderation3.png" alt=""><figcaption><p>Keywords</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/moderation3.png" alt=""><figcaption><p>Keywords</p></figcaption></figure>
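The keyword mechanism described above can be sketched in a few lines. This is a simplified illustration assuming a case-insensitive substring match; Dify performs the actual check server-side, and the keyword and preset reply here mirror the example in the text:

```python
PRESET_REPLY = "The content is violating usage policies."
KEYWORDS = {"kill"}  # the sensitive words configured by the developer

def review(text: str):
    """Return the preset reply if a keyword appears in the input, else None."""
    lowered = text.lower()
    if any(word in lowered for word in KEYWORDS):
        return PRESET_REPLY
    return None

print(review("how do I kill this process?"))  # keyword matched -> preset reply
print(review("hello there"))                  # no match -> None, input passes
```

Note that substring matching is deliberately crude (it would also flag "skill"); a production word list would typically use word boundaries or a dedicated matcher.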
## Moderation Extension

Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge base ChatBot, an enterprise needs to moderate the query content entered by employees. For this purpose, developers can write an API extension based on the enterprise's internal moderation mechanisms (see [moderation-extension.md](extension/api\_based\_extension/moderation-extension.md "mention")), which can then be called in Dify to achieve a high degree of customization and privacy protection for sensitive word review.

<figure><img src="../../../.gitbook/assets/moderation4.png" alt=""><figcaption><p>Moderation Extension</p></figcaption></figure>
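To illustrate the shape of such an extension, here is a minimal sketch of a moderation handler. The field names (`flagged`, `action`, `preset_response`) follow the API extension conventions but should be checked against the linked reference, and the blocklist is a stand-in for an enterprise's own mechanism:

```python
# Hypothetical moderation-extension handler; the field names ("flagged",
# "action", "preset_response") and the blocklist are illustrative.
BLOCKLIST = {"kill"}

def handle_moderation_request(payload):
    """Flag a query against an enterprise blocklist and choose an action."""
    query = payload.get("query", "")
    if any(word in query.lower() for word in BLOCKLIST):
        return {
            "flagged": True,
            "action": "direct_output",
            "preset_response": "The content is violating usage policies.",
        }
    return {"flagged": False, "action": None, "preset_response": None}
```

Wrapping a function like this in an HTTP endpoint is what allows Dify to call the enterprise's own moderation logic without the data leaving the enterprise's infrastructure.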
# Conversation Assistant

Conversation applications use a one-question-one-answer mode to have a continuous conversation with the user.

Here, we use an interviewer application as an example to introduce the way to compose a conversation application.

Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select **"Chat App"** as the application type.

<figure><img src="../../.gitbook/assets/image (32).png" alt=""><figcaption><p>Create Application</p></figcaption></figure>

#### Step 2: Compose the Application

After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu "**Prompt Eng.**" to compose the application.

<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

**2.1 Fill in Prompts**

The prompt we are filling in here is:

>
> When I am ready, you can start asking questions.

![](<../../.gitbook/assets/image (38).png>)

For a better experience, we will add an opening dialogue: `"Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"`

To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Conversation remarks" feature:

<figure><img src="../../.gitbook/assets/image (21).png" alt=""><figcaption></figcaption></figure>

Then edit the opening remarks:

![](<../../.gitbook/assets/image (15) (1) (1).png>)

**2.2 Adding Context**

If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../../../features/datasets/) feature. Click the "Add" button in the context section to add a knowledge base.

![](<../../.gitbook/assets/image (9) (1) (1).png>)

**2.3 Debugging**

Fill in the user input on the right side and debug the input content.

![](<../../.gitbook/assets/image (11) (1) (1).png>)

If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:

![](<../../.gitbook/assets/image (29).png>)

We support the GPT-4 model.

After debugging the application, click the **"Publish"** button in the upper right corner.

On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application, the "Share" button to get the sharing link, and the "Settings" button to configure the shared application information.

<figure><img src="../../.gitbook/assets/image (47).png" alt=""><figcaption></figcaption></figure>

If you want to customize the shared application, you can fork our open-source [WebApp template](https://github.com/langgenius/webapp-conversation). Based on the template, you can modify the application to meet your specific needs and style requirements.
Dify offers a "Backend-as-a-Service" API, providing numerous benefits to AI application developers.
Choose an application, and find the API Access section in the left-side navigation of the Apps section. On this page, you can view the API documentation provided by Dify and manage credentials for accessing the API.

<figure><img src="../../.gitbook/assets/API Access.png" alt=""><figcaption><p>API document</p></figcaption></figure>

You can create multiple access credentials for an application to deliver to different users or developers. This means that API users can use the AI capabilities provided by the application developer, while the underlying Prompt engineering, knowledge, and tool capabilities are encapsulated.

For example, here is a sample call to the text-generation API (the request bodies shown here are representative; see the in-app API documentation for the full schema):

{% tabs %}
{% tab title="cURL" %}
```
curl --location --request POST 'https://api.dify.ai/v1/completion-messages' \
--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
--header 'Content-Type: application/json' \
--data-raw '{"inputs": {}, "response_mode": "streaming", "user": "abc-123"}'
```
{% endtab %}

{% tab title="Python" %}
```python
import requests

# Representative request body; check the in-app API docs for the full schema.
response = requests.post(
    "https://api.dify.ai/v1/completion-messages",
    headers={"Authorization": "Bearer ENTER-YOUR-SECRET-KEY"},
    json={"inputs": {}, "response_mode": "streaming", "user": "abc-123"},
)
```
{% endtab %}
{% endtabs %}

For example, here is a sample call to the chat-messages API (again with a representative request body):

{% tabs %}
{% tab title="cURL" %}
```
curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \
--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
--header 'Content-Type: application/json' \
--data-raw '{"inputs": {}, "query": "Hello", "response_mode": "streaming", "conversation_id": "", "user": "abc-123"}'
```
{% endtab %}

{% tab title="Python" %}
```python
import requests

# Representative request body; check the in-app API docs for the full schema.
response = requests.post(
    "https://api.dify.ai/v1/chat-messages",
    headers={"Authorization": "Bearer ENTER-YOUR-SECRET-KEY"},
    json={
        "inputs": {},
        "query": "Hello",
        "response_mode": "streaming",
        "conversation_id": "",
        "user": "abc-123",
    },
)
```
{% endtab %}
{% endtabs %}
# Launch Your Webapp Quickly

One of the benefits of creating AI applications with Dify is that you can launch a user-friendly web application in just a few minutes, based on your Prompt orchestration.

In the application overview page, you can find a card for the AI site (WebApp). Simply enable WebApp access to get a shareable link for your users.

<figure><img src="../../../.gitbook/assets/share your App.png" alt=""><figcaption><p>Share your WebApp</p></figcaption></figure>

We provide a sleek WebApp interface for both of the following applications:

Dify supports embedding your AI application into your business website. Copy the script code and paste it into the `<head>` or `<body>` tags on your website.

<figure><img src="../../../.gitbook/assets/image (46).png" alt=""><figcaption></figcaption></figure>

For example, if you paste the script code into a page of your official website, you will get an AI chatbot on that page:

<figure><img src="../../../.gitbook/assets/image (42).png" alt=""><figcaption></figcaption></figure>
# Conversation Application

Conversational applications use a question-and-answer model to maintain a dialogue with the user. Conversational applications support the following capabilities (please confirm that the following functions are enabled when the application is orchestrated):

If the application requires variables to be filled in, you need to enter the required information according to the prompts before entering the dialogue window:

<figure><img src="../../../.gitbook/assets/image (45).png" alt=""><figcaption></figcaption></figure>

Fill in the necessary content and click the "Start Chat" button to start chatting.

<figure><img src="../../../.gitbook/assets/image (8) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

Hover over the AI's answer to copy the conversation content, or give the answer a "like" or "dislike".

<figure><img src="../../../.gitbook/assets/image (30).png" alt=""><figcaption></figcaption></figure>

### Conversation creation, pinning and deletion

Click the "New Conversation" button to start a new conversation. Hover over a conversation to "pin" or "delete" it.

<figure><img src="../../../.gitbook/assets/image (43).png" alt=""><figcaption></figcaption></figure>

### Conversation remarks

If the "Conversation remarks" function is enabled when the application is orchestrated, the AI application will automatically initiate the first sentence of the dialogue when a new conversation is created:

<figure><img src="../../../.gitbook/assets/image (48).png" alt=""><figcaption></figcaption></figure>

### Follow-up

If the "Follow-up" function is enabled during application orchestration, the system will automatically generate 3 related question suggestions after a reply:

<figure><img src="../../../.gitbook/assets/image (16).png" alt=""><figcaption></figcaption></figure>

### Speech to text

If the "Speech to Text" function is enabled during application orchestration, you can enter your query by voice in the dialogue box.

_Please make sure that the device environment you are using is authorized to use the microphone._

<figure><img src="../../../.gitbook/assets/image (39).png" alt=""><figcaption></figcaption></figure>

### Citations and Attributions

If the "Citations and Attributions" feature is enabled during application orchestration, the replies will automatically show the sources of the quoted knowledge documents.

<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
# Text Generator Application

The text generation application automatically generates high-quality text according to the prompts provided by the user. It can generate various types of text, such as article summaries, translations, etc.

Let's introduce them separately.

Enter the query content and click the run button, and the result will be generated on the right, as shown in the following figure:

<figure><img src="../../../.gitbook/assets/image (57).png" alt=""><figcaption></figcaption></figure>

In the generated results section, click the "Copy" button to copy the content to the clipboard, or the "Save" button to save the content; you can see saved content in the "Saved" tab. You can also "like" and "dislike" the generated content.

Click the "Run Batch" tab to enter the batch run page.

<figure><img src="../../../.gitbook/assets/image (27).png" alt=""><figcaption></figcaption></figure>

#### Step 2 Download the template and fill in the content

Click the "Download Template" button to download the template. Edit the template, fill in the content, and save it as a `.csv` file.
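The CSV can also be produced programmatically. A minimal sketch, assuming for illustration a single hypothetical prompt variable `name` as the template column:

```python
# Illustrative generation of a batch-run CSV; the "name" column stands in
# for whatever prompt variables your template actually defines.
import csv

rows = [{"name": "Alice"}, {"name": "Bob"}, {"name": "Carol"}]

with open("batch_run.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name"])
    writer.writeheader()
    writer.writerows(rows)
```

Match the column headers to the variables defined in your application before uploading.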
<figure><img src="../../../.gitbook/assets/image (13) (1) (1).png" alt=""><figcaption></figcaption></figure>

#### Step 3 Upload the file and run

<figure><img src="../../../.gitbook/assets/image (55).png" alt=""><figcaption></figcaption></figure>

If you need to export the generated content, click the download button in the upper right corner to export it as a `.csv` file.

Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.

<figure><img src="../../../.gitbook/assets/image (6) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

### Generate more similar results

If the "More similar" function is enabled during application orchestration, clicking the "More similar" button in the web application generates content similar to the current result, as shown below:

<figure><img src="../../../.gitbook/assets/image (22).png" alt=""><figcaption></figcaption></figure>
4. Without a match, the query follows the standard LLM or RAG process.
5. Deactivating Annotation Reply ceases matching replies from the annotations.

<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt="" width="563"><figcaption><p>Annotation Reply Process</p></figcaption></figure>
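Conceptually, the matching step compares the embedded query against stored annotation embeddings and returns a saved reply only when similarity clears the score threshold. A simplified sketch with a toy `embed()` stand-in for the real embedding model:

```python
# Simplified sketch of annotation matching; embed() is a toy stand-in
# for a real embedding model, and the threshold is illustrative.
import math

def embed(text):
    """Toy embedding: character-frequency vector over a-z."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def annotation_reply(query, annotations, threshold=0.9):
    """Return the saved reply whose question best matches, or None."""
    q = embed(query)
    best = max(annotations, key=lambda a: cosine(q, embed(a["question"])), default=None)
    if best and cosine(q, embed(best["question"])) >= threshold:
        return best["reply"]
    return None  # fall through to the standard LLM / RAG process
```

Lowering the threshold makes annotation replies fire more often at the cost of looser matches; the real system performs this comparison with the Embedding model you configure.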
## Activation

Navigate to “Build Apps -> Add Feature” to enable the Annotation Reply feature.

<figure><img src="../../.gitbook/assets/screenshot-20231218-172146 (1).png" alt=""><figcaption></figcaption></figure>

Start by setting the parameters for Annotation Reply. These include the Score threshold and the Embedding model.

Select 'Save' for immediate application of these settings. The system then creates and stores embeddings for all existing annotations.

<figure><img src="../../.gitbook/assets/screenshot-20231218-172302.png" alt=""><figcaption></figcaption></figure>

## Adding Annotations in Debug Mode

Annotations can be added or modified directly on the model's replies within the debug and preview interface.

<figure><img src="../../.gitbook/assets/screenshot-20231218-175934.png" alt=""><figcaption></figcaption></figure>

Edit and save these replies to ensure high quality.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180013.png" alt=""><figcaption></figcaption></figure>

When a user repeats a query, the system uses the relevant saved annotation for a direct reply.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180135.png" alt=""><figcaption></figcaption></figure>

## Enabling Annotations in System Logs

Turn on the Annotation Reply feature under “Build Apps -> Logs and Annotations -> Annotations.”

<figure><img src="../../.gitbook/assets/screenshot-20231218-180233.png" alt=""><figcaption></figcaption></figure>

## Adjusting Backend Parameters for Annotations

**Parameter Settings:** These include the Score threshold and Embedding model, just as in the initial configuration.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180337.png" alt=""><figcaption></figcaption></figure>

## Bulk Importing Annotated Q\&As

**Import Process:** Use the provided template to format Q\&A pairs for annotations, then upload them in bulk.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180508.png" alt=""><figcaption></figcaption></figure>

## Bulk Exporting Annotated Q\&As

**Export Function:** This feature allows for a one-time export of all annotated Q\&A pairs stored in the system.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180611.png" alt=""><figcaption></figcaption></figure>

## Reviewing Annotation Hit History

View the history of each annotation's use, including edits, queries, replies, sources, similarity scores, and timestamps. This information is valuable for ongoing improvements to your annotations.

<figure><img src="../../.gitbook/assets/screenshot-20231218-180737.png" alt=""><figcaption></figcaption></figure>
In the process of creating AI applications, developers face constantly changing business needs and complex technical challenges. Effectively leveraging extension capabilities can not only enhance the flexibility and functionality of applications but also ensure the security and compliance of enterprise data. Dify offers the following two methods of extension:

[api-based-extension](api-based-extension/ "mention")

[code-based-extension.md](code-based-extension.md "mention")
# API-Based Extension

Developers can extend module capabilities through the API extension module. Currently supported module extensions include:

Now, this API endpoint is publicly accessible. You can configure this endpoint in Dify.

We recommend using Cloudflare Workers to deploy your API extension, because Cloudflare Workers can easily provide a public address and can be used for free.

[cloudflare-workers.md](cloudflare-workers.md "mention")
# Deploy API Tools with Cloudflare Workers

## Getting Started

Run `npm run deploy` to deploy the Worker. After successful deployment, you will get a public internet address, which you can add in Dify as an API Endpoint. Please note not to miss the `endpoint` path.

<figure><img src="../../../.gitbook/assets/api_extension_edit.png" alt=""><figcaption><p>Adding API Endpoint in Dify</p></figcaption></figure>

<figure><img src="../../../.gitbook/assets/app_tools_edit.png" alt=""><figcaption><p>Adding API Tool in the App edit page</p></figcaption></figure>

## Other Logic TL;DR
Each document uploaded to the knowledge base is stored in the form of text segments (Chunks). You can view the specific text content of each segment in the segment list. Each document uploaded to the knowledge base is stored in the form of text segments (Chunks). You can view the specific text content of each segment in the segment list.
<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>Viewing uploaded document segments</p></figcaption></figure> <figure><img src="../../.gitbook/assets/image%20(3)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption><p>Viewing uploaded document segments</p></figcaption></figure>
*** ***
### 2. Checking Segment Quality ### 2. Checking Segment Quality
The quality of document segmentation significantly affects the Q\&A performance of the knowledge base application. It is recommended to manually check the segment quality before associating the knowledge base with the application.
Although automated segmentation methods based on character length, identifiers, or NLP semantic segmentation can significantly reduce the workload of large-scale text segmentation, the quality of segmentation is related to the text structure of different document formats and the semantic context. Manual checking and correction can effectively compensate for the shortcomings of machine segmentation in semantic recognition.
@@ -18,15 +18,15 @@ When checking segment quality, pay attention to the following situations:
* **Overly short text segments**, leading to semantic loss;
<figure><img src="../../.gitbook/assets/image%20(183).png" alt="" width="373"><figcaption><p>Overly short text segments</p></figcaption></figure>
* **Overly long text segments**, leading to semantic noise affecting matching accuracy;
<figure><img src="../../.gitbook/assets/image%20(186).png" alt="" width="375"><figcaption><p>Overly long text segments</p></figcaption></figure>
* **Obvious semantic truncation**, which occurs when using maximum segment length limits, leading to forced semantic truncation and missing content during recall;
<figure><img src="../../.gitbook/assets/image%20(185).png" alt="" width="357"><figcaption><p>Obvious semantic truncation</p></figcaption></figure>
***
@@ -34,11 +34,11 @@ When checking segment quality, pay attention to the following situations:
In the segment list, click "Add Segment" to add one or multiple custom segments to the document.
<figure><img src="../../.gitbook/assets/image%20(2)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption></figcaption></figure>
When adding segments in bulk, first download the CSV-format segment upload template, edit all segment content according to the template format (for example, in Excel), save it as a CSV file, and then upload it.
<figure><img src="../../.gitbook/assets/image%20(4)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption><p>Bulk adding custom segments</p></figcaption></figure>
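The bulk-upload flow above can also be prepared programmatically. A minimal sketch follows; the column names are placeholders for illustration — use the exact headers from the template downloaded from Dify:

```python
import csv
import io

# Column names are an assumption for illustration -- replace them with
# the headers from the CSV template downloaded from Dify.
FIELDS = ["content", "keywords"]

segments = [
    {"content": "Dify is an LLM application development platform.",
     "keywords": "Dify,LLM,platform"},
    {"content": "A knowledge base is a collection of documents.",
     "keywords": "knowledge base,documents"},
]

# Build the CSV in memory, then write it to disk for upload in the Dify UI.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(segments)
csv_text = buf.getvalue()

with open("segments.csv", "w", newline="", encoding="utf-8") as f:
    f.write(csv_text)
```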
***
@@ -46,7 +46,7 @@ When adding segments in bulk, you need to first download the CSV format segment
In the segment list, you can directly edit the content of the added segments, including the text content and keywords of the segments.
<figure><img src="../../.gitbook/assets/image (5) (1) (1).png" alt=""><figcaption><p>Editing document segments</p></figcaption></figure>
***
@@ -58,7 +58,7 @@ In addition to marking metadata information from different source documents, suc
The metadata filtering and citation source functions are not yet supported in the current version.
{% endhint %}
<figure><img src="../../.gitbook/assets/image%20(179).png" alt=""><figcaption><p>Metadata management</p></figcaption></figure>
***
@@ -68,7 +68,7 @@ In "Knowledge Base > Document List," click "Add File" to upload new documents or
A knowledge base (Knowledge) is a collection of documents (Documents). Documents can be uploaded by developers or operators, or synchronized from other data sources (usually corresponding to a file unit in the data source).
<figure><img src="../../.gitbook/assets/image%20(181).png" alt=""><figcaption><p>Uploading new documents to the knowledge base</p></figcaption></figure>
***
@@ -84,7 +84,7 @@ A knowledge base (Knowledge) is a collection of documents (Documents). Documents
Click **Settings** in the left navigation of the knowledge base to change the following settings:
<figure><img src="../../.gitbook/assets/image%20(182).png" alt=""><figcaption><p>Knowledge base settings</p></figcaption></figure>
**Knowledge Base Name**: Define a name to identify a knowledge base.
@@ -108,4 +108,4 @@ When the recall mode of the knowledge base is N-Choose-1, the knowledge base is
Dify Knowledge Base provides a complete set of standard APIs. Developers can use API calls to perform daily management and maintenance operations such as adding, deleting, modifying, and querying documents and segments in the knowledge base. Please refer to the [Knowledge Base API Documentation](maintain-dataset-via-api.md).
<figure><img src="../../.gitbook/assets/image%20(180).png" alt=""><figcaption><p>Knowledge base API management</p></figcaption></figure>
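As a hedged sketch of calling such an API, the snippet below builds (but does not send) a document-list request. The endpoint path, header format, dataset ID, and key are assumptions for illustration — confirm them against the Knowledge Base API documentation:

```python
from urllib import request

API_BASE = "https://api.dify.ai/v1"   # or your self-hosted API address
API_KEY = "dataset-xxxxxxxx"          # placeholder knowledge base API key
DATASET_ID = "your-dataset-id"        # placeholder dataset (knowledge base) ID

def build_list_documents_request(dataset_id: str,
                                 page: int = 1,
                                 limit: int = 20) -> request.Request:
    """Build (but do not send) a GET request for the document-list endpoint."""
    url = f"{API_BASE}/datasets/{dataset_id}/documents?page={page}&limit={limit}"
    return request.Request(url, headers={"Authorization": f"Bearer {API_KEY}"})

req = build_list_documents_request(DATASET_ID)
# Sending it with request.urlopen(req) would return the JSON document list.
```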


@@ -1,14 +1,14 @@
# Retrieval Test/Citation
### 1. Recall Testing
The Dify Knowledge Base provides a text recall testing feature to debug the recall effects under different retrieval methods and parameter configurations. You can enter common user questions in the **Source Text** input box, click **Test**, and view the recall results in the **Recalled Paragraph** section on the right. The **Recent Queries** section allows you to view the history of query records; if the knowledge base is linked to an application, queries triggered from within the application can also be viewed here.
<figure><img src="../../.gitbook/assets/image%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption><p>Recall Testing</p></figcaption></figure>
Clicking the icon in the upper right corner of the source text input box allows you to change the retrieval method and specific parameters of the current knowledge base. **Changes will only take effect during the recall testing process.** After completing the recall test and confirming changes to the retrieval parameters of the knowledge base, you need to make changes in [Knowledge Base Settings > Retrieval Settings](retrieval\_test\_and\_citation.md#zhi-shi-ku-she-zhi).
<figure><img src="../../.gitbook/assets/image%20(2)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption><p>Recall Testing - Retrieval Settings</p></figcaption></figure>
**Suggested Steps for Recall Testing:**
@@ -27,8 +27,8 @@ Clicking the icon in the upper right corner of the source text input box allows
When testing the knowledge base effect within the application, you can go to **Workspace -- Add Function -- Citation Attribution** to enable the citation attribution feature.
<figure><img src="../../.gitbook/assets/image%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1)%20(1).png" alt=""><figcaption><p>Enable Citation and Attribution Feature</p></figcaption></figure>
After enabling the feature, when the large language model responds to a question by citing content from the knowledge base, you can view specific citation paragraph information below the response content, including **original segment text, segment number, matching degree**, etc. Clicking **Jump to Knowledge Base** above the cited segment allows quick access to the segment list in the knowledge base, facilitating developers in debugging and editing.
<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption><p>View Citation Information in Response Content</p></figcaption></figure>


@@ -1 +1,42 @@
---
description: >-
  This document primarily introduces how to scrape data from a web page, parse
  it into Markdown, and import it into the Dify knowledge base.
---
# Sync Data from Website
Through its integration with Firecrawl, Dify's knowledge base supports scraping web pages and parsing them into Markdown for import.
{% hint style="info" %}
[Firecrawl](https://www.firecrawl.dev/) is an open-source web parsing tool that converts web pages into clean Markdown text that LLMs can easily process. It also provides an easy-to-use API service.
{% endhint %}
### How to Configure
#### 1. Configure Firecrawl API credentials
First, you need to configure Firecrawl credentials in the **Data Source** section of the **Settings** page.
<figure><img src="../../.gitbook/assets/image.png" alt=""><figcaption><p>Configuring Firecrawl Credentials</p></figcaption></figure>
Log in to the [Firecrawl website](https://www.firecrawl.dev/) to complete registration, get your API Key, and then enter and save it in Dify.
<figure><img src="../../.gitbook/assets/image (2).png" alt=""><figcaption><p>Get the API Key and save it in Dify</p></figcaption></figure>
#### 2. Scrape target webpage
On the knowledge base creation page, select **Sync from website** and enter the URL to be scraped.
<figure><img src="../../.gitbook/assets/image (3).png" alt=""><figcaption><p>Web scraping configuration</p></figcaption></figure>
The configuration options include: whether to crawl sub-pages, the page crawling limit, the maximum scraping depth, excluded paths, include-only paths, and the content extraction scope. After completing the configuration, click **Run** to preview the parsed pages.
<figure><img src="../../.gitbook/assets/image (4).png" alt=""><figcaption><p>Execute scraping</p></figcaption></figure>
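The same options can be expressed as a Firecrawl API payload. This is a minimal sketch: the endpoint, field names, and paths are assumptions based on Firecrawl's crawl API and may differ between API versions — consult the Firecrawl documentation before use.

```python
import json
from urllib import request

FIRECRAWL_ENDPOINT = "https://api.firecrawl.dev/v1/crawl"  # assumed endpoint
API_KEY = "fc-xxxxxxxx"                                    # placeholder API key

# Field names mirror the options in the Dify UI; they are assumptions and
# may not match your Firecrawl API version exactly.
payload = {
    "url": "https://docs.dify.ai",                # page to start crawling from
    "limit": 10,                                  # page crawling limit
    "maxDepth": 2,                                # page scraping max depth
    "excludePaths": ["blog/*"],                   # excluded paths
    "includePaths": ["guides/*"],                 # include only paths
    "scrapeOptions": {"formats": ["markdown"]},   # content extraction scope
}

req = request.Request(
    FIRECRAWL_ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
    method="POST",
)
# Sending it with request.urlopen(req) would start the crawl job.
```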
#### 3. Review import results
After the parsed webpage text is imported, it is stored as documents in the knowledge base. Review the import results, and click **Add URL** to continue importing new web pages.
<figure><img src="../../.gitbook/assets/image (7).png" alt=""><figcaption><p>Importing parsed web text into the knowledge base</p></figcaption></figure>


@@ -6,7 +6,7 @@ In enterprise-level large-scale model API calls, high concurrent requests can ex
You can enable this feature in **Model Provider -- Model List -- Configure Model Load Balancing** and add multiple credentials (API keys) for the same model.
<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1).png" alt="" width="563"><figcaption><p>Model Load Balancing</p></figcaption></figure>
{% hint style="info" %}
Model load balancing is a paid feature. You can enable this feature by [subscribing to SaaS paid services](../../getting-started/cloud.md#subscription-plans) or purchasing the enterprise edition.
@@ -14,17 +14,17 @@ Model load balancing is a paid feature. You can enable this feature by [subscrib
The default API Key in the configuration is the credential added when the model provider was initially configured. You need to click **Add Configuration** to add different API keys for the same model to properly use the load balancing feature.
<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1).png" alt="" width="563"><figcaption><p>Configure Load Balancing</p></figcaption></figure>
**You need to add at least one additional model credential** to save and enable load balancing.
You can also **temporarily disable** or **delete** configured credentials.
<figure><img src="../../.gitbook/assets/image (7) (1).png" alt="" width="563"><figcaption></figcaption></figure>
After configuration, all models with load balancing enabled will be displayed in the model list.
<figure><img src="../../.gitbook/assets/image (6) (1).png" alt="" width="563"><figcaption><p>Enable Load Balancing</p></figcaption></figure>
{% hint style="info" %}
By default, load balancing uses the Round-robin strategy. If a rate limit is triggered, a 1-minute cooldown period will be applied.
@@ -32,4 +32,4 @@ By default, load balancing uses the Round-robin strategy. If a rate limit is tri
You can also configure load balancing from **Add Model**, and the configuration process is the same as described above.
<figure><img src="../../.gitbook/assets/image (4) (1).png" alt="" width="563"><figcaption><p>Configure Load Balancing from Add Model</p></figcaption></figure>
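The round-robin-with-cooldown behavior described above can be sketched as follows. This is a simplified illustration of the strategy, not Dify's actual implementation:

```python
import itertools
import time

class RoundRobinPool:
    """Rotate over API keys; a key that hits a rate limit is cooled
    down for `cooldown` seconds (Dify applies a 1-minute cooldown)."""

    def __init__(self, keys, cooldown=60.0):
        self.cycle = itertools.cycle(keys)
        self.cooldown = cooldown
        self.blocked = {}          # key -> time when usable again
        self.size = len(keys)

    def next_key(self, now=None):
        now = time.monotonic() if now is None else now
        for _ in range(self.size):
            key = next(self.cycle)
            if self.blocked.get(key, 0.0) <= now:
                return key
        raise RuntimeError("all credentials are cooling down")

    def mark_rate_limited(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.blocked[key] = now + self.cooldown

pool = RoundRobinPool(["key-A", "key-B"], cooldown=60)
k1 = pool.next_key(now=0.0)        # first credential in rotation
pool.mark_rate_limited(k1, now=0.0)
k2 = pool.next_key(now=0.0)        # falls through to the next credential
k3 = pool.next_key(now=61.0)       # first credential usable again after 60 s
```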


@@ -1,4 +1,4 @@
# Integrate Local Models Deployed by Xinference
[Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models, and it can even be used on laptops. It supports various GGML-compatible models, such as chatglm, baichuan, whisper, vicuna, and orca. Dify supports connecting to a locally deployed Xinference instance for large language model inference and embedding capabilities.
@@ -33,7 +33,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
Visit `http://127.0.0.1:9997`, select the model and specification you need to deploy, as shown below:
<figure><img src="../../.gitbook/assets/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
As different models have different compatibility on different hardware platforms, please refer to [Xinference built-in models](https://inference.readthedocs.io/en/latest/models/builtin/index.html) to ensure the created model supports the current hardware platform.
4. Obtain the model UID


@@ -2,7 +2,7 @@
The **Overview -- Data Analysis** section displays metrics such as usage, active users, and LLM (large language model) invocation costs. This allows you to continuously improve the effectiveness, engagement, and cost-efficiency of your application operations. We will gradually provide more useful visualization capabilities, so please let us know what you need.
<figure><img src="../../.gitbook/assets/image (6) (1) (1).png" alt=""><figcaption><p>Overview—Data Analysis</p></figcaption></figure>
***
@@ -16,7 +16,7 @@ The number of unique users who have had effective interactions with the AI, defi
**Average Session Interactions**
Reflects the number of continuous interactions per session user. For example, if a user has a 10-round Q\&A with the AI, it is counted as 10. This metric reflects user engagement. It is available only for conversational applications.
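As a hedged illustration of how such a metric could be computed from conversation logs (not Dify's actual implementation):

```python
from collections import Counter

# Hypothetical conversation log: (session_id, question) pairs.
messages = [
    ("session-1", "What is Dify?"),
    ("session-1", "How do I create a knowledge base?"),
    ("session-2", "What models are supported?"),
]

# Count Q&A rounds per session, then average across sessions.
rounds_per_session = Counter(session_id for session_id, _ in messages)
avg_interactions = sum(rounds_per_session.values()) / len(rounds_per_session)
# 3 rounds across 2 sessions -> 1.5 interactions per session
```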
**Token Output Speed**
@@ -28,4 +28,4 @@ The number of likes per 1,000 messages, reflecting the proportion of users who a
**Token Usage**
Reflects the daily token expenditure for language model requests by the application, useful for cost control.


@@ -1,4 +1,4 @@
# Integrate Langfuse
### 1. What is Langfuse
@@ -55,4 +55,4 @@ After configuration, debugging or production data of the application in Dify can
<figure><img src="../../../.gitbook/assets/image (258).png" alt=""><figcaption><p>Viewing application data in Langfuse</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (8).png" alt=""><figcaption><p>Viewing application data in Langfuse</p></figcaption></figure>


@@ -1,4 +1,4 @@
# Integrate LangSmith
### 1. What is LangSmith
@@ -15,23 +15,23 @@ Introduction to LangSmith: [https://www.langchain.com/langsmith](https://www.lan
1. Register and log in to LangSmith on the [official website](https://www.langchain.com/langsmith)
2. Create a project in LangSmith. After logging in, click **New Project** on the homepage to create your own project. The **project** will be used to associate with **applications** in Dify for data monitoring.
<figure><img src="../../../.gitbook/assets/image (3) (1).png" alt=""><figcaption><p>Create a project in LangSmith</p></figcaption></figure>
Once created, you can view all created projects in the Projects section.
<figure><img src="../../../.gitbook/assets/image (7) (1).png" alt=""><figcaption><p>View created projects in LangSmith</p></figcaption></figure>
3. Create project credentials. Open the project **Settings** in the left sidebar.
<figure><img src="../../../.gitbook/assets/image (8) (1).png" alt=""><figcaption><p>Project settings</p></figcaption></figure>
Click **Create API Key** to create project credentials.
<figure><img src="../../../.gitbook/assets/image (3) (1) (1).png" alt=""><figcaption><p>Create a project API Key</p></figcaption></figure>
Select **Personal Access Token** for subsequent API authentication.
<figure><img src="../../../.gitbook/assets/image (5) (1).png" alt=""><figcaption><p>Create an API Key</p></figcaption></figure>
Copy and save the created API key.
@@ -59,6 +59,6 @@ After configuration, debugging or production data of the application in Dify can
<figure><img src="../../../.gitbook/assets/image (17).png" alt=""><figcaption><p>Debugging applications in Dify</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption><p>Viewing application data in LangSmith</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (18).png" alt=""><figcaption><p>Viewing application data in LangSmith</p></figcaption></figure>


@@ -243,4 +243,4 @@ After the above steps are completed, we can see this tool on the frontend, and i
Because google\_search requires a credential, you also need to enter your credentials on the frontend before using it.
<figure><img src="../../.gitbook/assets/Feb 4, 2024.png" alt=""><figcaption></figcaption></figure>


@@ -6,8 +6,7 @@ description: Learn about the Different Tools Supported by Dify.
Dify supports various tools to enhance your application's capabilities. Each tool has unique features and parameters, so select a tool that suits your application's needs. **Obtain the API key from the tool provider's official website before using it in Dify.**
## Tools Integration Guides
* [StableDiffusion](stable-diffusion.md): A tool for generating images based on text prompts.
* [SearXNG](../../../tutorials/tool-configuration/searxng.md): A free internet metasearch engine which aggregates results from various search services and databases.


@@ -6,4 +6,4 @@ description: Checklist
Before entering debug mode, you can check the checklist to see if there are any nodes with incomplete configurations or that have not been connected.
<figure><img src="../../../.gitbook/assets/image (8) (1).png" alt=""><figcaption></figcaption></figure>


@@ -2,12 +2,12 @@
description: Log
---
# Conversation/Run Logs
Clicking "View Log—Details" allows you to see a comprehensive overview of the run, including information on input/output, metadata, and more, in the details section.
<figure><img src="../../../.gitbook/assets/image (6) (1).png" alt=""><figcaption></figcaption></figure>
Clicking "View Log—Trace" enables you to review the input/output, token consumption, runtime duration, etc., of each node throughout the complete execution process of the workflow.
<figure><img src="../../../.gitbook/assets/image (7) (1).png" alt=""><figcaption></figcaption></figure>


@@ -1,9 +1,9 @@
# Step Run
Workflow supports step-by-step debugging of nodes, allowing you to repeatedly test whether the current node executes as expected.
<figure><img src="../../../.gitbook/assets/image (3) (1).png" alt=""><figcaption></figcaption></figure>
After running a step test, you can review the execution status, input/output, and metadata information.
<figure><img src="../../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>


@@ -1,13 +1,13 @@
# Preview and Run
Dify Workflow offers a comprehensive set of execution and debugging features. In conversational applications, clicking "Preview" enters debugging mode.
<figure><img src="../../../.gitbook/assets/image (1) (1).png" alt=""><figcaption></figcaption></figure>
In workflow applications, clicking "Run" enters debugging mode.
<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
Once in debugging mode, you can debug the configured workflow using the interface on the right side of the screen.
<figure><img src="../../../.gitbook/assets/image (5) (1).png" alt=""><figcaption></figcaption></figure>


@@ -2,7 +2,7 @@
description: Answer
---
# Direct Reply
This node defines the reply content in a Chatflow process. In the text editor, you can flexibly determine the reply format: craft a fixed block of text, use output variables from preceding steps as the reply content, or merge custom text with variables for the response.
@@ -14,10 +14,10 @@ Answer node can be seamlessly integrated at any point to dynamically deliver con
Example 1: Output plain text.
<figure><img src="../../../.gitbook/assets/image (8) (1) (1).png" alt=""><figcaption></figcaption></figure>
Example 2: Output image and LLM reply.
<figure><img src="../../../.gitbook/assets/image (6) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (7) (1) (1).png" alt="" width="275"><figcaption></figcaption></figure>


@@ -6,8 +6,8 @@ The "End" node serves as the termination point of the process, beyond which no f

Single-Path Execution Example:

-<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (2) (1) (1).png" alt=""><figcaption></figcaption></figure>

Multi-Path Execution Example:

-<figure><img src="../../../.gitbook/assets/image (5) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (5) (1) (1).png" alt=""><figcaption></figcaption></figure>


@@ -15,7 +15,7 @@ This node supports common HTTP request methods:

You can configure various aspects of the HTTP request, including URL, request headers, query parameters, request body content, and authentication information.

-<figure><img src="../../../.gitbook/assets/image (2).png" alt="" width="332"><figcaption><p>HTTP Request Configuration</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt="" width="332"><figcaption><p>HTTP Request Configuration</p></figcaption></figure>

***

@@ -23,6 +23,6 @@ You can configure various aspects of the HTTP request, including URL, request he

One practical feature of this node is the ability to dynamically insert variables into different parts of the request based on the scenario. For example, when handling customer feedback requests, you can embed variables such as username or customer ID, feedback content, etc., into the request to customize automated reply messages or fetch specific customer information and send related resources to a designated server.

-<figure><img src="../../../.gitbook/assets/image (1) (1).png" alt=""><figcaption><p>Customer Feedback Classification</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (1) (1) (1).png" alt=""><figcaption><p>Customer Feedback Classification</p></figcaption></figure>

The return values of an HTTP request include the response body, status code, response headers, and files. Notably, if the response contains a file (currently only image types are supported), this node can automatically save the file for use in subsequent steps of the workflow. This design not only improves processing efficiency but also makes handling responses with files straightforward and direct.
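The variable-embedding behaviour described above can be sketched in plain Python. This is an illustrative sketch only: the endpoint URL, field names, and request shape are assumptions, not Dify's internal representation.

```python
import json

def build_feedback_request(username: str, customer_id: str, feedback: str) -> dict:
    """Compose an HTTP request the way the node does: variables are
    substituted into the URL and JSON body before the request is sent.
    The endpoint and field names here are illustrative only."""
    return {
        "method": "POST",
        "url": f"https://example.com/api/customers/{customer_id}/feedback",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"user": username, "feedback": feedback}),
    }

request = build_feedback_request("alice", "C-1001", "Great product!")
print(request["url"])  # -> https://example.com/api/customers/C-1001/feedback
```

In the real node the same substitution is configured visually, with `{{...}}` variable references placed in the URL, header, and body fields.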


@@ -12,7 +12,7 @@ The iteration step performs the same steps on each item in a list. To use iterat

#### **Example 1: Long Article Iteration Generator**

-<figure><img src="../../../.gitbook/assets/image (207).png" alt=""><figcaption><p>Long Story Generator</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(207).png" alt=""><figcaption><p>Long Story Generator</p></figcaption></figure>

1. Enter the story title and outline in the **Start Node**.
2. Use a **Code Node** to extract the complete content from user input.

@@ -24,15 +24,15 @@ The iteration step performs the same steps on each item in a list. To use iterat

1. Configure the story title (title) and outline (outline) in the **Start Node**.

-<figure><img src="../../../.gitbook/assets/image (211).png" alt="" width="375"><figcaption><p>Start Node Configuration</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(211).png" alt="" width="375"><figcaption><p>Start Node Configuration</p></figcaption></figure>

2. Use a **Jinja-2 Template Node** to convert the story title and outline into complete text.

-<figure><img src="../../../.gitbook/assets/image (209).png" alt="" width="375"><figcaption><p>Template Node</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(209).png" alt="" width="375"><figcaption><p>Template Node</p></figcaption></figure>

3. Use a **Parameter Extraction Node** to convert the story text into an array (Array) structure. The parameter to extract is `sections`, and the parameter type is `Array[Object]`.

-<figure><img src="../../../.gitbook/assets/image (210).png" alt="" width="375"><figcaption><p>Parameter Extraction</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(210).png" alt="" width="375"><figcaption><p>Parameter Extraction</p></figcaption></figure>

{% hint style="info" %}
The effectiveness of parameter extraction is influenced by the model's inference capability and the instructions given. Using a model with stronger inference capabilities and adding examples in the **instructions** can improve the parameter extraction results.

@@ -40,11 +40,11 @@ The effectiveness of parameter extraction is influenced by the model's inference

4. Use the array-formatted story outline as the input for the iteration node and process it within the iteration node using an **LLM Node**.

-<figure><img src="../../../.gitbook/assets/image (220).png" alt="" width="375"><figcaption><p>Configure Iteration Node</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(220).png" alt="" width="375"><figcaption><p>Configure Iteration Node</p></figcaption></figure>

Configure the input variables `GenerateOverallOutline/output` and `Iteration/item` in the LLM Node.

-<figure><img src="../../../.gitbook/assets/image (221).png" alt="" width="375"><figcaption><p>Configure LLM Node</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(221).png" alt="" width="375"><figcaption><p>Configure LLM Node</p></figcaption></figure>

{% hint style="info" %}
Built-in variables for iteration: `items[object]` and `index[number]`.

@@ -56,15 +56,15 @@ Built-in variables for iteration: `items[object]` and `index[number]`.

5. Configure a **Direct Reply Node** inside the iteration node to achieve streaming output after each iteration.

-<figure><img src="../../../.gitbook/assets/image (223).png" alt="" width="375"><figcaption><p>Configure Answer Node</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(223).png" alt="" width="375"><figcaption><p>Configure Answer Node</p></figcaption></figure>

6. Complete debugging and preview.

-<figure><img src="../../../.gitbook/assets/image (222).png" alt=""><figcaption><p>Generate by Iterating Through Story Chapters</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(222).png" alt=""><figcaption><p>Generate by Iterating Through Story Chapters</p></figcaption></figure>

#### **Example 2: Long Article Iteration Generator (Another Arrangement)**

-<figure><img src="../../../.gitbook/assets/image (2) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (2) (1) (1).png" alt=""><figcaption></figcaption></figure>

* Enter the story title and outline in the **Start Node**.
* Use an **LLM Node** to generate subheadings and corresponding content for the article.
@@ -126,11 +126,11 @@ A list is a specific data type where elements are separated by commas and enclos

**Return Using the CODE Node**

-<figure><img src="../../../.gitbook/assets/image (213).png" alt="" width="375"><figcaption><p>CODE Node Outputting Array</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(213).png" alt="" width="375"><figcaption><p>CODE Node Outputting Array</p></figcaption></figure>

**Return Using the Parameter Extraction Node**

-<figure><img src="../../../.gitbook/assets/image (214).png" alt="" width="375"><figcaption><p>Parameter Extraction Node Outputting Array</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image%20(214).png" alt="" width="375"><figcaption><p>Parameter Extraction Node Outputting Array</p></figcaption></figure>

### How to Convert an Array to Text

@@ -138,7 +138,7 @@ The output variable of the iteration node is in array format and cannot be direc

**Convert Using a Code Node**

-<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1).png" alt="" width="334"><figcaption><p>Code Node Conversion</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1) (1).png" alt="" width="334"><figcaption><p>Code Node Conversion</p></figcaption></figure>

```python
def main(articleSections: list):

@@ -150,8 +150,8 @@ def main(articleSections: list):

**Convert Using a Template Node**

-<figure><img src="../../../.gitbook/assets/image (3) (1).png" alt="" width="332"><figcaption><p>Template Node Conversion</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (3) (1) (1).png" alt="" width="332"><figcaption><p>Template Node Conversion</p></figcaption></figure>

```django
{{ articleSections | join("\n") }}
```
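The code-node snippet shown in this diff is truncated after the `def main(...)` line. A minimal complete version consistent with the template-node `join` variant might look like this; the `result` output key is an assumption, not confirmed by the source.

```python
def main(articleSections: list) -> dict:
    # Join the array items into one newline-separated string, mirroring
    # the `join("\n")` filter used in the template-node conversion above.
    return {"result": "\n".join(str(section) for section in articleSections)}

print(main(["Chapter 1", "Chapter 2"])["result"])
```

Either route (code node or template node) yields a single string that downstream text-only nodes can consume directly.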


@@ -16,19 +16,19 @@ Some nodes within the workflow require specific data formats as inputs, such as

In this example: The Arxiv paper retrieval tool requires **paper author** or **paper ID** as input parameters. The parameter extractor extracts the paper ID **2405.10739** from the query "What is the content of this paper: 2405.10739" and uses it as the tool parameter for precise querying.

-<figure><img src="../../../.gitbook/assets/image (8).png" alt=""><figcaption><p>Arxiv Paper Retrieval Tool</p></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (8) (1).png" alt=""><figcaption><p>Arxiv Paper Retrieval Tool</p></figcaption></figure>

2. **Converting text to structured data**, such as in the long story iteration generation application, where it serves as a pre-step for the [iteration node](iteration.md), converting chapter content in text format to an array format, facilitating multi-round generation processing by the iteration node.

-<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

-3. **Extracting structured data and using the [HTTP Request](http\_request.md)**, which can request any accessible URL, suitable for obtaining external retrieval results, webhooks, generating images, and other scenarios.
+3. **Extracting structured data and using the** [**HTTP Request**](http\_request.md), which can request any accessible URL, suitable for obtaining external retrieval results, webhooks, generating images, and other scenarios.

***

### 3 How to Configure

-<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1).png" alt="" width="375"><figcaption></figcaption></figure>
+<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1).png" alt="" width="375"><figcaption></figcaption></figure>

**Configuration Steps**

@@ -57,4 +57,4 @@ When memory is enabled, each input to the question classifier will include the c

`__is_success Number` Extraction success status, with a value of 1 for success and 0 for failure.

`__reason String` Extraction error reason
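Downstream nodes can branch on these two status outputs. A minimal sketch, assuming a dict-shaped result and a `sections` parameter as in the iteration example; only `__is_success` (1 or 0) and `__reason` come from the documentation above.

```python
def handle_extraction(result: dict) -> str:
    """Branch on the parameter extractor's status outputs:
    `__is_success` is 1 on success and 0 on failure, and `__reason`
    carries the error reason. The surrounding shape is illustrative."""
    if result.get("__is_success") == 1:
        return f"extracted: {result.get('sections')}"
    return f"extraction failed: {result.get('__reason')}"

print(handle_extraction({"__is_success": 0, "__reason": "model returned no JSON"}))
```

In a workflow this branching would typically be done with an IF/ELSE node on `__is_success` rather than in code.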


@@ -1,4 +1,4 @@
-# Notion AI Assistant Based on Your Own Notes
+# Build a Notion AI Assistant

### Intro[](https://wsyfin.com/notion-dify#intro) <a href="#intro" id="intro"></a>

@@ -90,11 +90,11 @@ For example, if your Notion notes focus on problem-solving in software developme

_I want you to act as an IT Expert in my Notion workspace, using your knowledge of computer science, network infrastructure, Notion notes, and IT security to solve the problems_.

-<figure><img src="../../../.gitbook/assets/image (40).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (40).png" alt=""><figcaption></figcaption></figure>

It's recommended to initially enable the AI to actively furnish the users with a starter sentence, providing a clue as to what they can ask. Furthermore, activating the 'Speech to Text' feature can allow users to interact with your AI assistant using their voice.

-<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (3) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

Finally, Click the "Publish" button on the top right of the page. Now you can click the public URL in the "Overview" section to converse with your personalized AI assistant!


@@ -1,4 +1,4 @@
-# Midjourney Prompt Bot
+# Create a MidJourney Prompt Bot with Dify

via [@op7418](https://twitter.com/op7418) on Twitter

@@ -10,48 +10,48 @@ Dify offers two types of applications: conversational applications similar to Ch

You can access Dify here: https://dify.ai/

-<figure><img src="../../../.gitbook/assets/create-app.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/create-app.png" alt=""><figcaption></figcaption></figure>

Once you've created your application, the dashboard page will display some data monitoring and application settings. Click on "Prompt Engineering" on the left, which is the main working page.

-<figure><img src="../../../.gitbook/assets/screenshot-20230802-114025.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/screenshot-20230802-114025.png" alt=""><figcaption></figcaption></figure>

On this page, the left side is for prompt settings and other functions, while the right side provides real-time previews and usage of your created content. The prefix prompts are the triggers that the user inputs after each content, and they instruct the GPT model how to process the user's input information.

-<figure><img src="../../../.gitbook/assets/WechatIMG38.jpg" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/WechatIMG38.jpg" alt=""><figcaption></figcaption></figure>

Take a look at my prefix prompt structure: the first part instructs GPT to output a description of a photo in the following structure. The second structure serves as the template for generating the prompt, mainly consisting of elements like 'Color photo of the theme,' 'Intricate patterns,' 'Stark contrasts,' 'Environmental description,' 'Camera model,' 'Lens focal length description related to the input content,' 'Composition description relative to the input content,' and 'The names of four master photographers.' This constitutes the main content of the prompt. In theory, you can now save this to the preview area on the right, input the theme you want to generate, and the corresponding prompt will be generated.

-<figure><img src="../../../.gitbook/assets/pre-prompt.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/pre-prompt.png" alt=""><figcaption></figcaption></figure>

You may have noticed the "\{{proportion\}}" and "\{{version\}}" at the end. These are variables used to pass user-selected information. On the right side, users are required to choose image proportions and model versions, and these two variables help carry that information to the end of the prompt. Let's see how to set them up.

-<figure><img src="../../../.gitbook/assets/screenshot-20230802-145326.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/screenshot-20230802-145326.png" alt=""><figcaption></figcaption></figure>

Our goal is to fill in the user's selected information at the end of the prompt, making it easy for users to copy without having to rewrite or memorize these commands. For this, we use the variable function.

Variables allow us to dynamically incorporate the user's form-filled or selected content into the prompt. For example, I've created two variables: one represents the image proportion, and the other represents the model version. Click the "Add" button to create the variables.

-<figure><img src="../../../.gitbook/assets/WechatIMG157.jpg" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/WechatIMG157.jpg" alt=""><figcaption></figcaption></figure>

After creation, you'll need to fill in the variable key and field name. The variable key should be in English. The optional setting means the field will be non-mandatory when the user fills it. Next, click "Settings" in the action bar to set the variable content.

-<figure><img src="../../../.gitbook/assets/WechatIMG158.jpg" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/WechatIMG158.jpg" alt=""><figcaption></figcaption></figure>

Variables can be of two types: text variables, where users manually input content, and select options where users select from given choices. Since we want to avoid manual commands, we'll choose the dropdown option and add the required choices.

-<figure><img src="../../../.gitbook/assets/app-variables.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/app-variables.png" alt=""><figcaption></figcaption></figure>

Now, let's use the variables. We need to enclose the variable key within double curly brackets {} and add it to the prefix prompt. Since we want the GPT to output the user-selected content as is, we'll include the phrase "Producing the following English photo description based on user input" in the prompt.

-<figure><img src="../../../.gitbook/assets/WechatIMG160.jpg" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/WechatIMG160.jpg" alt=""><figcaption></figcaption></figure>

However, there's still a chance that GPT might modify our variable content. To address this, we can lower the diversity in the model selection on the right, reducing the temperature and making it less likely to alter our variable content. You can check the tooltips for other parameters' meanings.

-<figure><img src="../../../.gitbook/assets/screenshot-20230802-141913.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/screenshot-20230802-141913.png" alt=""><figcaption></figcaption></figure>
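The variable substitution described in this tutorial can be sketched in Python. The `{{proportion}}` and `{{version}}` keys come from the tutorial; the prompt text and the extra `theme` variable are illustrative.

```python
def fill_prompt(template: str, variables: dict) -> str:
    """Substitute {{key}} placeholders the way Dify injects user-selected
    variable values into the prefix prompt."""
    for key, value in variables.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = fill_prompt(
    "Color photo of {{theme}}, intricate patterns, stark contrasts --ar {{proportion}} --v {{version}}",
    {"theme": "a lighthouse at dusk", "proportion": "16:9", "version": "5.2"},
)
print(prompt)
```

In Dify itself this substitution happens automatically when the user fills the form on the right-hand side.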
With these steps, your application is now complete. After testing and ensuring there are no issues with the output, click the "Publish" button in the upper right corner to release your application. You and users can access your application through the publicly available URL. You can also customize the application name, introduction, icon, and other details in the settings.

-<figure><img src="../../../.gitbook/assets/screenshot-20230802-142407.png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/screenshot-20230802-142407.png" alt=""><figcaption></figcaption></figure>

That's how you create a simple AI application using Dify. You can also deploy your application on other platforms or modify its UI using the generated API. Additionally, Dify supports uploading your own data, such as building a customer service bot to assist with product-related queries. This concludes the tutorial, and a special thanks to @goocarlos for creating such a fantastic product.


@@ -1,4 +1,4 @@
-# AI ChatBot with Business Data
+# Create an AI Chatbot with Business Data in Minutes

AI-powered customer service may be a standard feature for every business website, and it is becoming easier to implement with higher levels of customization. The following content will guide you on how to create an AI-powered customer service for your website in just a few minutes using Dify.

@@ -21,7 +21,7 @@ If you want to build an AI Chatbot based on the company's existing knowledge bas

3. select the cleaning method
4. Click \[Save and Process], and it will take only a few seconds to complete the processing.

-<figure><img src="../../../.gitbook/assets/image (41).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (41).png" alt=""><figcaption></figcaption></figure>

### Create an AI application and give it instructions

@@ -39,30 +39,30 @@ In this case, we assign a role to the AI:

> Opening remarksHey \{{username\}}, I'm Bob☀, the first AI member of Dify. You can discuss with me any questions related to Dify products, team, and even LLMOps.

-<figure><img src="../../../.gitbook/assets/image (53).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (53).png" alt=""><figcaption></figcaption></figure>

### Debug the performance of AI Chatbot and publish.

After completing the setup, you can send messages to it on the right side of the current page to debug whether its performance meets expectations. Then click "Publish". And then you get an AI chatbot.

-<figure><img src="../../../.gitbook/assets/image (56).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (56).png" alt=""><figcaption></figcaption></figure>

### Embed AI Chatbot application into your front-end page.

This step is to embed the prepared AI chatbot into your official website . Click \[Overview] -> \[Embedded], select the script tag method, and copy the script code into the \<head> or \<body> tag of your website. If you are not a technical person, you can ask the developer responsible for the official website to paste and update the page.

-<figure><img src="../../../.gitbook/assets/image (34).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (34).png" alt=""><figcaption></figcaption></figure>

1. Paste the copied code into the target location on your website.

-<figure><img src="../../../.gitbook/assets/image (26).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (26).png" alt=""><figcaption></figcaption></figure>

1. Update your official website and you can get an AI intelligent customer service with your business data. Try it out to see the effect.

-<figure><img src="../../../.gitbook/assets/image (19).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (19).png" alt=""><figcaption></figcaption></figure>

Above is an example of how to embed Dify into the official website through the AI chatbot Bob of Dify official website. Of course, you can also use more features provided by Dify to enhance the performance of the chatbot, such as adding some variable settings, so that users can fill in necessary judgment information before interaction, such as name, specific product used and so on.

Welcome to explore in Dify together!

-<figure><img src="../../../.gitbook/assets/image (25).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/image (25).png" alt=""><figcaption></figcaption></figure>