GITBOOK-41: Restructure

pull/34/head
Chenhe Gu 2024-01-23 11:42:21 +00:00 committed by gitbook-bot
parent e7beadfbcd
commit ef29e525b7
57 changed files with 366 additions and 242 deletions


@@ -3,88 +3,86 @@
## Getting Started
* [Welcome to Dify!](README.md)
* [Specifications and Technical Features](getting-started/readme/specifications-and-technical-features.md)
* [Cloud](getting-started/cloud.md)
* [Install(Self hosted)](getting-started/install-self-hosted/README.md)
* [Technical Spec](getting-started/readme/specifications-and-technical-features.md)
* [Using Dify Cloud](getting-started/cloud.md)
* [Install (Self hosted)](getting-started/install-self-hosted/README.md)
* [Docker Compose Deployment](getting-started/install-self-hosted/docker-compose.md)
* [Local Source Code Start](getting-started/install-self-hosted/local-source-code.md)
* [Start the frontend Docker container separately](getting-started/install-self-hosted/start-the-frontend-docker-container.md)
* [Environments](getting-started/install-self-hosted/environments.md)
* [FAQ](getting-started/install-self-hosted/install-faq.md)
* [What is LLMOps?](getting-started/what-is-llmops.md)
* [FAQ](getting-started/faq/README.md)
* [Install FAQ](getting-started/faq/install-faq.md)
* [LLMs-use-FAQ](getting-started/faq/llms-use-faq.md)
* [API-use-FAQ](getting-started/faq/api-use-faq.md)
## Application
## User Guide
* [Creating An Application](application/creating-an-application.md)
* [Launch the WebApp quickly](application/launch-webapp.md)
* [Prompt Engineering](application/prompt-engineering/README.md)
* [Text Generator](application/prompt-engineering/text-generation-application.md)
* [Conversation Application](application/prompt-engineering/conversation-application.md)
* [External-data-tool](application/prompt-engineering/external\_data\_tool.md)
* [Moderation](application/prompt-engineering/moderation\_tool.md)
* [Developing with APIs](application/developing-with-apis.md)
* [Logs & Annotations](application/logs.md)
* [Creating Dify Apps](user-guide/creating-dify-apps/README.md)
* [Quickstart](user-guide/creating-dify-apps/creating-an-application.md)
* [Overview](user-guide/creating-dify-apps/overview.md)
* [Setting Prompts](user-guide/creating-dify-apps/prompt-engineering/README.md)
* [Chat App](user-guide/creating-dify-apps/prompt-engineering/conversation-application.md)
* [Text Generator](user-guide/creating-dify-apps/prompt-engineering/text-generation-application.md)
* [FAQ](user-guide/creating-dify-apps/llms-use-faq.md)
* [Use Cases](user-guide/creating-dify-apps/use-cases/README.md)
* [Notion AI Assistant Based on Your Own Notes](user-guide/creating-dify-apps/use-cases/build-an-notion-ai-assistant.md)
* [AI ChatBot with Business Data](user-guide/creating-dify-apps/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md)
* [Midjourney Prompt Bot](user-guide/creating-dify-apps/use-cases/create-a-midjourney-prompt-bot-with-dify.md)
* [Launching Dify Apps](user-guide/launching-dify-apps/README.md)
* [Quickstart](user-guide/launching-dify-apps/launch-webapp.md)
* [Developing with APIs](user-guide/launching-dify-apps/developing-with-apis/README.md)
* [FAQ](user-guide/launching-dify-apps/developing-with-apis/api-use-faq.md)
* [Using Dify Apps](user-guide/using-dify-apps/README.md)
* [Text Generator](user-guide/using-dify-apps/text-generator.md)
* [Chat App](user-guide/using-dify-apps/conversation-application.md)
* [Further Chat App Settings](user-guide/using-dify-apps/chat.md)
## web application
## Features
* [Overview](web-application/overview.md)
* [Text Generator](web-application/text-generator.md)
* [Conversation Application](web-application/conversation-application.md)
## Explore
* [Discovery](explore/app.md)
* [Chat](explore/chat.md)
## Advanced
* [Expert Mode for Prompt Engineering](advanced/prompt-engineering/README.md)
* [Prompt Template](advanced/prompt-engineering/prompt-template.md)
* [RAG (Retrieval Augmented Generation)](advanced/retrieval-augment/README.md)
* [Hybrid Search](advanced/retrieval-augment/hybrid-search.md)
* [Rerank](advanced/retrieval-augment/rerank.md)
* [Retrieval](advanced/retrieval-augment/retrieval.md)
* [Knowledge\&Index](advanced/datasets/README.md)
* [Sync from Notion](advanced/datasets/sync-from-notion.md)
* [Maintain Knowledge Via Api](advanced/datasets/maintain-dataset-via-api.md)
* [Annotation Reply](advanced/annotation-reply.md)
* [Plugins](advanced/ai-plugins/README.md)
* [Based on WebApp Template](advanced/ai-plugins/based-on-frontend-templates.md)
* [Model Configuration](advanced/model-configuration/README.md)
* [Hugging Face](advanced/model-configuration/hugging-face.md)
* [Replicate](advanced/model-configuration/replicate.md)
* [Xinference](advanced/model-configuration/xinference.md)
* [OpenLLM](advanced/model-configuration/openllm.md)
* [LocalAI](advanced/model-configuration/localai.md)
* [Ollama](advanced/model-configuration/ollama.md)
* [More Integration](advanced/more-integration.md)
* [Extension](advanced/extension/README.md)
* [API Based Extension](advanced/extension/api\_based\_extension/README.md)
* [External\_data\_tool](advanced/extension/api\_based\_extension/external\_data\_tool.md)
* [Deploy to Cloudflare Workers](advanced/extension/api\_based\_extension/cloudflare\_workers.md)
* [Moderation Extension](advanced/extension/api\_based\_extension/moderation-extension.md)
* [Code-based Extension](advanced/extension/code-based-extension.md)
* [Prompting Expert Mode](features/prompt-engineering/README.md)
* [Prompt Template](features/prompt-engineering/prompt-template.md)
* [RAG (Retrieval Augmented Generation)](features/retrieval-augment/README.md)
* [Hybrid Search](features/retrieval-augment/hybrid-search.md)
* [Rerank](features/retrieval-augment/rerank.md)
* [Retrieval](features/retrieval-augment/retrieval.md)
* [Knowledge Import](features/datasets/README.md)
* [Sync from Notion](features/datasets/sync-from-notion.md)
* [Maintain Knowledge Via Api](features/datasets/maintain-dataset-via-api.md)
* [External Data Tool](features/external\_data\_tool.md)
* [Annotation Reply](features/annotation-reply.md)
* [Logs & Annotations](features/logs.md)
* [Plugins](features/ai-plugins/README.md)
* [Based on WebApp Template](features/ai-plugins/based-on-frontend-templates.md)
* [More Integration](features/more-integration.md)
* [Extension](features/extension/README.md)
* [API Based Extension](features/extension/api\_based\_extension/README.md)
* [External\_data\_tool](features/extension/api\_based\_extension/external\_data\_tool.md)
* [Moderation Extension](features/extension/api\_based\_extension/moderation-extension.md)
* [Code-based Extension](features/extension/code-based-extension.md)
* [Moderation](features/moderation\_tool.md)
## workspace
* [Discovery](workspace/app.md)
* [Billing](workspace/billing.md)
## use cases
## Tutorials
* [How to Build an Notion AI Assistant Based on Your Own Notes?](use-cases/build-an-notion-ai-assistant.md)
* [Create an AI ChatBot with Business Data in Minutes](use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md)
* [Create a Midjoureny Prompt Bot Without Code in Just a Few Minutes](use-cases/create-a-midjoureny-prompt-bot-with-dify.md)
* [Expose API Extension on public Internet using Cloudflare Workers](tutorials/cloudflare\_workers.md)
* [Connecting with Different Models](tutorials/model-configuration/README.md)
* [Hugging Face](tutorials/model-configuration/hugging-face.md)
* [Replicate](tutorials/model-configuration/replicate.md)
* [Xinference](tutorials/model-configuration/xinference.md)
* [OpenLLM](tutorials/model-configuration/openllm.md)
* [LocalAI](tutorials/model-configuration/localai.md)
* [Ollama](tutorials/model-configuration/ollama.md)
## Community
* [Contributing](community/contributing.md)
* [Support](community/support.md)
* [Open-Source License](community/open-source.md)
* [Data Security](community/data-security.md)
## User Agreement
* [Terms of Service](user-agreement/terms-of-service.md)
* [Open-Source License](user-agreement/open-source.md)
* [Data Security](user-agreement/data-security.md)
* [Privacy Policy](user-agreement/privacy-policy.md)


@@ -0,0 +1,154 @@
# Contributing
So you're looking to contribute to Dify - that's awesome; we can't wait to see what you do. As a startup with limited headcount and funding, we have grand ambitions to design the most intuitive workflow for building and managing LLM applications. Any help from the community counts, truly.
We need to be nimble and ship fast given where we are, but we also want to make sure that contributors like you get as smooth a contributing experience as possible. We've assembled this contribution guide for that purpose, aiming to familiarize you with the codebase and how we work with contributors, so you can quickly jump to the fun part.
This guide, like Dify itself, is a constant work in progress. We highly appreciate your understanding if at times it lags behind the actual project, and welcome any feedback for us to improve.
In terms of licensing, please take a minute to read our short License and Contributor Agreement. The community also adheres to the [code of conduct](https://github.com/langgenius/.github/blob/main/CODE\_OF\_CONDUCT.md).
### Before you jump in
[Find](https://github.com/langgenius/dify/issues?q=is:issue+is:closed) an existing issue, or [open](https://github.com/langgenius/dify/issues/new/choose) a new one. We categorize issues into 2 types:
#### Feature requests:
* If you're opening a new feature request, we'd like you to explain what the proposed feature achieves, and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has made a solid [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try.
* If you want to pick one up from the existing issues, simply drop a comment below it saying so.
A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes.
Depending on which area the proposed feature falls under, you might talk to different team members. Here's a rundown of the areas each of our team members is currently working on:
| Member | Scope |
| --------------------------------------------------------------------------------------- | ---------------------------------------------------- |
| [@yeuoly](https://github.com/Yeuoly) | Architecting Agents |
| [@jyong](https://github.com/JohnJyong) | RAG pipeline design |
| [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations |
| [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend a breeze to use |
| [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, points of contact for anything |
| [@takatost](https://github.com/takatost) | Overall product direction and architecture |
How we prioritize:
| Feature Type | Priority |
| --------------------------------------------------------------------------------------- | --------------- |
| High-priority features, as labeled by a team member                                     | High Priority   |
| Popular feature requests from our [community feedback board](https://feedback.dify.ai/) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
#### Anything else (e.g. bug report, performance optimization, typo correction):
* Start coding right away.
How we prioritize:
| Issue Type | Priority |
| ----------------------------------------------------------------------------------- | --------------- |
| Bugs in core functions (cannot login, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, confusing but working UI) | Low Priority |
### Installing
Here are the steps to set up Dify for development:
#### 1. Fork this repository
#### 2. Clone the repo
Clone the forked repository from your terminal:
```
git clone git@github.com:<github_username>/dify.git
```
#### 3. Verify dependencies
Dify requires the following dependencies to build; make sure they're installed on your system:
* [Docker](https://www.docker.com/)
* [Docker Compose](https://docs.docker.com/compose/install/)
* [Node.js v18.x (LTS)](http://nodejs.org)
* [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
* [Python](https://www.python.org/) version 3.10.x
#### 4. Installations
Dify is composed of a backend and a frontend. Navigate to the backend directory by `cd api/`, then follow the Backend README to install it. In a separate terminal, navigate to the frontend directory by `cd web/`, then follow the Frontend README to install.
Check the [installation FAQ](https://docs.dify.ai/getting-started/faq/install-faq) for a list of common issues and steps to troubleshoot.
#### 5. Visit Dify in your browser
To validate your setup, head over to [http://localhost:3000](http://localhost:3000) (the default, or your self-configured URL and port) in your browser. You should now see Dify up and running.
### Developing
If you are adding a model provider, [this guide](https://github.com/langgenius/dify/blob/main/api/core/model\_runtime/README.md) is for you.
To help you quickly navigate where your contribution fits, a brief, annotated outline of Dify's backend & frontend is as follows:
#### Backend
Dify's backend is written in Python using [Flask](https://flask.palletsprojects.com/en/3.0.x/). It uses [SQLAlchemy](https://www.sqlalchemy.org/) as its ORM and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic is handled via Flask-Login.
```
[api/]
├── constants // Constant settings used throughout code base.
├── controllers // API route definitions and request handling logic.
├── core // Core application orchestration, model integrations, and tools.
├── docker // Docker & containerization related configurations.
├── events // Event handling and processing
├── extensions // Extensions with 3rd party frameworks/platforms.
├── fields // field definitions for serialization/marshalling.
├── libs // Reusable libraries and helpers.
├── migrations // Scripts for database migration.
├── models // Database models & schema definitions.
├── services // Specifies business logic.
├── storage // Private key storage.
├── tasks // Handling of async tasks and background jobs.
└── tests
```
#### Frontend
The website is bootstrapped from a [Next.js](https://nextjs.org/) boilerplate in TypeScript and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization.
```
[web/]
├── app // layouts, pages, and components
│ ├── (commonLayout) // common layout used throughout the app
│ ├── (shareLayout) // layouts specifically shared across token-specific sessions
│ ├── activate // activate page
│ ├── components // shared by pages and layouts
│ ├── install // install page
│ ├── signin // signin page
│ └── styles // globally shared styles
├── assets // Static assets
├── bin // scripts run at build time
├── config // adjustable settings and options
├── context // shared contexts used by different portions of the app
├── dictionaries // Language-specific translation files
├── docker // container configurations
├── hooks // Reusable hooks
├── i18n // Internationalization configuration
├── models // describes data models & shapes of API responses
├── public // meta assets like favicon
├── service // specifies shapes of API actions
├── test
├── types // descriptions of function params and return values
└── utils // Shared utility functions
```
### Submitting your PR
At last, it's time to open a pull request (PR) to our repo. For major features, we first merge them into the `deploy/dev` branch for testing, before they go into the `main` branch. If you run into issues like merge conflicts or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests).
And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md).
### Getting Help
If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/AhzKf7dNgk) for a quick chat.


@@ -1,9 +1,9 @@
# API-based Extension
# API Based Extension
Developers can extend module capabilities through the API extension module. Currently supported module extensions include:
* `moderation`&#x20;
* `external_data_tool`&#x20;
* `moderation`
* `external_data_tool`
Before extending module capabilities, prepare an API and an API Key for authentication, which can also be automatically generated by Dify. In addition to developing the corresponding module capabilities, follow the specifications below so that Dify can invoke the API correctly.
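As a rough sketch of the contract above, an extension endpoint validates the `Authorization: Bearer` header and then dispatches on the requested extension point. The point names and payload fields below follow the general shape described in this doc but are illustrative assumptions, not the authoritative specification:

```python
# Hypothetical sketch of an API-based extension handler.
# Point names and response fields are illustrative assumptions.

EXPECTED_API_KEY = "123456"  # the API Key you configure in Dify's settings


def handle_extension_request(auth_header: str, body: dict) -> dict:
    """Validate the Bearer token, then dispatch on the extension point."""
    if auth_header != f"Bearer {EXPECTED_API_KEY}":
        raise PermissionError("invalid or missing API key")

    point = body.get("point")
    if point == "app.external_data_tool.query":
        # Fetch external data (e.g. an internal knowledge base) here.
        return {"result": f"data for {body['params'].get('query')}"}
    if point == "app.moderation.input":
        # Run your own sensitive-word review here.
        return {"flagged": False}
    raise ValueError(f"unknown extension point: {point}")
```

The FastAPI example later in this section wraps the same idea in real HTTP routing; this stripped-down version only shows the auth check and dispatch.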
@@ -71,8 +71,7 @@ Authorization: Bearer {api_key}
}
```
\
\\
## For Example
@@ -117,7 +116,7 @@ Authorization: Bearer 123456
The code is based on the Python FastAPI framework.
#### **Install dependencies.** &#x20;
#### **Install dependencies.**
<pre><code><strong>pip install 'fastapi[all]' uvicorn
</strong></code></pre>
@@ -264,4 +263,4 @@ Now, this API endpoint is accessible publicly. You can configure this endpoint i
We recommend that you use Cloudflare Workers to deploy your API extension, because Cloudflare Workers can easily provide a public address and can be used for free.
[cloudflare\_workers.md](./cloudflare\_workers.md "mention")
[cloudflare\_workers.md](../../../tutorials/cloudflare\_workers.md "mention")


@ -1,6 +1,6 @@
# External-data-tool
# External Data Tool
Previously, [knowledge](../../advanced/datasets/ "mention") allowed developers to directly upload long texts in various formats and structured data to build knowledge, enabling AI applications to converse based on the latest context uploaded by users. With this update, the external data tool empowers developers to use their own search capabilities or external data such as internal knowledge bases as the context for LLMs. This is achieved by extending APIs to fetch external data and embedding it into Prompts. Compared to uploading knowledge to the cloud, using external data tools offers significant advantages in ensuring the security of private data, customizing searches, and obtaining real-time data.
Previously, [datasets](datasets/ "mention") allowed developers to directly upload long texts in various formats and structured data to build knowledge, enabling AI applications to converse based on the latest context uploaded by users. With this update, the external data tool empowers developers to use their own search capabilities or external data such as internal knowledge bases as the context for LLMs. This is achieved by extending APIs to fetch external data and embedding it into Prompts. Compared to uploading knowledge to the cloud, using external data tools offers significant advantages in ensuring the security of private data, customizing searches, and obtaining real-time data.
## What does it do?
@@ -8,23 +8,23 @@ When end-users make a request to the conversational system, the platform backend
## Quick Start
1. Before using the external data tool, you need to prepare an API and an API Key for authentication. Head to [external\_data\_tool.md](../../advanced/extension/api\_based\_extension/external\_data\_tool.md "mention").
1. Before using the external data tool, you need to prepare an API and an API Key for authentication. Head to [external\_data\_tool.md](extension/api\_based\_extension/external\_data\_tool.md "mention").
2. Dify offers centralized API management; after adding API extension configurations in the settings interface, they can be directly utilized across various applications on Dify.
<figure><img src="../../.gitbook/assets/api_based.png" alt=""><figcaption><p>API-based Extension<br></p></figcaption></figure>
<figure><img src="../.gitbook/assets/api_based.png" alt=""><figcaption><p>API-based Extension<br></p></figcaption></figure>
3. Taking "Query Weather" as an example, enter the name, API endpoint, and API Key in the "Add New API-based Extension" dialog box. After saving, we can then call the API.
<figure><img src="../../.gitbook/assets/api_based_extension.png" alt=""><figcaption><p>Weather Inquiry</p></figcaption></figure>
<figure><img src="../.gitbook/assets/api_based_extension.png" alt=""><figcaption><p>Weather Inquiry</p></figcaption></figure>
4. On the prompt orchestration page, click the "+ Add" button to the right of "Tools," and in the "Add Tool" dialog that opens, fill in the name and variable name (the variable name will be referenced in the Prompt, so please use English), as well as select the API-based extension added in Step 2.
<figure><img src="../../.gitbook/assets/api_based_extension1.png" alt=""><figcaption><p>External_data_tool</p></figcaption></figure>
<figure><img src="../.gitbook/assets/api_based_extension1.png" alt=""><figcaption><p>External_data_tool</p></figcaption></figure>
5. In the prompt orchestration box, we can assemble the queried external data into the Prompt. For instance, if we want to query today's weather in London, we can add a variable named `location`, enter "London", and combine it with the external data tool's extension variable name `weather_data`. The debug output would be as follows:
<figure><img src="../../.gitbook/assets/Weather_search_tool.jpeg" alt=""><figcaption><p>Weather_search_tool</p></figcaption></figure>
<figure><img src="../.gitbook/assets/Weather_search_tool.jpeg" alt=""><figcaption><p>Weather_search_tool</p></figcaption></figure>
In the Prompt Log, we can also see the real-time data returned by the API:
<figure><img src="../../.gitbook/assets/log.jpeg" alt="" width="335"><figcaption><p>Prompt Log</p></figcaption></figure>
<figure><img src="../.gitbook/assets/log.jpeg" alt="" width="335"><figcaption><p>Prompt Log</p></figcaption></figure>
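Conceptually, step 5 above boils down to plain variable substitution: the external data tool's API response fills its variable slot in the prompt before the LLM is called. A minimal sketch, where the placeholder syntax and the `weather_data` value are illustrative:

```python
def assemble_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its resolved value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", str(value))
    return template


# `weather_data` stands in for whatever the external data tool's API returned.
prompt = assemble_prompt(
    "Today's weather in {{location}}: {{weather_data}}. Suggest an outfit.",
    {"location": "London", "weather_data": "11°C, light rain"},
)
```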


@@ -2,7 +2,7 @@
In our interactions with AI applications, we often have stringent requirements in terms of content security, user experience, and legal regulations. At this point, we need the "Sensitive Word Review" feature to create a better interactive environment for end-users. On the prompt orchestration page, click "Add Function" and locate the "Content Review" toolbox at the bottom:
<figure><img src="../../.gitbook/assets/content_moderation.png" alt=""><figcaption><p>Content moderation</p></figcaption></figure>
<figure><img src="../.gitbook/assets/content_moderation.png" alt=""><figcaption><p>Content moderation</p></figcaption></figure>
## Call the OpenAI Moderation API
@@ -10,17 +10,16 @@ OpenAI, along with most companies providing LLMs, includes content moderation fe
Now you can also directly call the OpenAI Moderation API on Dify; you can review either input or output content simply by entering the corresponding "preset reply."
<figure><img src="../../.gitbook/assets/moderation2.png" alt=""><figcaption><p>OpenAI Moderation</p></figcaption></figure>
<figure><img src="../.gitbook/assets/moderation2.png" alt=""><figcaption><p>OpenAI Moderation</p></figcaption></figure>
## Keywords
Developers can customize the sensitive words they need to review. For example, with "kill" set as a keyword and the preset reply set to "The content is violating usage policies.", any user input containing "kill" will trigger the sensitive word review tool and return the preset reply instead of a model response.
<figure><img src="../../.gitbook/assets/moderation3.png" alt=""><figcaption><p>Keywords</p></figcaption></figure>
<figure><img src="../.gitbook/assets/moderation3.png" alt=""><figcaption><p>Keywords</p></figcaption></figure>
## Moderation Extension
Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge base ChatBot, enterprises need to moderate the query content input by employees for sensitive words. For this purpose, developers can write an API extension based on their enterprise's internal sensitive word moderation mechanisms, specifically referring to [moderation-extension.md](../../advanced/extension/api\_based\_extension/moderation-extension.md "mention"), which can then be called on Dify to achieve a high degree of customization and privacy protection for sensitive word review.
<figure><img src="../../.gitbook/assets/moderation4.png" alt=""><figcaption><p>Moderation Extension</p></figcaption></figure>
Different enterprises often have their own mechanisms for sensitive word moderation. When developing their own AI applications, such as an internal knowledge base ChatBot, enterprises need to moderate the query content input by employees for sensitive words. For this purpose, developers can write an API extension based on their enterprise's internal sensitive word moderation mechanisms, specifically referring to [moderation-extension.md](extension/api\_based\_extension/moderation-extension.md "mention"), which can then be called on Dify to achieve a high degree of customization and privacy protection for sensitive word review.
<figure><img src="../.gitbook/assets/moderation4.png" alt=""><figcaption><p>Moderation Extension</p></figcaption></figure>


@ -1,6 +1,6 @@
# Expert Mode for Prompt Engineering
# Prompting Expert Mode
Currently, the orchestration for creating apps in Dify is set to **Basic Mode** by default. This is ideal for non-tech-savvy individuals who want to quickly make an app. For example, if you want to create a corporate knowledge-base ChatBot or an article summary Generator, you can use the **Basic Mode** to design `Pre-prompt` words, add `Query`, integrate `Context`, and other straightforward steps to launch a complete app. For more head to 👉 [text-generation-application.md](../../application/prompt-engineering/text-generation-application.md "mention") and [conversation-application.md](../../application/prompt-engineering/conversation-application.md "mention").
Currently, the orchestration for creating apps in Dify is set to **Basic Mode** by default. This is ideal for non-tech-savvy individuals who want to quickly make an app. For example, if you want to create a corporate knowledge-base ChatBot or an article summary Generator, you can use the **Basic Mode** to design `Pre-prompt` words, add `Query`, integrate `Context`, and other straightforward steps to launch a complete app. For more head to 👉 [text-generation-application.md](../../user-guide/creating-dify-apps/prompt-engineering/text-generation-application.md "mention") and [conversation-application.md](../../user-guide/creating-dify-apps/prompt-engineering/conversation-application.md "mention").
💡However, if you're a developer who has conducted in-depth research on prompts and wants to design them in a more customized manner, you should opt for the **Expert Mode**. In this mode, you are granted permission to customize comprehensive prompts rather than using the pre-packaged prompts from Dify. You can modify the built-in prompts, rearrange the placement of `Context` and `History`, set necessary parameters, and more. If you're familiar with OpenAI's Playground, you can get up to speed with this mode more quickly.
@@ -12,8 +12,7 @@ Well, before you try the new mode, you should be aware of some essential element
When choosing a model, if you see "COMPLETE" on the right side of the model name, it indicates a Text completion model e.g. <img src="../../.gitbook/assets/screenshot-20231017-092613.png" alt="" data-size="line">
This type of model accepts a freeform text string and generates a text completion, attempting to match any context or pattern you provide. For example, if you write the prompt `As René Descartes said, "I think, therefore"`, it's highly likely that the model will return `"I am."` as the completion.\
This type of model accepts a freeform text string and generates a text completion, attempting to match any context or pattern you provide. For example, if you write the prompt `As René Descartes said, "I think, therefore"`, it's highly likely that the model will return `"I am."` as the completion.\\
* **Chat**
When choosing a model, if you see "CHAT" on the right side of the model name, it indicates a Chat completions model e.g. <img src="../../.gitbook/assets/screenshot-20231017-092957.png" alt="" data-size="line">
@@ -28,31 +27,24 @@ Well, before you try the new mode, you should be aware of some essential element
User messages provide requests or comments for the AI assistant to respond to.
* `ASSISTANT`
Assistant messages store previous assistant responses, but they can also be written by you to provide examples of desired behavior.\
Assistant messages store previous assistant responses, but they can also be written by you to provide examples of desired behavior.\\
* **Stop\_Sequences**
Stop\_Sequences refers to specific words, phrases, or characters used to send a signal to LLM to stop generating text.\
Stop\_Sequences refers to specific words, phrases, or characters used to send a signal to LLM to stop generating text.\\
* **Blocks**
<img src="../../.gitbook/assets/Context.png" alt="" data-size="line">
When users input a query, the app processes the query as search criteria for the knowledge. The organized results from the search then replace the variable `Context`, allowing the LLM to reference the content for its response.
<img src="../../.gitbook/assets/QUERY.png" alt="" data-size="line">
The query content is only available in the Text completion models of conversational applications. The content entered by the user during the conversation will replace this variable, initiating a new turn of dialogue.
<img src="../../.gitbook/assets/history (1).png" alt="" data-size="line">
The conversation history is only available in the Text completion model of conversational applications. When engaging in multiple conversations in dialogue applications, Dify will assemble and concatenate the historical dialogue records according to built-in rules and replace the 'Conversation History' variable. The `Human` and `Assistant` prefixes can be modified by clicking on the `...` after "Conversation History".\
* #### **Prompt Template**
The conversation history is only available in the Text completion model of conversational applications. When engaging in multiple conversations in dialogue applications, Dify will assemble and concatenate the historical dialogue records according to built-in rules and replace the 'Conversation History' variable. The `Human` and `Assistant` prefixes can be modified by clicking on the `...` after "Conversation History".\\
* **Prompt Template**
In this mode, before formal orchestration, an initial template is provided in the prompt box. We can directly modify this template to have more customized requirements for LLM. Different types of applications have variations in different modes.
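To make the Stop\_Sequences idea above concrete: generation halts at the first occurrence of any stop string, and everything from that point on is trimmed. A rough post-processing sketch, assuming the model's raw output is available as a string:

```python
def apply_stop_sequences(text: str, stops: list[str]) -> str:
    """Truncate generated text at the earliest stop sequence, if any."""
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

For example, stopping on the `Human:` prefix keeps a chat model from generating the user's next turn on the user's behalf.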
@@ -62,13 +54,13 @@ Well, before you try the new mode, you should be aware of some essential element
## Comparison of the two modes
<table><thead><tr><th width="333">Comparison Dimension</th><th width="197">Basic Mode </th><th>Expert Mode</th></tr></thead><tbody><tr><td>Visibility of Built-in Prompts</td><td>Invisible</td><td>Visible</td></tr><tr><td>Automatic Design</td><td>Available</td><td>Disabled</td></tr><tr><td>Variable Insertion</td><td>Available</td><td>Available</td></tr><tr><td>Block Validation</td><td>Disabled</td><td>Available</td></tr><tr><td>SYSTEM / USER / ASSISTANT </td><td>Invisible</td><td>Visible</td></tr><tr><td>Context parameter settings</td><td>Available</td><td>Available</td></tr><tr><td>PROMPT LOG</td><td>Available</td><td>Available</td></tr><tr><td>Stop_Sequences </td><td>Disabled</td><td>Available</td></tr></tbody></table>
<table><thead><tr><th width="333">Comparison Dimension</th><th width="197">Basic Mode</th><th>Expert Mode</th></tr></thead><tbody><tr><td>Visibility of Built-in Prompts</td><td>Invisible</td><td>Visible</td></tr><tr><td>Automatic Design</td><td>Available</td><td>Disabled</td></tr><tr><td>Variable Insertion</td><td>Available</td><td>Available</td></tr><tr><td>Block Validation</td><td>Disabled</td><td>Available</td></tr><tr><td>SYSTEM / USER / ASSISTANT</td><td>Invisible</td><td>Visible</td></tr><tr><td>Context parameter settings</td><td>Available</td><td>Available</td></tr><tr><td>PROMPT LOG</td><td>Available</td><td>Available</td></tr><tr><td>Stop_Sequences</td><td>Disabled</td><td>Available</td></tr></tbody></table>
## Operation Guide
### 1. How to enter the Expert Mode
After creating an application, you can switch to the **Expert Mode** on the prompt design page.&#x20;
After creating an application, you can switch to the **Expert Mode** on the prompt design page.
<figure><img src="../../.gitbook/assets/000.png" alt=""><figcaption><p>Access to the <strong>Expert Mode</strong></p></figcaption></figure>
@ -88,8 +80,7 @@ Please note that only after uploading the context, the built-in prompts containi
**TopK:** The value is an integer from 1 to 10.
It is used to filter the text fragments with the highest similarity to the user's query. The system will also dynamically adjust the number of fragments based on the context window size of the selected model. The default system value is 2. This value is recommended to be set between 2 and 5, because we expect to get answers that match the embedded context more closely.\
It is used to filter the text fragments with the highest similarity to the user's query. The system also dynamically adjusts the number of fragments according to the context window size of the selected model. The system default is 2, and we recommend a value between 2 and 5, since the goal is answers that closely match the embedded context.\\
**Score Threshold:** The value is a floating-point number from 0 to 1, with two decimal places.
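Taken together, TopK and Score Threshold act as a filter over the retrieved fragments; a minimal sketch with hypothetical similarity scores (not Dify's actual implementation):

```typescript
// Hypothetical retrieved fragment with a similarity score in [0, 1].
interface Fragment {
  text: string;
  score: number;
}

// Keep at most topK fragments whose similarity meets the threshold,
// ordered from most to least similar.
function selectFragments(
  fragments: Fragment[],
  topK: number = 2,          // default system value, per the text above
  scoreThreshold: number = 0 // 0 disables threshold filtering
): Fragment[] {
  return fragments
    .filter((f) => f.score >= scoreThreshold)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

const hits = selectFragments(
  [
    { text: "A", score: 0.92 },
    { text: "B", score: 0.41 },
    { text: "C", score: 0.77 },
  ],
  2,
  0.5
);
// hits keeps the "A" and "C" fragments, most similar first
```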
@ -121,7 +112,7 @@ The soil is yellow.
Because LLM stops generating content before the next `Human1:`.
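Conceptually, a stop sequence truncates the completion at the first occurrence of the sequence (in practice the model stops generating server-side; this sketch only illustrates the effect):

```typescript
// Illustrative only: cut a completion at the first occurrence of any
// stop sequence, so text after e.g. "Human1:" is never returned.
function applyStopSequences(completion: string, stops: string[]): string {
  let cut = completion.length;
  for (const stop of stops) {
    const i = completion.indexOf(stop);
    if (i !== -1 && i < cut) cut = i;
  }
  return completion.slice(0, cut);
}

applyStopSequences("The soil is yellow.\nHuman1: Why?", ["Human1:"]);
// → "The soil is yellow.\n"
```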
### 4.Use "/" to insert Variables and Blocks
### 4. Use "/" to insert Variables and Blocks
You can enter "/" in the text editor to quickly bring up Blocks to insert into the prompt.
@ -184,7 +175,6 @@ From the log, we can view the complete prompts that have been assembled by the s
In the initial application's main interface, you can find "Logs & Ann." in the left-side navigation bar. Clicking on it will allow you to view the complete logs. In the "Logs & Ann." main interface, you can click on any conversation log entry. In the right-side dialog box that appears, simply move the mouse pointer over the conversation and then click the "Log" button to check the Prompt Log.
For more head to 👉 [logs.md](../../application/logs.md "mention") .
For more, head to 👉 [logs.md](../logs.md "mention").
<figure><img src="../../.gitbook/assets/33333.png" alt=""><figcaption><p>Logs &#x26; Ann.</p></figcaption></figure>


@ -1,4 +1,4 @@
# Cloud
# Using Dify Cloud
{% hint style="info" %}
Note: Dify is currently in the Beta testing phase. If there are inconsistencies between the documentation and the product, please refer to the actual product experience.
@ -8,6 +8,6 @@ Dify offers a [cloud service](http://cloud.dify.ai) for everyone, so you can use
1. Log in to [Dify Cloud](https://cloud.dify.ai) and create a new Workspace or join an existing one
2. Configure your model provider or use our hosted model provider
3. You can [create an application](../application/creating-an-application.md) now!
3. You can [create an application](../user-guide/creating-dify-apps/creating-an-application.md) now!
Currently, we don't have a pricing plan. If you like this LLMOps product, please introduce it to your friends😄.


@ -1,2 +0,0 @@
# FAQ


@ -4,16 +4,14 @@ description: >-
serves as a shortcut to understand Dify's unique advantages
---
# Specifications and Technical Features
# Technical Spec
We maintain transparency around product specifications so that you can make decisions with complete information. Such transparency not only helps your technical selection, but also promotes deeper understanding within the community and encourages active contributions.
### Project Basics
<table data-header-hidden><thead><tr><th width="341"></th><th></th></tr></thead><tbody><tr><td>Established</td><td>March 2023</td></tr><tr><td>Open Source License</td><td><a href="../../community/open-source.md">Apache License 2.0 with commercial licensing</a></td></tr><tr><td>Official R&#x26;D Team</td><td>Over 10 full-time employees</td></tr><tr><td>Community Contributors</td><td>Over 60 people</td></tr><tr><td>Backend Technology</td><td>Python/Flask/PostgreSQL</td></tr><tr><td>Frontend Technology</td><td>Next.js</td></tr><tr><td>Codebase Size</td><td>Over 130,000 lines</td></tr><tr><td>Release Frequency</td><td>Average once per week</td></tr></tbody></table>
<table data-header-hidden><thead><tr><th width="341"></th><th></th></tr></thead><tbody><tr><td>Established</td><td>March 2023</td></tr><tr><td>Open Source License</td><td><a href="../../user-agreement/open-source.md">Apache License 2.0 with commercial licensing</a></td></tr><tr><td>Official R&#x26;D Team</td><td>Over 10 full-time employees</td></tr><tr><td>Community Contributors</td><td>Over 60 people</td></tr><tr><td>Backend Technology</td><td>Python/Flask/PostgreSQL</td></tr><tr><td>Frontend Technology</td><td>Next.js</td></tr><tr><td>Codebase Size</td><td>Over 130,000 lines</td></tr><tr><td>Release Frequency</td><td>Average once per week</td></tr></tbody></table>
### Technical Features
<table data-header-hidden><thead><tr><th width="240"></th><th></th></tr></thead><tbody><tr><td>LLM Inference Engines</td><td>Dify Runtime (LangChain removed since v0.4)</td></tr><tr><td>Commercial Models Supported</td><td><strong>10+</strong>, including OpenAI and Anthropic<br>Onboard new mainstream models within 48 hours</td></tr><tr><td>MaaS Vendor Supported</td><td><strong>2</strong>, Hugging Face and Replicate</td></tr><tr><td>Local Model Inference Runtimes Supported</td><td><strong>4</strong>, Xorbits Inference (recommended), OpenLLM, LocalAI, ChatGLM</td></tr><tr><td>Multimodal Capabilities</td><td><p>ASR Models</p><p>Rich-text models up to GPT-4V specs</p></td></tr><tr><td>Built-in App Types</td><td>Text generation, Conversational</td></tr><tr><td>Prompt-as-a-Service Orchestration</td><td><p>Visual orchestration interface widely praised, modify Prompts and preview effects in one place.<br></p><p><strong>Orchestration Modes</strong></p><ul><li>Simple orchestration</li><li>Advanced orchestration</li><li>Assistant orchestration (launching Jan 2024)</li><li>Flow orchestration (Q1 2024)</li></ul><p><strong>Prompt Variable Types</strong></p><ul><li>String</li><li>Radio enum</li><li>External API</li><li>File (Jan 2024)</li></ul></td></tr><tr><td>RAG Features</td><td><p>Industry-first visual knowledge base management interface, supporting snippet previews and recall testing.</p><p><strong>Indexing Methods</strong></p><ul><li>Keywords</li><li>Text vectors</li><li>LLM-assisted question-snippet model</li></ul><p><strong>Retrieval Methods</strong></p><ul><li>Keywords</li><li>Text similarity matching</li><li>N choose 1</li><li>Multi-path recall</li></ul><p><strong>Recall Optimization</strong></p><ul><li>Re-rank models</li></ul></td></tr><tr><td>ETL Capabilities</td><td><p>Automated cleaning for TXT, Markdown, PDF, HTML, DOC, CSV formats. Unstructured service enables maximum support.</p><p>Sync Notion docs as knowledge bases.</p></td></tr><tr><td>Vector Databases Supported</td><td>Qdrant (recommended), Weaviate, Zilliz</td></tr><tr><td>Agent Technologies</td><td><p>ReAct, Function Call.<br></p><p><strong>Tooling Support</strong></p><ul><li>Invoke OpenAI Plugin standard tools (Q4 2023)</li><li>Directly load OpenAPI Specification APIs as tools</li></ul><p><strong>Built-in Tools</strong></p><ul><li>3 tools</li></ul></td></tr><tr><td>Logging</td><td>Supported, annotations based on logs</td></tr><tr><td>Annotation Reply</td><td>Based on human-annotated Q&#x26;As, used for similarity-based replies. Exportable as data format for model fine-tuning.</td></tr><tr><td>Content Moderation</td><td>OpenAI Moderation or external APIs</td></tr><tr><td>Team Collaboration</td><td>Workspaces, multi-member management</td></tr><tr><td>API Specs</td><td>RESTful, most features covered</td></tr><tr><td>Deployment Methods</td><td>Docker, Helm</td></tr></tbody></table>


@ -1,4 +1,4 @@
# Deploying API Extension to Cloudflare Workers
# Expose API Extension on public Internet using Cloudflare Workers
## Getting Started
@ -43,19 +43,12 @@ npm run deploy
After successful deployment, you will get a public internet address, which you can add in Dify as an API Endpoint. Be careful not to omit the `endpoint` path.
<figure><img src="../../../.gitbook/assets/api_extension_edit.png" alt="">
<figcaption><p>
Adding API Endpoint in Dify
</p></figcaption>
</figure>
<figure><img src="../.gitbook/assets/api_extension_edit.png" alt=""><figcaption><p>Adding API Endpoint in Dify</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/app_tools_edit.png" alt="">
<figcaption><p>
Adding API Tool in the App edit page
</p></figcaption>
</figure>
<figure><img src="../.gitbook/assets/app_tools_edit.png" alt=""><figcaption><p>Adding API Tool in the App edit page</p></figcaption></figure>
## Other Logic TL;DR
### About Bearer Auth
```typescript
@ -91,8 +84,7 @@ const schema = z.object({
});
```
We use `zod` to define the types of parameters. You can use `zValidator` in `src/index.ts` for parameter validation. Get validated parameters through ` const { point, params } = c.req.valid("json");`. Our point has only two values, so we use `z.union` for definition.
`params` is an optional parameter, defined with `z.optional`. It includes a `inputs` parameter, a `Record<string, any>` type representing an object with string keys and any values. This type can represent any object. You can get the `count` parameter in `src/index.ts` using `params?.inputs?.count`.
We use `zod` to define the parameter types. You can use `zValidator` in `src/index.ts` for parameter validation and read the validated parameters through `const { point, params } = c.req.valid("json");`. Our `point` has only two values, so we use `z.union` for the definition. `params` is an optional parameter, defined with `z.optional`. It includes an `inputs` parameter of type `Record<string, any>`, an object with string keys and arbitrary values, which can represent any object. You can read the `count` parameter in `src/index.ts` via `params?.inputs?.count`.
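For illustration, here is the same union-and-optional semantics enforced by a dependency-free check (the real extension uses `zod` and `zValidator` exactly as described above; the point values below are only examples):

```typescript
// Example point values; your extension's schema defines the real ones.
type Point = "ping" | "app.external_data_tool.query";

interface Payload {
  point: Point;
  params?: { inputs?: Record<string, any> };
}

// Mirrors what the schema enforces: `point` must be one of the allowed
// literals (z.union), while `params` may be absent (z.optional).
function validate(body: any): Payload {
  const points: Point[] = ["ping", "app.external_data_tool.query"];
  if (!points.includes(body?.point)) {
    throw new Error("invalid point");
  }
  return body as Payload;
}

const { point, params } = validate({
  point: "app.external_data_tool.query",
  params: { inputs: { count: 3 } },
});
// Optional chaining, as in src/index.ts:
const count = params?.inputs?.count;
```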
### Accessing Logs of Cloudflare Workers
@ -102,6 +94,6 @@ wrangler tail
## Reference Content
- [Cloudflare Workers](https://workers.cloudflare.com/)
- [Cloudflare Workers CLI](https://developers.cloudflare.com/workers/cli-wrangler/install-update)
- [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers)
* [Cloudflare Workers](https://workers.cloudflare.com/)
* [Cloudflare Workers CLI](https://developers.cloudflare.com/workers/cli-wrangler/install-update)
* [Example GitHub Repository](https://github.com/crazywoola/dify-extension-workers)


@ -0,0 +1,2 @@
# Creating Dify Apps


@ -1,4 +1,4 @@
# Creating An Application
# Quickstart
In Dify, an "application" refers to a real-world scenario application built on large language models such as GPT. By creating an application, you can apply intelligent AI technology to specific needs. It encompasses both the engineering paradigms for developing AI applications and the specific deliverables.
@ -14,13 +14,13 @@ You can choose one or all of them to support your AI application development.
Dify offers two types of applications: text generation and conversational. More application paradigms may appear in the future, and the ultimate goal of Dify is to cover more than 80% of typical LLM application scenarios. The differences between text generation and conversational applications are shown in the table below:
<table><thead><tr><th width="199.33333333333331"> </th><th>Text Generation</th><th>Conversational</th></tr></thead><tbody><tr><td>WebApp Interface</td><td>Form + Results</td><td>Chat style</td></tr><tr><td>API Endpoint</td><td><code>completion-messages</code></td><td><code>chat-messages</code></td></tr><tr><td>Interaction Mode</td><td>One question and one answer</td><td>Multi-turn dialogue</td></tr><tr><td>Streaming results return</td><td>Supported</td><td>Supported</td></tr><tr><td>Context Preservation</td><td>Current time</td><td>Continuous</td></tr><tr><td>User input form</td><td>Supported</td><td>Supported</td></tr><tr><td>Knowledge&#x26;Plugins</td><td>Supported</td><td>Supported</td></tr><tr><td>AI opening remarks</td><td>Not supported</td><td>Supported</td></tr><tr><td>Scenario example</td><td>Translation, judgment, indexing</td><td>Chat or everything</td></tr></tbody></table>
<table><thead><tr><th width="199.33333333333331"></th><th>Text Generator</th><th>Chat App</th></tr></thead><tbody><tr><td>WebApp Interface</td><td>Form + Results</td><td>Chat style</td></tr><tr><td>API Endpoint</td><td><code>completion-messages</code></td><td><code>chat-messages</code></td></tr><tr><td>Interaction Mode</td><td>One question and one answer</td><td>Multi-turn dialogue</td></tr><tr><td>Streaming results return</td><td>Supported</td><td>Supported</td></tr><tr><td>Context Preservation</td><td>Current time</td><td>Continuous</td></tr><tr><td>User input form</td><td>Supported</td><td>Supported</td></tr><tr><td>Knowledge&#x26;Plugins</td><td>Supported</td><td>Supported</td></tr><tr><td>AI opening remarks</td><td>Not supported</td><td>Supported</td></tr><tr><td>Scenario example</td><td>Translation, judgment, indexing</td><td>Chat or everything</td></tr></tbody></table>
### Steps to Create an Application
After logging in as an administrator in Dify, go to the application page in the main navigation, click "Create New Application", then choose a conversational or text generation application and give it a name (you can change it later).
<figure><img src="../.gitbook/assets/create a new App.png" alt=""><figcaption><p>Create a new App</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/create a new App.png" alt=""><figcaption><p>Create a new App</p></figcaption></figure>
We provide some templates in the application creation interface, and you can click to create from a template in the popup when creating an application. These templates will provide inspiration and reference for the application you want to develop.
@ -32,7 +32,7 @@ If you have obtained a template from the community or someone else, you can clic
If you are using it for the first time, you will be prompted to enter your OpenAI API key. A properly functioning LLM key is a prerequisite for using Dify. If you don't have one yet, please apply for one.
<figure><img src="../.gitbook/assets/OpenAI API Key.png" alt=""><figcaption><p>Enter your OpenAI API Key</p></figcaption></figure>
<figure><img src="../../.gitbook/assets/OpenAI API Key.png" alt=""><figcaption><p>Enter your OpenAI API Key</p></figcaption></figure>
After creating an application or selecting an existing one, you will arrive at an application overview page showing the application's profile. You can directly access your WebApp or check the API status here, as well as enable or disable them.


@ -1,4 +1,4 @@
# LLMs-use-FAQ
# FAQ
### 1. How to choose a basic model?
@ -89,7 +89,7 @@ There are two potential solutions if the error "Invalid token" appears:
### 13. What are the size limits for uploading knowledge documents?
The maximum size for a single document upload is currently 15MB. There is also a limit of 100 total documents. These limits can be adjusted if you are using a local deployment. Refer to the [documentation](install-faq.md#11.-how-to-solve-the-size-and-quantity-limitations-for-uploading-dataset-documents-in-the-local-depl) for details on changing the limits.
The maximum size for a single document upload is currently 15MB. There is also a limit of 100 total documents. These limits can be adjusted if you are using a local deployment. Refer to the [documentation](../../getting-started/install-self-hosted/install-faq.md#11.-how-to-solve-the-size-and-quantity-limitations-for-uploading-dataset-documents-in-the-local-depl) for details on changing the limits.
### 14. Why does Claude still consume OpenAI credits when using the Claude model?


@ -1,4 +1,4 @@
# Conversation Application
# Chat App
Conversation applications hold a continuous, multi-turn dialogue with the user in a question-and-answer format.
@ -16,13 +16,13 @@ Here, we use a interviewer application as an example to introduce the way to com
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select **"Chat App"** as the application type.
<figure><img src="../../.gitbook/assets/image (32).png" alt=""><figcaption><p>Create Application</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (32).png" alt=""><figcaption><p>Create Application</p></figcaption></figure>
#### Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “**Prompt Eng.**” to compose the application.
<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (2) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
**2.1 Fill in Prompts**
@ -34,33 +34,33 @@ The prompt we are filling in here is:
>
> When I am ready, you can start asking questions.
![](<../../.gitbook/assets/image (38).png>)
![](<../../../.gitbook/assets/image (38).png>)
For a better experience, we will add an opening dialogue: `"Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"`
To add the opening dialogue, click the "Add Feature" button in the upper left corner, and enable the "Conversation Opener" feature:
<figure><img src="../../.gitbook/assets/image (21).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (21).png" alt=""><figcaption></figcaption></figure>
And then edit the opening remarks:
![](<../../.gitbook/assets/image (15).png>)
![](<../../../.gitbook/assets/image (15).png>)
**2.2 Adding Context**
If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../../advanced/datasets/) feature. Click the "Add" button in the context to add a knowledge base.
If an application wants to generate content based on private contextual conversations, it can use our [knowledge](../../../features/datasets/) feature. Click the "Add" button in the context to add a knowledge base.
![](<../../.gitbook/assets/image (9).png>)
![](<../../../.gitbook/assets/image (9).png>)
**2.3 Debugging**
We fill in the user input on the right side and debug the input content.
![](<../../.gitbook/assets/image (11).png>)
![](<../../../.gitbook/assets/image (11).png>)
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
![](<../../.gitbook/assets/image (29).png>)
![](<../../../.gitbook/assets/image (29).png>)
We support the GPT-4 model.
@ -72,6 +72,6 @@ After debugging the application, click the **"Publish"** button in the upper rig
On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application. Click the "Share" button to get the sharing link address. Click the "Settings" button to set the shared application information.
<figure><img src="../../.gitbook/assets/image (47).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (47).png" alt=""><figcaption></figcaption></figure>
If you want to customize the shared application, you can fork our open-source [WebApp template](https://github.com/langgenius/webapp-conversation). Based on the template, you can modify the application to meet your specific needs and style requirements.


@ -16,13 +16,13 @@ Here, we use a translation application as an example to introduce the way to com
Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Text Generator" as the application type.
<figure><img src="../../.gitbook/assets/image (28).png" alt=""><figcaption><p>Create Application</p></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (28).png" alt=""><figcaption><p>Create Application</p></figcaption></figure>
#### Step 2: Compose the Application
After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “**Prompt Eng.**” to compose the application.
<figure><img src="../../.gitbook/assets/image (50).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (50).png" alt=""><figcaption></figcaption></figure>
**2.1 Fill in Prefix Prompts**
@ -30,35 +30,29 @@ Prompts are used to give a series of instructions and constraints to the AI resp
The prompt we are filling in here is: `Translate the content to: {{language}}. The content is as follows:`
![](<../../.gitbook/assets/image (7).png>)
![](<../../../.gitbook/assets/image (7).png>)
**2.2 Adding Context**
If the application wants to generate content based on private contextual conversations, our [knowledge](../../advanced/datasets/) feature can be used. Click the "Add" button in the context to add a knowledge base.
![](<../../.gitbook/assets/image (12).png>)
If the application wants to generate content based on private contextual conversations, our [knowledge](../../../features/datasets/) feature can be used. Click the "Add" button in the context to add a knowledge base.
![](<../../../.gitbook/assets/image (12).png>)
**2.3 Adding Feature: Generate more like this**

Generate more like this allows you to generate multiple texts at once, which you can edit and continue generating from. Click the "Add Feature" button in the upper left corner to enable this feature.
<figure><img src="../../.gitbook/assets/image (35).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (35).png" alt=""><figcaption></figcaption></figure>
**2.4 Debugging**
We debug on the right side by entering variables and querying content. Click the "Run" button to view the results of the operation.
![](<../../.gitbook/assets/image (17).png>)
![](<../../../.gitbook/assets/image (17).png>)
If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
![](<../../.gitbook/assets/image (36).png>)
![](<../../../.gitbook/assets/image (36).png>)
**2.5 Publish**
@ -68,9 +62,6 @@ After debugging the application, click the **"Publish"** button in the upper rig
You can find the sharing address of the application on the overview page. Click the "Preview" button to preview the shared application. Click the "Share" button to obtain the sharing link address. Click the "Settings" button to set the information of the shared application.
<figure><img src="../../.gitbook/assets/image (52).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (52).png" alt=""><figcaption></figcaption></figure>
If you want to customize the application shared externally, you can fork our open-source [WebApp template](https://github.com/langgenius/webapp-text-generator). Based on the template, you can modify the application to meet your specific situation and style requirements.


@ -0,0 +1,2 @@
# Use Cases


@ -1,4 +1,4 @@
# How to Build an Notion AI Assistant Based on Your Own Notes?
# Notion AI Assistant Based on Your Own Notes
### Intro[](https://wsyfin.com/notion-dify#intro) <a href="#intro" id="intro"></a>
@ -26,7 +26,7 @@ The process to train a Notion AI assistant is relatively straightforward. Just f
4. Start training.
5. Create your own AI application.
#### 1. Login to dify[](https://wsyfin.com/notion-dify#1-login-to-dify) <a href="#1-login-to-dify" id="1-login-to-dify"></a>
#### 1. Login to dify[](https://wsyfin.com/notion-dify#1-login-to-dify) <a href="#id-1-login-to-dify" id="id-1-login-to-dify"></a>
Click [here](https://dify.ai/) to log in to Dify. You can conveniently log in using your GitHub or Google account.
@ -58,7 +58,7 @@ Select the pages you want to synchronize with Dify, and press the "Allow access"
<figure><img src="https://pan.wsyfin.com/f/M8Xtz/connect-with-notion-4.png" alt=""><figcaption></figcaption></figure>
#### 4. Start training[](https://wsyfin.com/notion-dify#4-start-training) <a href="#4-start-training" id="4-start-training"></a>
#### 4. Start training[](https://wsyfin.com/notion-dify#4-start-training) <a href="#id-4-start-training" id="id-4-start-training"></a>
Specify the pages the AI needs to study, enabling it to comprehend the content within this section of Notion. Then click the "Next" button.
@ -72,7 +72,7 @@ Enjoy your coffee while waiting for the training process to complete.
![train-3](https://pan.wsyfin.com/f/PN9F3/train-3.png)
#### 5. Create Your AI application[](https://wsyfin.com/notion-dify#5-create-your-ai-application) <a href="#5-create-your-own-ai-application" id="5-create-your-own-ai-application"></a>
#### 5. Create Your AI application[](https://wsyfin.com/notion-dify#5-create-your-ai-application) <a href="#id-5-create-your-own-ai-application" id="id-5-create-your-own-ai-application"></a>
You must create an AI application and link it with the knowledge you've recently created.
@ -90,11 +90,11 @@ For example, if your Notion notes focus on problem-solving in software developme
_I want you to act as an IT Expert in my Notion workspace, using your knowledge of computer science, network infrastructure, Notion notes, and IT security to solve the problems_.
<figure><img src="../.gitbook/assets/image (40).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (40).png" alt=""><figcaption></figcaption></figure>
It's recommended to have the AI open with a starter sentence, giving users a clue about what they can ask. Additionally, activating the 'Speech to Text' feature allows users to interact with your AI assistant using their voice.
<figure><img src="../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/image (3) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
Finally, click the "Publish" button at the top right of the page. Now you can click the public URL in the "Overview" section to converse with your personalized AI assistant!
@ -110,19 +110,19 @@ Click the "API Reference" button on the page of Overview page. You can refer to
![using-api-1](https://pan.wsyfin.com/f/wp0Cy/using-api-1.png)
#### 1. Generate API Secret Key[](https://wsyfin.com/notion-dify#1-generate-api-secret-key) <a href="#1-generate-api-secret-key" id="1-generate-api-secret-key"></a>
#### 1. Generate API Secret Key[](https://wsyfin.com/notion-dify#1-generate-api-secret-key) <a href="#id-1-generate-api-secret-key" id="id-1-generate-api-secret-key"></a>
For security reasons, it is recommended to create a new API secret key to access your AI application.
![using-api-2](https://pan.wsyfin.com/f/xk2Fx/using-api-2.png)
#### 2. Retrieve Conversation ID[](https://wsyfin.com/notion-dify#2-retrieve-conversation-id) <a href="#2-retrieve-conversation-id" id="2-retrieve-conversation-id"></a>
#### 2. Retrieve Conversation ID[](https://wsyfin.com/notion-dify#2-retrieve-conversation-id) <a href="#id-2-retrieve-conversation-id" id="id-2-retrieve-conversation-id"></a>
After chatting with your AI application, you can retrieve the conversation ID from the "Logs & Ann." page.
![using-api-3](https://pan.wsyfin.com/f/yPXHL/using-api-3.png)
#### 3. Invoke API[](https://wsyfin.com/notion-dify#3-invoke-api) <a href="#3-invoke-api" id="3-invoke-api"></a>
#### 3. Invoke API[](https://wsyfin.com/notion-dify#3-invoke-api) <a href="#id-3-invoke-api" id="id-3-invoke-api"></a>
You can run the example request code from the API documentation to invoke your AI application in the terminal.
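If you prefer invoking the API from TypeScript rather than the terminal, the request can be sketched as below. The endpoint path, parameter names, and key are illustrative assumptions; copy the authoritative values from your app's API Reference page:

```typescript
// All values below are placeholders; copy the real base URL and secret
// key from the "API Reference" page of your own application.
interface ChatRequest {
  url: string;
  init: {
    method: string;
    headers: Record<string, string>;
    body: string;
  };
}

function buildChatRequest(
  apiKey: string,
  query: string,
  conversationId: string = ""
): ChatRequest {
  return {
    url: "https://api.dify.ai/v1/chat-messages", // assumed base URL
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        inputs: {},
        query,
        user: "demo-user",               // any stable end-user identifier
        conversation_id: conversationId, // from the "Logs & Ann." page
        response_mode: "blocking",
      }),
    },
  };
}

// const { url, init } = buildChatRequest("app-...", "Hello!");
// const answer = await fetch(url, init).then((r) => r.json());
```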


@ -1,59 +1,57 @@
# Create a Midjoureny Prompt Bot Without Code in Just a Few Minutes
# Midjourney Prompt Bot
via [@op7418](https://twitter.com/op7418) on Twitter
I recently tried out a natural language programming tool called Dify, developed by [@goocarlos](https://twitter.com/goocarlos). It allows someone without coding knowledge to create a web application just by writing prompts. It even generates the API for you, making it easy to deploy your application on your preferred platform.
The application I created using Dify took me only 20 minutes, and the results were impressive. Without Dify, it might have taken me much longer to achieve the same outcome. The specific functionality of the application is to generate Midjourney prompts based on short input topics, assisting users in quickly filling in common Midjourney commands. In this tutorial, I will walk you through the process of creating this application to familiarize you with the platform.
Dify offers two types of applications: conversational applications similar to ChatGPT, which involve multi-turn dialogue, and text generation applications that directly generate text content with the click of a button. Since we want to create a Midjourney prompt bot, we'll choose the text generator.
You can access Dify here: https://dify.ai/
<figure><img src="../.gitbook/assets/create-app.png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/create-app.png" alt=""><figcaption></figcaption></figure>
Once you've created your application, the dashboard page will display some data monitoring and application settings. Click on "Prompt Engineering" on the left, which is the main working page.
<figure><img src="../.gitbook/assets/screenshot-20230802-114025.png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/screenshot-20230802-114025.png" alt=""><figcaption></figcaption></figure>
On this page, the left side is for prompt settings and other functions, while the right side provides a real-time preview and trial of your created content. The prefix prompt is prepended to each user input and instructs the GPT model how to process the user's input.
<figure><img src="../.gitbook/assets/WechatIMG38.jpg" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/WechatIMG38.jpg" alt=""><figcaption></figcaption></figure>
Take a look at my prefix prompt structure. The first part instructs GPT to output a photo description in the structure that follows. The second part serves as the template for generating the prompt, mainly consisting of elements like 'Color photo of the theme', 'Intricate patterns', 'Stark contrasts', 'Environmental description', 'Camera model', 'Lens focal length description related to the input content', 'Composition description relative to the input content', and 'The names of four master photographers'. This constitutes the main content of the prompt. In theory, you can now save this, enter the theme you want to generate in the preview area on the right, and the corresponding prompt will be generated.
<figure><img src="../.gitbook/assets/pre-prompt.png" alt=""><figcaption></figcaption></figure>
<figure><img src="../../../.gitbook/assets/pre-prompt.png" alt=""><figcaption></figcaption></figure>
You may have noticed the "\{{proportion\}}" and "\{{version\}}" at the end. These are variables used to pass user-selected information. On the right side, users are required to choose image proportions and model versions, and these two variables help carry that information to the end of the prompt. Let's see how to set them up.
<figure><img src="../../../.gitbook/assets/screenshot-20230802-145326.png" alt=""><figcaption></figcaption></figure>
Our goal is to fill in the user's selected information at the end of the prompt, making it easy for users to copy without having to rewrite or memorize these commands. For this, we use the variable function.
Variables allow us to dynamically incorporate the user's form-filled or selected content into the prompt. For example, I've created two variables: one represents the image proportion, and the other represents the model version. Click the "Add" button to create the variables.
<figure><img src="../../../.gitbook/assets/WechatIMG157.jpg" alt=""><figcaption></figcaption></figure>
After creating a variable, fill in the variable key and field name. The variable key must be in English. Marking a field as optional means users are not required to fill it in. Next, click "Settings" in the action bar to configure the variable's content.
<figure><img src="../../../.gitbook/assets/WechatIMG158.jpg" alt=""><figcaption></figcaption></figure>
Variables can be of two types: text variables, where users type content manually, and select variables, where users choose from predefined options. Since we want to avoid manual commands, we'll use the dropdown option and add the required choices.
<figure><img src="../../../.gitbook/assets/app-variables.png" alt=""><figcaption></figcaption></figure>
Now, let's use the variables. We enclose each variable key in double curly brackets, like `{{proportion}}`, and add it to the prefix prompt. Since we want GPT to output the user-selected content as is, we include the phrase "Producing the following English photo description based on user input" in the prompt.
<figure><img src="../../../.gitbook/assets/WechatIMG160.jpg" alt=""><figcaption></figcaption></figure>
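To make the mechanics concrete, here is a minimal, hypothetical sketch of what the `{{variable}}` substitution amounts to. Dify performs this step for you when it assembles the final prompt; the helper, template, and values below are made up for illustration only:

```python
# A minimal sketch of {{variable}} substitution (hypothetical helper;
# Dify does this automatically when assembling the prompt).
def fill_prompt(template: str, variables: dict) -> str:
    for key, value in variables.items():
        template = template.replace("{{" + key + "}}", value)
    return template

# "proportion" and "version" are the variables created in this example;
# "theme" stands in for the user's free-form input.
prefix = "Color photo of {{theme}}, intricate patterns --ar {{proportion}} --v {{version}}"
print(fill_prompt(prefix, {
    "theme": "a lighthouse at dusk",
    "proportion": "16:9",
    "version": "5.2",
}))
# → Color photo of a lighthouse at dusk, intricate patterns --ar 16:9 --v 5.2
```

This is also why a select-type variable is safer than free text here: the substituted value is always one of the predefined options.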
However, there's still a chance that GPT might modify the variable content. To address this, lower the diversity in the model settings on the right by reducing the temperature, which makes the model less likely to alter the variable content. Check the tooltips for the meanings of the other parameters.
<figure><img src="../../../.gitbook/assets/screenshot-20230802-141913.png" alt=""><figcaption></figcaption></figure>
With these steps, your application is now complete. After testing and ensuring there are no issues with the output, click the "Publish" button in the upper right corner to release your application. You and users can access your application through the publicly available URL. You can also customize the application name, introduction, icon, and other details in the settings.
<figure><img src="../../../.gitbook/assets/screenshot-20230802-142407.png" alt=""><figcaption></figcaption></figure>
That's how you create a simple AI application using Dify. You can also deploy your application on other platforms or modify its UI using the generated API. Additionally, Dify supports uploading your own data, such as building a customer service bot to assist with product-related queries. This concludes the tutorial, and a special thanks to @goocarlos for creating such a fantastic product.


@ -1,4 +1,4 @@
# AI ChatBot with Business Data
AI-powered customer service is becoming a standard feature for business websites, and it is getting easier to implement with a high degree of customization. The following guide shows how to create AI-powered customer service for your website in just a few minutes using Dify.
@ -21,7 +21,7 @@ If you want to build an AI Chatbot based on the company's existing knowledge bas
3. Select the cleaning method.
4. Click \[Save and Process]; the processing takes only a few seconds.
<figure><img src="../../../.gitbook/assets/image (41).png" alt=""><figcaption></figcaption></figure>
### Create an AI application and give it instructions
@ -39,30 +39,30 @@ In this case, we assign a role to the AI:
> Opening remarks: Hey \{{username\}}, I'm Bob☀, the first AI member of Dify. You can discuss with me any questions related to Dify's products, team, and even LLMOps.
<figure><img src="../../../.gitbook/assets/image (53).png" alt=""><figcaption></figcaption></figure>
### Debug the AI Chatbot's performance and publish
After completing the setup, send messages to the bot on the right side of the page to check whether its performance meets expectations, then click "Publish" to get your AI chatbot.
<figure><img src="../../../.gitbook/assets/image (56).png" alt=""><figcaption></figcaption></figure>
### Embed the AI Chatbot into your front-end page
This step embeds the prepared AI chatbot into your official website. Click \[Overview] -> \[Embedded], select the script tag method, and copy the script code into the \<head> or \<body> tag of your website. If you are not a technical person, ask the developer responsible for the website to paste the code and update the page.
<figure><img src="../../../.gitbook/assets/image (34).png" alt=""><figcaption></figcaption></figure>
1. Paste the copied code into the target location on your website.
<figure><img src="../../../.gitbook/assets/image (26).png" alt=""><figcaption></figcaption></figure>
2. Update your official website, and you will have AI-powered customer service backed by your business data. Try it out to see the effect.
<figure><img src="../../../.gitbook/assets/image (19).png" alt=""><figcaption></figcaption></figure>
The above shows how Dify's own AI chatbot, Bob, is embedded into the Dify official website. You can also use more of Dify's features to enhance the chatbot's performance, such as variable settings that let users fill in necessary information, like their name or the specific product they use, before the interaction starts.
Welcome to explore in Dify together!
<figure><img src="../../../.gitbook/assets/image (25).png" alt=""><figcaption></figcaption></figure>


@ -0,0 +1,2 @@
# Launching Dify Apps


@ -15,7 +15,7 @@ Dify offers a "Backend-as-a-Service" API, providing numerous benefits to AI appl
Choose an application, and find the API Access in the left-side navigation of the Apps section. On this page, you can view the API documentation provided by Dify and manage credentials for accessing the API.
<figure><img src="../../../.gitbook/assets/API Access.png" alt=""><figcaption><p>API document</p></figcaption></figure>
You can create multiple access credentials for an application to deliver to different users or developers. This means that API users can use the AI capabilities provided by the application developer, but the underlying Prompt engineering, knowledge, and tool capabilities are encapsulated.
@ -31,7 +31,7 @@ These applications are used to generate high-quality text, such as articles, sum
You can find the API documentation and example requests for this application in **Applications -> Access API**.
For example, here is a sample call to the API for text generation:
```
curl --location --request POST 'https://api.dify.ai/v1/completion-messages' \
@ -44,11 +44,9 @@ curl --location --request POST 'https://api.dify.ai/v1/completion-messages' \
}'
```
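The same request can be issued from Python's standard library. The body below follows the common shape of a completion-messages call (`inputs`, `response_mode`, `user`); treat the exact field names and the API key as assumptions, and rely on the API documentation generated for your own app:

```python
import json
import urllib.request

API_KEY = "your-api-key"  # placeholder: create a real key under API Access

# Assumed request body; check your app's generated API docs for the
# authoritative field names.
payload = {
    "inputs": {"query": "Write a one-sentence product tagline."},
    "response_mode": "blocking",  # or "streaming"
    "user": "user-123",           # a stable identifier for the end user
}

req = urllib.request.Request(
    "https://api.dify.ai/v1/completion-messages",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["answer"])
```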
### Conversational applications
Suitable for most scenarios, conversational applications engage in continuous dialogue with users in a question-and-answer format. To start a conversation, call the chat-messages API and maintain the session by continuously passing in the returned conversation\_id.
You can find the API documentation and example requests for this application in **Applications -> Access API**.
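As a sketch of session handling (again, field names are an assumption to be checked against your app's generated API docs), each follow-up request passes back the `conversation_id` returned by the previous response:

```python
def build_chat_payload(query: str, user: str, conversation_id: str = "") -> dict:
    """Request body for the chat-messages endpoint: an empty
    conversation_id starts a new session; a previously returned id
    continues that session."""
    return {
        "inputs": {},
        "query": query,
        "response_mode": "blocking",
        "user": user,
        "conversation_id": conversation_id,
    }

# First turn: no conversation_id yet, so a new session is created.
first = build_chat_payload("What can Dify do?", user="user-123")

# After receiving the response, reuse its conversation_id to continue:
# follow_up = build_chat_payload("Tell me more", "user-123",
#                                conversation_id=response["conversation_id"])
```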


@ -1,4 +1,4 @@
# Quickstart
One of the benefits of creating AI applications with Dify is that you can launch a user-friendly Web application in just a few minutes, based on your Prompt orchestration.
@ -9,7 +9,7 @@ One of the benefits of creating AI applications with Dify is that you can launch
In the application overview page, you can find a card for the AI site (WebApp). Simply enable WebApp access to get a shareable link for your users.
<figure><img src="../../.gitbook/assets/share your App.png" alt=""><figcaption><p>Share your WebApp</p></figcaption></figure>
We provide a sleek WebApp interface for both of the following applications:
@ -31,16 +31,15 @@ Click the settings button on the WebApp card to configure some options for the A
Dify supports embedding your AI application into your business website. With this capability, you can create AI customer service and business knowledge Q\&A applications with business data on your official website within minutes. Click the embed button on the WebApp card, copy the embed code, and paste it into the desired location on your website.
* For the iframe tag:
Copy the iframe code and paste it into the tags on your website used to display the AI application (such as `<div>` or `<section>`).
* For the script tag:
Copy the script code and paste it into the `<head>` or `<body>` tags of your website.
<figure><img src="../../.gitbook/assets/image (46).png" alt=""><figcaption></figcaption></figure>
For example, if you paste the script code into your official website, you will get an AI chatbot on your website:
<figure><img src="../../.gitbook/assets/image (42).png" alt=""><figcaption></figcaption></figure>


@ -0,0 +1,2 @@
# Using Dify Apps


@ -1,10 +1,10 @@
# Further Chat App Settings
Chat in Explore is a conversational application used to explore the boundaries of Dify's capabilities.
When we talk to large language models, we often find that their answers are outdated or invalid. This is due to the model's old training data and its lack of network access. On top of the large model, Chat uses agent capabilities and tools to give the model the ability to run real-time online queries.
<figure><img src="../../.gitbook/assets/image (61).png" alt=""><figcaption></figcaption></figure>
Chat supports the use of plugins and knowledge.
@ -32,15 +32,15 @@ Currently we support the following plugins:
We can choose the plugins needed for this conversation before the conversation starts.
<figure><img src="../../.gitbook/assets/image (4) (1).png" alt=""><figcaption></figcaption></figure>
If you use the Google search plugin, you need to configure the SerpAPI key.
<figure><img src="../../.gitbook/assets/image (31).png" alt=""><figcaption></figcaption></figure>
Once the key is configured, the entry looks like this:
<figure><img src="../../.gitbook/assets/image (18).png" alt=""><figcaption></figcaption></figure>
### Use knowledge
@ -48,10 +48,10 @@ Chat supports knowledge. After selecting the knowledge, the questions asked by t
We can select the knowledge needed for this conversation before the conversation starts.
<figure><img src="../../.gitbook/assets/image (5).png" alt=""><figcaption></figcaption></figure>
### The thinking process
The thinking process refers to the process of the model using plugins and knowledge. We can see the thought process in each answer.
<figure><img src="../../.gitbook/assets/image (23).png" alt=""><figcaption></figcaption></figure>


@ -1,4 +1,4 @@
# Chat App
Conversational applications use a question-and-answer model to maintain a dialogue with the user. They support the following capabilities (please confirm that the corresponding features are enabled when the application is configured):
@ -12,33 +12,33 @@ Conversational applications use a question-and-answer model to maintain a dialog
If the application is configured with variables, you need to fill in the information according to the prompts before entering the dialog window:
<figure><img src="../../.gitbook/assets/image (45).png" alt=""><figcaption></figcaption></figure>
Fill in the necessary content and click the "Start Chat" button to start chatting.
<figure><img src="../../.gitbook/assets/image (8).png" alt=""><figcaption></figcaption></figure>
Hover over the AI's answer to copy the content of the conversation or to give the answer a "like" or "dislike".
<figure><img src="../../.gitbook/assets/image (30).png" alt=""><figcaption></figcaption></figure>
### Conversation creation, pinning and deletion
Click the "New Conversation" button to start a new conversation. Hover over a conversation to "pin" or "delete" it.
<figure><img src="../../.gitbook/assets/image (43).png" alt=""><figcaption></figcaption></figure>
### Conversation remarks
If the "Conversation remarks" feature is enabled in the application configuration, the AI application will automatically send the first message when a new conversation is created:
<figure><img src="../../.gitbook/assets/image (48).png" alt=""><figcaption></figcaption></figure>
### Follow-up
If the "Follow-up" feature is enabled in the application configuration, the system will automatically generate three related question suggestions after each answer:
<figure><img src="../../.gitbook/assets/image (16).png" alt=""><figcaption></figcaption></figure>
### Speech to text
@ -46,10 +46,10 @@ If the "Speech to Text" function is enabled during application programming, you
_Please make sure that the device environment you are using is authorized to use the microphone._
<figure><img src="../../.gitbook/assets/image (39).png" alt=""><figcaption></figcaption></figure>
### Citations and Attributions
If the "Citations and Attributions" feature is enabled in the application configuration, answers will automatically show the sources of the knowledge documents they quote.
<figure><img src="../../.gitbook/assets/image (3).png" alt=""><figcaption></figcaption></figure>


@ -2,8 +2,6 @@
A text generation application automatically generates high-quality text from the prompts provided by the user. It can produce various types of text, such as article summaries and translations.
Text generation applications support the following features:
1. Run it once.
@ -13,13 +11,11 @@ Text generation applications support the following features:
Let's introduce them separately.
### Run it once
Enter the query content, click the run button, and the result will be generated on the right, as shown in the following figure:
<figure><img src="../../.gitbook/assets/image (57).png" alt=""><figcaption></figcaption></figure>
In the generated results section, click the "Copy" button to copy the content to the clipboard. Click the "Save" button to save the content. You can see the saved content in the "Saved" tab. You can also "like" and "dislike" the generated content.
@ -33,17 +29,17 @@ In the above scenario, the batch operation function is used, which is convenient
Click the "Run Batch" tab to enter the batch run page.
<figure><img src="../../.gitbook/assets/image (27).png" alt=""><figcaption></figcaption></figure>
#### Step 2 Download the template and fill in the content
Click the Download Template button to download the template. Edit the template, fill in the content, and save as a `.csv` file.
<figure><img src="../../.gitbook/assets/image (13).png" alt=""><figcaption></figcaption></figure>
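If you prefer to fill the template programmatically, a short script can write the rows for you. The single `query` column here is an assumption standing in for whatever variable columns your downloaded template actually contains:

```python
import csv

# Hypothetical batch input: one row per generation, one column per app
# variable (here just "query"; match your template's real headers).
rows = [
    {"query": "Summarize the benefits of remote work."},
    {"query": "Summarize the risks of technical debt."},
]

with open("batch_run.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["query"])
    writer.writeheader()
    writer.writerows(rows)
```

The resulting `batch_run.csv` can then be uploaded in the next step.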
#### Step 3 Upload the file and run
<figure><img src="../../.gitbook/assets/image (55).png" alt=""><figcaption></figcaption></figure>
If you need to export the generated content, click the "Download" button in the upper right corner to export it as a `.csv` file.
@ -53,10 +49,10 @@ If you need to export the generated content, you can click the download "button"
Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.
<figure><img src="../../.gitbook/assets/image (6).png" alt=""><figcaption></figcaption></figure>
### Generate more similar results
If the "More similar" feature is enabled in the application configuration, clicking the "More similar" button in the web application generates content similar to the current result, as shown below:
<figure><img src="../../.gitbook/assets/image (22).png" alt=""><figcaption></figcaption></figure>


@ -1,15 +1,21 @@
# Discovery
## Template application
In **Explore > Discovery**, Dify provides a set of commonly used template applications. These apps cover translation, writing, programming, and assistant use cases.
<figure><img src="../explore/images/explore-app.jpg" alt=""><figcaption></figcaption></figure>
To use a template application, click its "Add to Workspace" button. The app will then be available in the workspace on the left.
<figure><img src="../explore/images/creat-customize-app.jpg" alt=""><figcaption></figcaption></figure>
If you want to modify a template to create a new application, click the "Customize" button of the template.
## Workspace
The workspace is the application's navigation. Click an application in the workspace to use the application directly.
<figure><img src="../explore/images/workspace.jpg" alt=""><figcaption></figcaption></figure>
Apps in the workspace include your own apps and apps added to the workspace by other team members.