GITBOOK-33: change request with no subject merged in GitBook
parent 29e9992fd0
commit b43cb54e47
83
en/README.md
@@ -1,64 +1,35 @@
---
description: >-
  The name "Dify" is derived from the two words "Define" and "Modify". It
  represents the vision to help developers continuously improve their AI
  applications. "Dify" can be understood as "Do it for you".
---

# Welcome to Dify!

Dify is an open-source large language model (LLM) application development platform. It combines the concepts of Backend-as-a-Service and LLMOps to enable developers to quickly build production-grade generative AI applications. Even non-technical personnel can participate in the definition and data operations of AI applications.

Dify integrates the key technology stacks required to build LLM applications: support for hundreds of models, an intuitive Prompt orchestration interface, high-quality RAG engines, and a flexible Agent framework, all exposed through a set of easy-to-use interfaces and APIs. This saves developers the time of reinventing the wheel and lets them focus on innovation and business needs.

### Why Use Dify?

You can think of libraries like LangChain as toolboxes with hammers and nails. In comparison, Dify provides a more production-ready, complete solution: think of Dify as a scaffolding system with refined engineering design and software testing.

Importantly, Dify is **open source**, co-created by a professional full-time team and the community. You can self-deploy capabilities similar to the Assistants API and GPTs on top of any model, keeping full control of your data with flexible security, all through an easy-to-use interface.

> Our community users describe Dify as simple, restrained, and rapidly iterating.
>
> \- Lu Yu, Dify.AI CEO

We hope the above information and this guide help you understand this product. We believe Dify is made for you.

### What Can Dify Do?

{% hint style="info" %}
Tips: Dify is currently in beta preview. If there are any inconsistencies between the documentation and the product, please refer to the actual product experience.

The name Dify comes from Define + Modify, referring to defining and continuously improving your AI applications. It's made for you.
{% endhint %}

If you are amazed and excited by the rapid development of LLM technologies such as GPT-4 and can't wait to use them for something useful, but still have all these confusing questions in your mind:

* How do I "train" a model based on my own content?
* How do I let AI know about things that happened after 2021?
* How do I prevent AI from making things up in front of users?
* What do fine-tuning and embedding mean?

Well, Dify is just what you need.

**Dify aims to enable developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.**

> "We shape our tools, and then our tools shape us." - Marshall McLuhan

You can quickly build a web app with Dify, and the generated frontend code can be hosted on Dify. If you want to develop further on top of this web app, you can get the templates from GitHub and deploy them anywhere (for example, Vercel or your own server). Alternatively, you can build your own web frontend or mobile app on top of the Web API, saving yourself the backend development work.

Moreover, the core concept of Dify is creating, configuring, and improving your application in a visual interface. LLM-based application development has a continuous-improvement lifecycle: you may need to make the AI answer correctly based on your own content, improve its accuracy and narrative style, or even download subtitles from a YouTube video to use as context.

This process involves logic design, context enhancement, data preparation, and other work that can be challenging without the right tools. We call this process LLMOps. Dify serves several typical scenarios:

* **Startups** - Quickly turn your AI ideas into reality, accelerating both success and failure. In the real world, dozens of teams have already built MVPs on Dify to win funding or customer orders.
* **Integrating LLMs into existing business** - Enhance the capabilities of current applications by introducing LLMs. Access Dify's RESTful APIs to decouple Prompts from business logic, and use Dify's management interface to track data, costs, and usage while continuously improving performance.
* **Enterprise LLM infrastructure** - Some banks and internet companies deploy Dify as an internal LLM gateway, accelerating the adoption of GenAI technologies while enabling centralized governance.
* **Exploring LLM capabilities** - Even as a tech enthusiast, you can easily practice Prompt engineering and Agent technologies through Dify. More than 60,000 developers built their first app on Dify before GPTs even came out.

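Because Dify exposes its features over RESTful APIs, a thin client can assemble requests without any SDK. The sketch below builds a request shaped like Dify's `chat-messages` endpoint; the URL, header, and field names follow the published API shape but are illustrative here and should be verified against the current API reference.

```python
def build_chat_request(api_key, query, user, base_url="https://api.dify.ai/v1"):
    """Assemble URL, headers, and body for a chat call.

    The endpoint and field names follow Dify's published API shape, but
    treat them as illustrative and verify against the official docs.
    """
    url = f"{base_url}/chat-messages"
    headers = {
        "Authorization": f"Bearer {api_key}",   # per-app API key
        "Content-Type": "application/json",
    }
    body = {
        "inputs": {},                  # values for Prompt variables, if any
        "query": query,                # the end user's message
        "user": user,                  # stable end-user identifier
        "response_mode": "streaming",  # or "blocking"
    }
    return url, headers, body

url, headers, body = build_chat_request("app-xxxx", "What is Dify?", "user-123")
# then e.g. requests.post(url, headers=headers, json=body)
```

Keeping the Prompt inside Dify and calling it through this API is what decouples prompt iteration from your business code.
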
### Next Steps

* Check out applications created with Dify
* Quickly create applications in the cloud
* Install Dify on your own server

> "Only a few companies will have the budget to build and manage large language models (LLMs) like GPT-3, but there will be many billion-dollar 'second layer' companies that emerge over the next decade."
>
> \- Sam Altman

Just as LLM technology is rapidly evolving, Dify is a constantly improving product, and there may be discrepancies between this document and the actual product. You can share your thoughts with us on [GitHub](https://github.com/langgenius) or Discord.

### Q\&A

**Q: What can I do with Dify?**

A: Dify is a simple yet powerful natural-language programming tool. You can use it to build commercial-grade applications and personal assistants. If you want to develop applications yourself, Dify also saves you the backend work of integrating with OpenAI, and the increasingly rich visual operation capabilities we provide let you continuously improve and train your GPT model.

**Q: How do I use Dify to train my own models?**

A: A valuable application consists of Prompt Engineering, context enhancement, and fine-tuning. We have created a hybrid programming approach that combines prompts with programming languages (similar to a template engine). You can easily embed long texts or grab the subtitles of a YouTube video entered by the user; these are submitted to the LLM as context. We pay great attention to application operability: the data your users generate while using the app can be analyzed, annotated, and used for continuous training. Without good tooling, these steps can consume a lot of your time.

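The prompt-plus-template-engine hybrid described above can be sketched in a few lines. This is an illustrative stand-in, not Dify's actual template syntax: retrieved context snippets and the user's question are substituted into a prompt template before it is sent to the LLM.

```python
from string import Template

# Illustrative prompt template; Dify's real orchestration syntax differs.
PROMPT = Template(
    "Use the context below to answer the question.\n"
    "Context:\n$context\n"
    "Question: $query\n"
)

def render_prompt(context_snippets, query):
    """Join retrieved snippets and substitute them into the template."""
    context = "\n---\n".join(context_snippets)
    return PROMPT.substitute(context=context, query=query)

prompt = render_prompt(
    ["Dify combines Backend-as-a-Service and LLMOps."],
    "What is Dify?",
)
```

The same rendering step is where YouTube subtitles or embedded long texts would be injected as context.
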
**Q: What do I need to prepare to create my own application?**

A: Choose a model provider such as OpenAI. Our cloud version has a built-in trial of GPT-4, and you can also fill in your own API key. Then you can create an app based on prompts or your own context.

**Q: Can applications built with Dify maintain conversations?**

A: Yes. If you create a conversational application, it has built-in session-saving capabilities, supported in both the generated web app and the APIs.

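Session continuity over the API typically works by echoing a conversation identifier back on subsequent requests. The sketch below assumes a response shaped like Dify's `chat-messages` endpoint; the field names (`conversation_id`, `answer`) are assumptions to check against the current API docs, and the HTTP transport is faked so the flow is visible.

```python
class ChatClient:
    """Tiny client that threads a conversation id between requests."""

    def __init__(self, send):
        self.send = send           # injected transport: payload -> response dict
        self.conversation_id = None

    def ask(self, query, user="demo-user"):
        payload = {"query": query, "user": user, "inputs": {},
                   "response_mode": "blocking"}
        if self.conversation_id:   # reuse the id so the server keeps context
            payload["conversation_id"] = self.conversation_id
        reply = self.send(payload)
        self.conversation_id = reply.get("conversation_id")
        return reply["answer"]

# Fake transport standing in for the real HTTP call.
def fake_send(payload):
    return {"answer": f"echo: {payload['query']}", "conversation_id": "c-1"}

client = ChatClient(fake_send)
client.ask("Hello")
second = client.ask("And again")  # this request carries conversation_id
```

In the generated web apps this bookkeeping is handled for you; the pattern above only matters when you call the API directly.
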
**Q: What's the difference between LLMOps and MLOps?**

A: In the past, MLOps focused on letting developers train models from scratch, while LLMOps builds AI-native applications on top of powerful foundation models such as GPT-4. You can refer to this [article](https://blog.dify.ai/unleashing-the-power-of-llm-embeddings-with-datasets-revolutionizing-mlops/).

**Q: What interface languages are provided?**

A: English and Chinese are currently supported, and you are welcome to contribute language packs.

**Q: What is LangGenius?**

A: LangGenius was the product name before Dify's official launch. We are still updating all the documentation. The name "Dify" is derived from the two words "Define" and "Modify". It represents the vision to help developers continuously improve their AI applications. "Dify" can be understood as "Do it for you".

* Read [**Quick Start**](https://docs.dify.ai/application/creating-an-application) for an overview of Dify's application-building workflow.
* Learn how to [**self-deploy Dify**](https://docs.dify.ai/getting-started/install-self-hosted) on your own servers and [**integrate open-source models**](https://docs.dify.ai/advanced/model-configuration).
* Understand Dify's [**specifications and roadmap**](getting-started/readme/specifications-and-technical-features.md).
* [**Star us on GitHub**](https://github.com/langgenius/dify) and read our **Contributor Guidelines**.

@@ -3,6 +3,7 @@
## Getting Started

* [Welcome to Dify!](README.md)
* [Specifications and Technical Features](getting-started/readme/specifications-and-technical-features.md)
* [Cloud](getting-started/cloud.md)
* [Install (Self-hosted)](getting-started/install-self-hosted/README.md)
* [Docker Compose Deployment](getting-started/install-self-hosted/docker-compose.md)

@@ -0,0 +1,19 @@
---
description: >-
  For those already familiar with LLM application tech stacks, this document
  serves as a shortcut to understanding Dify's unique advantages.
---

# Specifications and Technical Features

We maintain transparent policies around product specifications so that decisions can be made with a complete understanding of the product. Such transparency not only benefits your technical selection, but also promotes deeper understanding within the community and encourages active contribution.

### Project Basics

<table data-header-hidden><thead><tr><th width="341"></th><th></th></tr></thead><tbody><tr><td>Established</td><td>March 2023</td></tr><tr><td>Open Source License</td><td>Apache License 2.0 with commercial licensing</td></tr><tr><td>Official R&#x26;D Team</td><td>Over 10 full-time employees</td></tr><tr><td>Community Contributors</td><td>Over 60 people</td></tr><tr><td>Backend Technology</td><td>Python/Flask/PostgreSQL</td></tr><tr><td>Frontend Technology</td><td>Next.js</td></tr><tr><td>Codebase Size</td><td>Over 130,000 lines</td></tr><tr><td>Release Frequency</td><td>On average once per week</td></tr></tbody></table>

### Technical Features

<table data-header-hidden><thead><tr><th width="240"></th><th></th></tr></thead><tbody><tr><td>LLM Inference Engines</td><td>Dify Runtime (LangChain removed since v0.4)</td></tr><tr><td>Commercial Models Supported</td><td><strong>10+</strong>, including OpenAI and Anthropic<br>New mainstream models are onboarded within 48 hours</td></tr><tr><td>MaaS Vendors Supported</td><td><strong>2</strong>, Hugging Face and Replicate</td></tr><tr><td>Local Model Inference Runtimes Supported</td><td><strong>4</strong>, Xorbits Inference (recommended), OpenLLM, LocalAI, ChatGLM</td></tr><tr><td>Multimodal Capabilities</td><td><p>ASR models</p><p>Rich-text models up to GPT-4V specs</p></td></tr><tr><td>Built-in App Types</td><td>Text generation, Conversational</td></tr><tr><td>Prompt-as-a-Service Orchestration</td><td><p>A widely praised visual orchestration interface: modify Prompts and preview the effects in one place.<br></p><p><strong>Orchestration Modes</strong></p><ul><li>Simple orchestration</li><li>Advanced orchestration</li><li>Assistant orchestration (launching Jan 2024)</li><li>Flow orchestration (Q1 2024)</li></ul><p><strong>Prompt Variable Types</strong></p><ul><li>String</li><li>Radio enum</li><li>External API</li><li>File (Jan 2024)</li></ul></td></tr><tr><td>RAG Features</td><td><p>Industry-first visual knowledge base management interface, supporting snippet previews and recall testing.</p><p><strong>Indexing Methods</strong></p><ul><li>Keywords</li><li>Text vectors</li><li>LLM-assisted question-snippet model</li></ul><p><strong>Retrieval Methods</strong></p><ul><li>Keywords</li><li>Text similarity matching</li><li>N choose 1</li><li>Multi-path recall</li></ul><p><strong>Recall Optimization</strong></p><ul><li>Re-rank models</li></ul></td></tr><tr><td>ETL Capabilities</td><td><p>Automated cleaning for TXT, Markdown, PDF, HTML, DOC, and CSV formats. An Unstructured service enables maximum format support.</p><p>Sync Notion docs as knowledge bases.</p></td></tr><tr><td>Vector Databases Supported</td><td>Qdrant (recommended), Weaviate, Zilliz</td></tr><tr><td>Agent Technologies</td><td><p>ReAct, Function Call.<br></p><p><strong>Tooling Support</strong></p><ul><li>Invoke OpenAI Plugin standard tools (Q4 2023)</li><li>Directly load OpenAPI Specification APIs as tools</li></ul><p><strong>Built-in Tools</strong></p><ul><li>3 tools</li></ul></td></tr><tr><td>Logging</td><td>Supported; annotations can be made based on logs</td></tr><tr><td>Annotation Reply</td><td>Based on human-annotated Q&#x26;As, used for similarity-based replies. Exportable as a data format for model fine-tuning.</td></tr><tr><td>Content Moderation</td><td>OpenAI Moderation or external APIs</td></tr><tr><td>Team Collaboration</td><td>Workspaces, multi-member management</td></tr><tr><td>API Specs</td><td>RESTful, covering most features</td></tr><tr><td>Deployment Methods</td><td>Docker, Helm</td></tr></tbody></table>
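The retrieval methods in the table above (keyword recall, text similarity, multi-path recall with re-ranking) can be illustrated with a deliberately simplified toy. The scoring functions below are stand-ins for real keyword indexes, embedding models, and re-rank models, not Dify's implementation.

```python
def keyword_recall(query, docs):
    """Return docs sharing at least one whole word with the query."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

def vector_recall(query, docs, top_k=2):
    """Stand-in for embedding similarity: character-set overlap ratio."""
    def sim(a, b):
        sa, sb = set(a.lower()), set(b.lower())
        return len(sa & sb) / max(len(sa | sb), 1)
    return sorted(docs, key=lambda d: sim(query, d), reverse=True)[:top_k]

def multi_path_recall(query, docs, rerank):
    """Union both recall paths, dedupe, then order by a re-rank score."""
    candidates = list(dict.fromkeys(keyword_recall(query, docs)
                                    + vector_recall(query, docs)))
    return sorted(candidates, key=lambda d: rerank(query, d), reverse=True)

docs = ["Dify supports RAG pipelines",
        "Helm charts deploy Dify",
        "Bananas are yellow"]
hits = multi_path_recall(
    "How does Dify support RAG?", docs,
    # toy re-ranker: count query words appearing in the doc
    rerank=lambda q, d: sum(w in d.lower() for w in q.lower().split()),
)
```

In production each path would be backed by its own index, and the re-rank step by a dedicated model, but the merge-dedupe-rerank flow is the same.
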