From 0aecba573d7f31824db011234e81aecf6fd9f834 Mon Sep 17 00:00:00 2001
From: John Wang
Date: Mon, 21 Aug 2023 00:09:54 +0800
Subject: [PATCH] feat: add openllm connect docs

---
 en/SUMMARY.md                                 |  1 +
 en/advanced/model-configuration/README.md     |  3 ++
 en/advanced/model-configuration/openllm.md    | 40 +++++++++++++++++++
 en/advanced/model-configuration/xinference.md |  2 +-
 zh_CN/SUMMARY.md                              |  1 +
 zh_CN/advanced/model-configuration/README.md  |  3 ++
 zh_CN/advanced/model-configuration/openllm.md | 40 +++++++++++++++++++
 .../model-configuration/xinference.md         |  2 +-
 8 files changed, 90 insertions(+), 2 deletions(-)
 create mode 100644 en/advanced/model-configuration/openllm.md
 create mode 100644 zh_CN/advanced/model-configuration/openllm.md

diff --git a/en/SUMMARY.md b/en/SUMMARY.md
index 9376a25..3eec489 100644
--- a/en/SUMMARY.md
+++ b/en/SUMMARY.md
@@ -46,6 +46,7 @@
 * [Hugging Face](advanced/model-configuration/hugging-face.md)
 * [Replicate](advanced/model-configuration/replicate.md)
 * [Xinference](advanced/model-configuration/xinference.md)
+* [OpenLLM](advanced/model-configuration/openllm.md)
 * [More Integration](advanced/more-integration.md)
 
 ## use cases
diff --git a/en/advanced/model-configuration/README.md b/en/advanced/model-configuration/README.md
index b1c0cf7..57320d7 100644
--- a/en/advanced/model-configuration/README.md
+++ b/en/advanced/model-configuration/README.md
@@ -7,6 +7,8 @@ Dify currently supports major model providers such as OpenAI's GPT series. Here
 * Anthropic
 * Hugging Face Hub
 * Replicate
+* Xinference
+* OpenLLM
 * iFLYTEK SPARK
 * WENXINYIYAN
 * TONGYI
@@ -76,6 +78,7 @@ There are many third-party models on hosting type providers. Access models need
 * [Hugging Face](hugging-face.md).
 * [Replicate](replicate.md).
 * [Xinference](xinference.md).
+* [OpenLLM](openllm.md).
 
 ### Use model
 
diff --git a/en/advanced/model-configuration/openllm.md b/en/advanced/model-configuration/openllm.md
new file mode 100644
index 0000000..90871c9
--- /dev/null
+++ b/en/advanced/model-configuration/openllm.md
@@ -0,0 +1,40 @@
+# Connecting to Locally Deployed OpenLLM Models
+
+> 🚧 WIP
+
+With [OpenLLM](https://github.com/bentoml/OpenLLM), you can run inference with any open-source large language model, deploy it to the cloud or on-premises, and build powerful AI apps.
+Dify supports connecting locally to the inference capabilities of large language models deployed with OpenLLM.
+
+## Deploy an OpenLLM Model
+
+Each OpenLLM server serves a single model. You can deploy one as follows:
+
+1. First, install OpenLLM from PyPI:
+
+    ```bash
+    $ pip install openllm
+    ```
+
+2. Deploy and start the OpenLLM model locally:
+
+    ```bash
+    $ openllm start opt --model_id facebook/opt-125m -p 3333
+    2023-08-20T23:49:59+0800 [INFO] [cli] Prometheus metrics for HTTP BentoServer from "_service:svc" can be accessed at http://localhost:3333/metrics.
+    2023-08-20T23:50:00+0800 [INFO] [cli] Starting production HTTP BentoServer from "_service:svc" listening on http://0.0.0.0:3333 (Press CTRL+C to quit)
+    ```
+
+    Once started, OpenLLM serves an API on local port `3333`, so the endpoint is `http://127.0.0.1:3333`. The default port 3000 conflicts with Dify's web service, which is why it is changed to 3333 here.
+    To change the host or port, see OpenLLM's startup help: `openllm start opt --model_id facebook/opt-125m -h`.
+
+    > Note: The `facebook/opt-125m` model is used here for demonstration only; its output quality may be poor. Choose a model appropriate to your actual needs. For more models, see the [Supported Model List](https://github.com/bentoml/OpenLLM#-supported-models).
+
+3. Once the model is deployed, connect to it in Dify.
+
+    Fill in under `Settings > Model Providers > OpenLLM`:
+
+    - Model Name: `facebook/opt-125m`
+    - Server URL: `http://127.0.0.1:3333`
+
+    Click "Save", and the model is ready to use in your applications.
+
+This guide covers only a quick connection example. For more OpenLLM features and details, see [OpenLLM](https://github.com/bentoml/OpenLLM).
\ No newline at end of file
diff --git a/en/advanced/model-configuration/xinference.md b/en/advanced/model-configuration/xinference.md
index 5b7ed8a..a748ab0 100644
--- a/en/advanced/model-configuration/xinference.md
+++ b/en/advanced/model-configuration/xinference.md
@@ -1,6 +1,6 @@
 # Connecting to Xinference Local Deployed Models
 
-> WIP 🚧
+> 🚧 WIP
 
 [Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models, and can even be used on laptops. It supports various models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, etc.
 And Dify supports connecting to Xinference deployed large language model inference and embedding capabilities locally.
diff --git a/zh_CN/SUMMARY.md b/zh_CN/SUMMARY.md
index b71796c..0d87778 100644
--- a/zh_CN/SUMMARY.md
+++ b/zh_CN/SUMMARY.md
@@ -45,6 +45,7 @@
 * [接入 Hugging Face 上的开源模型](advanced/model-configuration/hugging-face.md)
 * [接入 Replicate 上的开源模型](advanced/model-configuration/replicate.md)
 * [接入 Xinference 部署的本地模型](advanced/model-configuration/xinference.md)
+* [接入 OpenLLM 部署的本地模型](advanced/model-configuration/openllm.md)
 * [更多集成](advanced/more-integration.md)
 
 ## 使用案例
diff --git a/zh_CN/advanced/model-configuration/README.md b/zh_CN/advanced/model-configuration/README.md
index 43c7423..00cd604 100644
--- a/zh_CN/advanced/model-configuration/README.md
+++ b/zh_CN/advanced/model-configuration/README.md
@@ -7,6 +7,8 @@ Dify 目前已支持主流的模型供应商,例如 OpenAI 的 GPT 系列。
 * Anthropic
 * Hugging Face Hub
 * Replicate
+* Xinference
+* OpenLLM
 * 讯飞星火
 * 文心一言
 * 通义千问
@@ -79,6 +81,7 @@ Dify 使用了 [PKCS1_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/ci
 * [Hugging Face](hugging-face.md)。
 * [Replicate](replicate.md)。
 * [Xinference](xinference.md)。
+* [OpenLLM](openllm.md)。
diff --git a/zh_CN/advanced/model-configuration/openllm.md b/zh_CN/advanced/model-configuration/openllm.md
new file mode 100644
index 0000000..967680c
--- /dev/null
+++ b/zh_CN/advanced/model-configuration/openllm.md
@@ -0,0 +1,40 @@
+# 接入 OpenLLM 部署的本地模型
+
+> 🚧 WIP
+
+使用 [OpenLLM](https://github.com/bentoml/OpenLLM),您可以对任何开源大型语言模型进行推理,将其部署到云端或本地,并构建强大的 AI 应用程序。
+Dify 支持以本地部署的方式接入 OpenLLM 部署的大型语言模型的推理能力。
+
+## 部署 OpenLLM 模型
+
+每个 OpenLLM Server 可以部署一个模型,您可以通过以下方式部署:
+
+1. 首先通过 PyPI 安装 OpenLLM:
+
+    ```bash
+    $ pip install openllm
+    ```
+
+2. 本地部署并启动 OpenLLM 模型:
+
+    ```bash
+    $ openllm start opt --model_id facebook/opt-125m -p 3333
+    2023-08-20T23:49:59+0800 [INFO] [cli] Prometheus metrics for HTTP BentoServer from "_service:svc" can be accessed at http://localhost:3333/metrics.
+    2023-08-20T23:50:00+0800 [INFO] [cli] Starting production HTTP BentoServer from "_service:svc" listening on http://0.0.0.0:3333 (Press CTRL+C to quit)
+    ```
+
+    OpenLLM 启动后,在本机的 `3333` 端口提供 API 服务,端点为 `http://127.0.0.1:3333`。由于默认的 3000 端口与 Dify 的 WEB 服务冲突,因此这里改为 3333 端口。
+    如需修改 host 或 port,可查看 OpenLLM 的启动帮助信息:`openllm start opt --model_id facebook/opt-125m -h`。
+
+    > 注意:此处使用 `facebook/opt-125m` 模型仅作为示例,效果可能不佳,请根据实际情况选择合适的模型。更多模型请参考:[支持的模型列表](https://github.com/bentoml/OpenLLM#-supported-models)。
+
+3. 模型部署完毕后,在 Dify 中接入该模型:
+
+    在 `设置 > 模型供应商 > OpenLLM` 中填入:
+
+    - 模型名称:`facebook/opt-125m`
+    - 服务器 URL:`http://127.0.0.1:3333`
+
+    点击 "保存" 后即可在应用中使用该模型。
+
+本说明仅作为快速接入的示例,如需了解 OpenLLM 的更多特性和用法,请参考:[OpenLLM](https://github.com/bentoml/OpenLLM)
\ No newline at end of file
diff --git a/zh_CN/advanced/model-configuration/xinference.md b/zh_CN/advanced/model-configuration/xinference.md
index 363d14c..7014553 100644
--- a/zh_CN/advanced/model-configuration/xinference.md
+++ b/zh_CN/advanced/model-configuration/xinference.md
@@ -1,6 +1,6 @@
 # 接入 Xinference 部署的本地模型
 
-> WIP 🚧
+> 🚧 WIP
 
 [Xorbits inference](https://github.com/xorbitsai/inference) 是一个强大且通用的分布式推理框架,旨在为大型语言模型、语音识别模型和多模态模型提供服务,甚至可以在笔记本电脑上使用。它支持多种与GGML兼容的模型,如 chatglm, baichuan, whisper, vicuna, orca 等。
 Dify 支持以本地部署的方式接入 Xinference 部署的大型语言模型推理和 embedding 能力。
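
The startup log in the docs above shows the BentoServer announcing a Prometheus metrics endpoint at `/metrics`; polling that path is a simple way to confirm the server is reachable before filling the Server URL into Dify. A minimal sketch using only the Python standard library — the default host/port mirror the `-p 3333` example, and the helper names are our own, not part of OpenLLM:

```python
from urllib.request import urlopen
from urllib.error import URLError


def metrics_url(host: str = "127.0.0.1", port: int = 3333) -> str:
    # OpenLLM's startup log prints this Prometheus metrics path for the BentoServer.
    return f"http://{host}:{port}/metrics"


def server_is_up(host: str = "127.0.0.1", port: int = 3333, timeout: float = 2.0) -> bool:
    # True if the OpenLLM server answers on its metrics endpoint, False otherwise.
    try:
        with urlopen(metrics_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


if __name__ == "__main__":
    print(metrics_url())  # http://127.0.0.1:3333/metrics
    print(server_is_up())  # True only while `openllm start ... -p 3333` is running
```

If this returns `False`, double-check the port passed to `openllm start -p` and that nothing else (such as Dify's own web service on 3000) is occupying it.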