From c7a4ccb5e01d0cfde4812951077a2ea392a5a2a3 Mon Sep 17 00:00:00 2001
From: takatost
Date: Wed, 23 Aug 2023 17:47:23 +0800
Subject: [PATCH] feat: remove wip (#8)

---
 en/advanced/model-configuration/openllm.md       | 2 --
 en/advanced/model-configuration/xinference.md    | 2 --
 zh_CN/advanced/model-configuration/openllm.md    | 2 --
 zh_CN/advanced/model-configuration/xinference.md | 2 --
 4 files changed, 8 deletions(-)

diff --git a/en/advanced/model-configuration/openllm.md b/en/advanced/model-configuration/openllm.md
index 90871c9..5dd69fe 100644
--- a/en/advanced/model-configuration/openllm.md
+++ b/en/advanced/model-configuration/openllm.md
@@ -1,7 +1,5 @@
 # Connecting to OpenLLM Local Deployed Models
 
-> 🚧 WIP
-
 With [OpenLLM](https://github.com/bentoml/OpenLLM), you can run inference with any open-source large-language models, deploy to the cloud or on-premises, and build powerful AI apps. And Dify supports connecting to OpenLLM deployed large language model's inference capabilities locally.
diff --git a/en/advanced/model-configuration/xinference.md b/en/advanced/model-configuration/xinference.md
index a748ab0..d80b376 100644
--- a/en/advanced/model-configuration/xinference.md
+++ b/en/advanced/model-configuration/xinference.md
@@ -1,7 +1,5 @@
 # Connecting to Xinference Local Deployed Models
 
-> 🚧 WIP
-
 [Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models, and can even be used on laptops. It supports various models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, etc. And Dify supports connecting to Xinference deployed large language model inference and embedding capabilities locally.
diff --git a/zh_CN/advanced/model-configuration/openllm.md b/zh_CN/advanced/model-configuration/openllm.md
index 967680c..150f8dc 100644
--- a/zh_CN/advanced/model-configuration/openllm.md
+++ b/zh_CN/advanced/model-configuration/openllm.md
@@ -1,7 +1,5 @@
 # 接入 OpenLLM 部署的本地模型
 
-> 🚧 WIP
-
 使用 [OpenLLM](https://github.com/bentoml/OpenLLM), 您可以针对任何开源大型语言模型进行推理,部署到云端或本地,并构建强大的 AI 应用程序。 Dify 支持以本地部署的方式接入 OpenLLM 部署的大型语言模型的推理能力。
diff --git a/zh_CN/advanced/model-configuration/xinference.md b/zh_CN/advanced/model-configuration/xinference.md
index 7014553..bd8b552 100644
--- a/zh_CN/advanced/model-configuration/xinference.md
+++ b/zh_CN/advanced/model-configuration/xinference.md
@@ -1,7 +1,5 @@
 # 接入 Xinference 部署的本地模型
 
-> 🚧 WIP
-
 [Xorbits inference](https://github.com/xorbitsai/inference) 是一个强大且通用的分布式推理框架,旨在为大型语言模型、语音识别模型和多模态模型提供服务,甚至可以在笔记本电脑上使用。它支持多种与GGML兼容的模型,如 chatglm, baichuan, whisper, vicuna, orca 等。 Dify 支持以本地部署的方式接入 Xinference 部署的大型语言模型推理和 embedding 能力。