feat: remove wip (#8)
parent 1e515086cd
commit c7a4ccb5e0
@@ -1,7 +1,5 @@
 # Connecting to OpenLLM Local Deployed Models
-
-> 🚧 WIP
 
 With [OpenLLM](https://github.com/bentoml/OpenLLM), you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps.
 
 Dify supports connecting to the inference capabilities of locally deployed OpenLLM models.
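The hunk above describes running inference against a model served locally with OpenLLM. As a minimal sketch of what a client such as Dify might send to such a server — where the endpoint path, port, and field names (`prompt`, `llm_config`, `max_new_tokens`) are assumptions for illustration, not confirmed by this commit — a generation request body could be built like this:

```python
import json

# Assumed default address of a locally deployed OpenLLM server;
# verify the path and port against your OpenLLM version's API docs.
OPENLLM_URL = "http://localhost:3000/v1/generate"

def build_generate_request(prompt: str, max_new_tokens: int = 256) -> str:
    """Serialize a hypothetical generation request payload to JSON.

    The field names here are illustrative assumptions, not the
    documented OpenLLM schema.
    """
    payload = {
        "prompt": prompt,
        "llm_config": {"max_new_tokens": max_new_tokens},
    }
    return json.dumps(payload)

body = build_generate_request("What is Dify?")
```

The body would then be POSTed to the server with any HTTP client; checking the actual request schema in the OpenLLM documentation for your installed version is essential before wiring this into an app.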
@@ -1,7 +1,5 @@
 # Connecting to Xinference Local Deployed Models
-
-> 🚧 WIP
 
 [Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models, and can even be used on laptops. It supports a variety of GGML-compatible models, such as chatglm, baichuan, whisper, vicuna, and orca.
 
 Dify supports connecting to the inference and embedding capabilities of locally deployed Xinference models.
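Since the Xinference doc mentions both inference and embedding capabilities, a hedged sketch of the two request bodies a client might send is shown below. The base URL, port, and field names are assumptions for illustration (recent Xinference releases expose an OpenAI-style API, but this should be verified against your installed version's documentation):

```python
import json

# Assumed default address of a local Xinference server; the port and
# endpoint layout are assumptions -- check your Xinference version.
XINFERENCE_BASE = "http://localhost:9997"

def build_chat_request(model_uid: str, user_message: str) -> str:
    """JSON body for a hypothetical chat completion call."""
    return json.dumps({
        "model": model_uid,
        "messages": [{"role": "user", "content": user_message}],
    })

def build_embedding_request(model_uid: str, text: str) -> str:
    """JSON body for a hypothetical embedding call."""
    return json.dumps({"model": model_uid, "input": text})
```

In Xinference, deployed models are addressed by a model UID assigned at launch time, so `model_uid` here stands in for whatever identifier your deployment reports.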
@@ -1,7 +1,5 @@
 # Connecting to OpenLLM Local Deployed Models
-
-> 🚧 WIP
 
 With [OpenLLM](https://github.com/bentoml/OpenLLM), you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI applications.
 
 Dify supports connecting to the inference capabilities of locally deployed OpenLLM models.
@@ -1,7 +1,5 @@
 # Connecting to Xinference Local Deployed Models
-
-> 🚧 WIP
 
 [Xorbits inference](https://github.com/xorbitsai/inference) is a powerful and versatile distributed inference framework designed to serve large language models, speech recognition models, and multimodal models, and can even run on a laptop. It supports a variety of GGML-compatible models, such as chatglm, baichuan, whisper, vicuna, and orca.
 
 Dify supports connecting to the inference and embedding capabilities of locally deployed Xinference models.