Fix/discord link (#24)

* update discord link

* feat: add detail docs for local providers

* feat: add detail docs for local providers en
crazywoola 2023-11-07 20:24:34 +08:00 committed by GitHub
parent 76b1283cbd
commit d423693bf2
8 changed files with 134 additions and 2 deletions


@@ -6,6 +6,28 @@ Dify allows integration with LocalAI for local deployment of large language models
## Deploying LocalAI
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all active containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature. This allows you to map the container ports to local machine ports, enabling access via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will assist you in obtaining this information.
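The `-f` argument above is a Go template that walks the `NetworkSettings.Networks` map in the container's inspect data. As a rough sketch of what it extracts, the same field can be pulled out of saved `docker inspect` output with standard text tools (the JSON below is a hypothetical, abbreviated sample, not real inspect output):

```shell
# Abbreviated, hypothetical sample of the JSON structure `docker inspect` emits
cat > /tmp/inspect_sample.json <<'EOF'
[{"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}]
EOF

# Pull out the IPAddress field, much as the Go template does
grep -o '"IPAddress": *"[0-9.]*"' /tmp/inspect_sample.json \
  | head -1 \
  | sed 's/.*"\([0-9.]*\)"$/\1/'
```

In practice you would run `docker inspect` directly with the `-f` template rather than post-process JSON by hand; this is only to illustrate where the value comes from.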
### Starting LocalAI
You can refer to the official [Getting Started](https://localai.io/basics/getting_started/) guide for deployment, or quickly integrate following the steps below:
(These steps are derived from [LocalAI Data query example](https://github.com/go-skynet/LocalAI/blob/master/examples/langchain-chroma/README.md))


@@ -5,6 +5,28 @@ Dify supports connecting to the inference capabilities of OpenLLM-deployed large language models
## Deploy OpenLLM Model
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all active containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature. This allows you to map the container ports to local machine ports, enabling access via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will assist you in obtaining this information.
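Port mapping is usually the simpler route: for the `HOST:CONTAINER` pair passed to `-p`, the service is reachable on the host at `http://127.0.0.1:HOST`. A minimal sketch (the `host_url` helper is hypothetical, for illustration only):

```shell
# Hypothetical helper: given the HOST:CONTAINER pair passed to `docker run -p`,
# print the URL where the containerized service is reachable from the host.
host_url() {
  # keep everything before the first colon (the host-side port)
  printf 'http://127.0.0.1:%s\n' "${1%%:*}"
}

host_url 80:80     # the example mapping from the text
host_url 8080:80   # host port 8080 forwarded to container port 80
```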
### Starting OpenLLM
Each OpenLLM Server can serve one model, which you can deploy as follows:
1. First, install OpenLLM through PyPI:


@@ -5,6 +5,28 @@ Dify supports connecting to the inference capabilities of Xinference-deployed large language models
## Deploy Xinference
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all active containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature. This allows you to map the container ports to local machine ports, enabling access via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will assist you in obtaining this information.
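The `{{range}}` in the inspect template matters because a container can be attached to several networks, in which case the template emits one IPAddress per network. A sketch over a hypothetical saved inspect sample (real output would come from `docker inspect` itself):

```shell
# Hypothetical sample: a container attached to two networks
cat > /tmp/two_nets.json <<'EOF'
[{"NetworkSettings": {"Networks": {
  "bridge": {"IPAddress": "172.17.0.5"},
  "mynet":  {"IPAddress": "172.18.0.2"}
}}}]
EOF

# Like the {{range}} template, list every network's IPAddress
grep -o '"IPAddress": *"[0-9.]*"' /tmp/two_nets.json \
  | sed 's/.*"\([0-9.]*\)"$/\1/'
```

If your container is on a user-defined network as well as the default bridge, pick the address on the network your host can actually route to.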
### Starting Xinference
There are two ways to deploy Xinference, namely [local deployment](https://github.com/xorbitsai/inference/blob/main/README.md#local) and [distributed deployment](https://github.com/xorbitsai/inference/blob/main/README.md#distributed). Here we take local deployment as an example.
1. First, install Xinference via PyPI:


@@ -9,7 +9,7 @@ Please do not share your Dify account information or other sensitive information
{% endhint %}
* Submit an Issue on [GitHub](https://github.com/langgenius/dify)
-* Join the [Discord community](https://discord.gg/FngNHpbcY7)
+* Join the [Discord community](https://discord.gg/8Tpq4AcN9c)
* Email [support@dify.ai](mailto:support@dify.ai)
### Contact Us


@@ -5,6 +5,28 @@ Dify supports integrating the inference capabilities of LocalAI-deployed large language models via local deployment
## Deploying LocalAI
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all running containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature that lets you map container ports to local machine ports and access the service via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will help you obtain this information.
### Starting Deployment
You can refer to the official [Getting Started](https://localai.io/basics/getting_started/) guide for deployment, or quickly integrate following the steps below:
(These steps are derived from the [LocalAI Data query example](https://github.com/go-skynet/LocalAI/blob/master/examples/langchain-chroma/README.md))


@@ -5,6 +5,28 @@ Dify supports integrating the inference capabilities of OpenLLM-deployed large language models via local deployment
## Deploying the OpenLLM Model
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all running containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature that lets you map container ports to local machine ports and access the service via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will help you obtain this information.
### Starting Deployment
Each OpenLLM Server can serve one model, which you can deploy as follows:
1. First, install OpenLLM via PyPI


@@ -5,6 +5,28 @@ Dify supports integrating the inference capabilities of Xinference-deployed large language models via local deployment
## Deploying Xinference
### Before you start
When using Docker to deploy a private model locally, you might need to access the service via the container's IP address instead of `127.0.0.1`. This is because `127.0.0.1` or `localhost` by default points to your host system and not the internal network of the Docker container. To retrieve the IP address of your Docker container, you can follow these steps:
1. First, determine the name or ID of your Docker container. You can list all running containers using the following command:
```bash
docker ps
```
2. Then, use the command below to obtain detailed information about a specific container, including its IP address:
```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_ID
```
Please note that you usually do not need to manually find the IP address of the Docker container to access the service, because Docker offers a port mapping feature that lets you map container ports to local machine ports and access the service via your local address. For example, if you used the `-p 80:80` parameter when running the container, you can access the service inside the container by visiting `http://localhost:80` or `http://127.0.0.1:80`.
If you do need to use the container's IP address directly, the steps above will help you obtain this information.
### Starting Deployment
There are two ways to deploy Xinference, namely [local deployment](https://github.com/xorbitsai/inference/blob/main/README_zh_CN.md#%E6%9C%AC%E5%9C%B0%E9%83%A8%E7%BD%B2) and [distributed deployment](https://github.com/xorbitsai/inference/blob/main/README_zh_CN.md#%E5%88%86%E5%B8%83%E5%BC%8F%E9%83%A8%E7%BD%B2). Here we take local deployment as an example.
1. First, install Xinference via PyPI


@@ -9,7 +9,7 @@
{% endhint %}
* Submit an Issue on [GitHub](https://github.com/langgenius/dify)
-* Join the [Discord](https://discord.gg/FngNHpbcY7) community
+* Join the [Discord](https://discord.gg/8Tpq4AcN9c) community
* Email [support@dify.ai](mailto:support@dify.ai)
### Contact Us