diff --git a/en/advanced/model-configuration/xinference.md b/en/advanced/model-configuration/xinference.md
index d80b376..bdd10f9 100644
--- a/en/advanced/model-configuration/xinference.md
+++ b/en/advanced/model-configuration/xinference.md
@@ -23,6 +23,7 @@ There are two ways to deploy Xinference, namely [local deployment](https://githu
    ```
    Xinference will start a worker locally by default, with the endpoint: `http://127.0.0.1:9997`, and the default port is `9997`.
+   By default, access is limited to the local machine only, but it can be configured with `-H 0.0.0.0` to allow access from any non-local client.
    To modify the host or port, you can refer to xinference's help information: `xinference --help`.
 3. Create and deploy the model
diff --git a/zh_CN/advanced/model-configuration/xinference.md b/zh_CN/advanced/model-configuration/xinference.md
index bd8b552..6378452 100644
--- a/zh_CN/advanced/model-configuration/xinference.md
+++ b/zh_CN/advanced/model-configuration/xinference.md
@@ -23,7 +23,8 @@ Dify 支持以本地部署的方式接入 Xinference 部署的大型语言模型
    ```
    Xinference 默认会在本地启动一个 worker,端点为:`http://127.0.0.1:9997`,端口默认为 `9997`。
-   如需修改 host 或 port,可查看 xinference 的帮助信息:`xinference --help`。
+   默认只可本机访问,可配置 `-H 0.0.0.0`,非本地客户端可任意访问。
+   如需进一步修改 host 或 port,可查看 xinference 的帮助信息:`xinference --help`。
 3. 创建并部署模型
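
The sentence added in both files describes binding the Xinference server to all interfaces instead of loopback only. A minimal launch sketch, assuming the `xinference` CLI that the docs themselves reference via `xinference --help` (the exact flag spelling and default port may differ between Xinference versions, so verify against your installed version's help output):

```shell
# Hypothetical launch command illustrating the documented option:
# -H sets the bind address; 0.0.0.0 accepts connections from non-local
# clients, whereas the default binds to 127.0.0.1 (local machine only).
xinference -H 0.0.0.0 --port 9997
```

Note that exposing the endpoint on `0.0.0.0` makes it reachable from any host on the network, so this is usually paired with a firewall rule or reverse proxy in non-trivial deployments.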