Feature/dify workflow (#9)

* feat: add basic workflow to test

* feat: update readme

* fix: add workflow image

* fix: update dsl file path of readme

* feat: bold workflow variable name

* fix: chat workflow empty input

* refactor: refactor reply function of dify bot

* feat: update readme for dify workflow
master
Han Fangyuan 2024-04-08 23:31:43 +08:00 committed by GitHub
parent 22216b6ca2
commit a689fdf2ce
5 changed files with 299 additions and 75 deletions


@@ -2,7 +2,7 @@
<h1>Dify on WeChat</h1>
This project is a downstream fork of [chatgpt-on-wechat](https://github.com/zhayujie/chatgpt-on-wechat)
that additionally integrates the LLMOps platform [Dify](https://github.com/langgenius/dify), supporting Dify agents, model invocation, tools, and knowledge bases.
that additionally integrates the LLMOps platform [Dify](https://github.com/langgenius/dify), supporting Dify agents, model invocation, tools, and knowledge bases, and now Dify workflows.
</div>
@@ -12,11 +12,16 @@
![image-2](./docs/images/image2.jpg)
Basic Dify workflow API support
![image-3](./docs/images/image4.jpg)
Channels tested with Dify so far:
- [x] **Personal WeChat**
- [x] **WeCom (Enterprise WeChat) application**
- [ ] **Official Account** (untested)
- [x] **Service Account (enterprise)**
- [ ] **Subscription Account (personal)** (untested)
- [ ] **DingTalk** (untested)
- [ ] **Feishu** (untested)
@@ -65,10 +70,10 @@ python3 app.py # in a Windows environment this command
# Changelog
- 2024/04/08 Support workflows built into chat-assistant apps, as well as Dify's basic conversational workflows (workflow mode is now officially live on the Dify site). You can import the [dsl file](./dsl/chat-workflow.yml) in this repository to quickly create a workflow for testing. Workflow input variable names are very flexible, so for **workflow-type** apps this project adopts the **convention that the workflow's input variable is named `query`** and **its output variable is named `text`**. (P.S. workflow-type apps do not yet feel well suited to chatbots: they have no concept of a conversation, so you must manage context yourself. On the other hand, they can call all kinds of tools and interact with the outside world over HTTP, which makes them a good fit for tasks with complex business logic, and their workflow DSL files can be imported and exported, which makes them easy to share and port. Perhaps in the future a DSL file plus a config file could serve as a plugin for this project.)
- 2024/04/04 Support Docker deployment
- 2024/03/31 Support the Coze API (beta)
- 2024/03/29 Support Dify's basic conversational workflows; since workflow mode was not yet live on the Dify site at the time, you had to deploy [0.6.0-preview-workflow.1](https://github.com/langgenius/dify/releases/tag/0.6.0-preview-workflow.1) yourself to test it.
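The `query`-in / `text`-out convention above can be sketched as two small helpers. These are illustrative names, not part of this repository; the request shape targets `POST {dify_api_base}/workflows/run` in blocking mode:

```python
def build_workflow_payload(query: str, user: str) -> dict:
    """Build the request body for POST {dify_api_base}/workflows/run."""
    return {
        "inputs": {"query": query},   # convention: the input variable is named "query"
        "response_mode": "blocking",
        "user": user,
    }

def extract_workflow_text(rsp_data: dict) -> str:
    """Pull the reply out of a blocking workflow response."""
    # convention: the output variable is named "text"
    return rsp_data["data"]["outputs"]["text"]

# Abridged response shape, per the workflow API example later in this commit.
sample = {"data": {"status": "succeeded", "outputs": {"text": "Nice to meet you."}}}
print(extract_workflow_text(sample))  # Nice to meet you.
```

Keeping payload construction and response parsing separate from the HTTP call makes the convention easy to test without a live Dify instance.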
# Dify on WeChat discussion group
Add me on WeChat and I will invite you to the group
@@ -123,13 +128,14 @@ pip3 install -r requirements-optional.txt # in mainland China you can append
cp config-template.json config.json
```
Then fill in the configuration in `config.json`. The defaults are explained below and can be customized as needed (**if you copy the example below, remove the comments first**):
Then fill in the configuration in `config.json`. The defaults are explained below and can be customized as needed (if you copy the example below, please **remove the comments** and make sure **dify_app_type** is configured correctly):
```bash
# example config.json contents for dify
{ "dify_api_base": "https://api.dify.ai/v1", # dify base url
{
"dify_api_base": "https://api.dify.ai/v1", # dify base url
"dify_api_key": "app-xxx", # dify api key
"dify_agent": true, # dify assistant type: false for a basic assistant, true for an agent assistant; currently true
"dify_app_type": "chatbot", # dify app type: chatbot / agent / workflow; defaults to chatbot
"dify_convsersation_max_messages": 5, # dify cannot yet limit history length, so for now the conversation is cleared once this message count is exceeded; downside: no sliding window, history is dropped abruptly; currently 5
"channel_type": "wx", # channel type; currently personal WeChat
"model": "dify", # model name; here it maps to the dify platform
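Since the example above carries `#` comments that plain JSON does not allow, stripping them can be automated instead of done by hand. A minimal sketch (naive: it assumes `#` never occurs inside a string value, which holds for the keys shown here):

```python
import json
import re

def strip_line_comments(text: str) -> str:
    # Drop trailing "# ..." comments from each line before parsing as JSON.
    return "\n".join(re.sub(r"\s+#.*$", "", line) for line in text.splitlines())

example = '''{
  "dify_api_base": "https://api.dify.ai/v1",  # dify base url
  "dify_app_type": "workflow"  # chatbot / agent / workflow
}'''

cfg = json.loads(strip_line_comments(example))
print(cfg["dify_app_type"])  # workflow
```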


@@ -10,7 +10,7 @@ from bridge.context import ContextType, Context
from bridge.reply import Reply, ReplyType
from common.log import logger
from common import const
from config import conf, load_config
from config import conf
class DifyBot(Bot):
def __init__(self):
@@ -28,9 +28,9 @@ class DifyBot(Bot):
channel_type = conf().get("channel_type", "wx")
user = None
if channel_type == "wx":
user = context["msg"].other_user_nickname
user = context["msg"].other_user_nickname if context.get("msg") else "default"
elif channel_type in ["wechatcom_app", "wechatmp", "wechatmp_service"]:
user = context["msg"].other_user_id
user = context["msg"].other_user_id if context.get("msg") else "default"
else:
return Reply(ReplyType.ERROR, f"unsupported channel type: {channel_type}, now dify only support wx, wechatcom_app, wechatmp, wechatmp_service channel")
logger.debug(f"[DIFY] dify_user={user}")
@@ -66,76 +66,140 @@ class DifyBot(Bot):
def _reply(self, query: str, session: DifySession, context: Context):
try:
session.count_user_message() # cap the number of messages per conversation so it does not grow unbounded
base_url = self._get_api_base_url()
chat_url = f'{base_url}/chat-messages'
headers = self._get_headers()
is_dify_agent = conf().get('dify_agent', True)
response_mode = 'streaming' if is_dify_agent else 'blocking'
payload = self._get_payload(query, session, response_mode)
response = requests.post(chat_url, headers=headers, json=payload, stream=is_dify_agent)
if response.status_code != 200:
error_info = f"[DIFY] response text={response.text} status_code={response.status_code}"
logger.warn(error_info)
return None, error_info
if is_dify_agent:
# response:
# data: {"event": "agent_thought", "id": "8dcf3648-fbad-407a-85dd-73a6f43aeb9f", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "position": 1, "thought": "", "observation": "", "tool": "", "tool_input": "", "created_at": 1705639511, "message_files": [], "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "agent_thought", "id": "8dcf3648-fbad-407a-85dd-73a6f43aeb9f", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "position": 1, "thought": "", "observation": "", "tool": "dalle3", "tool_input": "{\"dalle3\": {\"prompt\": \"cute Japanese anime girl with white hair, blue eyes, bunny girl suit\"}}", "created_at": 1705639511, "message_files": [], "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "agent_message", "id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "answer": "I have created an image of a cute Japanese", "created_at": 1705639511, "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "message_end", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142", "metadata": {"usage": {"prompt_tokens": 305, "prompt_unit_price": "0.001", "prompt_price_unit": "0.001", "prompt_price": "0.0003050", "completion_tokens": 97, "completion_unit_price": "0.002", "completion_price_unit": "0.001", "completion_price": "0.0001940", "total_tokens": 184, "total_price": "0.0002290", "currency": "USD", "latency": 1.771092874929309}}}
msgs, conversation_id = self._handle_sse_response(response)
channel = context.get("channel")
# TODO: adapt channels other than WeChat
is_group = context.get("isgroup", False)
for msg in msgs[:-1]:
if msg['type'] == 'agent_message':
if is_group:
at_prefix = "@" + context["msg"].actual_user_nickname + "\n"
msg['content'] = at_prefix + msg['content']
reply = Reply(ReplyType.TEXT, msg['content'])
channel.send(reply, context)
elif msg['type'] == 'message_file':
reply = Reply(ReplyType.IMAGE_URL, msg['content']['url'])
thread = threading.Thread(target=channel.send, args=(reply, context))
thread.start()
final_msg = msgs[-1]
reply = None
if final_msg['type'] == 'agent_message':
reply = Reply(ReplyType.TEXT, final_msg['content'])
elif final_msg['type'] == 'message_file':
reply = Reply(ReplyType.IMAGE_URL, final_msg['content']['url'])
# set the dify conversation_id and let dify manage the context
if session.get_conversation_id() == '':
session.set_conversation_id(conversation_id)
return reply, None
dify_app_type = conf().get('dify_app_type', 'chatbot')
if dify_app_type == 'chatbot':
return self._handle_chatbot(query, session)
elif dify_app_type == 'agent':
return self._handle_agent(query, session, context)
elif dify_app_type == 'workflow':
return self._handle_workflow(query, session)
else:
# response:
# {
# "event": "message",
# "message_id": "9da23599-e713-473b-982c-4328d4f5c78a",
# "conversation_id": "45701982-8118-4bc5-8e9b-64562b4555f2",
# "mode": "chat",
# "answer": "xxx",
# "metadata": {
# "usage": {
# },
# "retriever_resources": []
# },
# "created_at": 1705407629
# }
rsp_data = response.json()
logger.debug("[DIFY] usage {}".format(rsp_data['metadata']['usage']))
reply = Reply(ReplyType.TEXT, rsp_data['answer'])
# set the dify conversation_id and let dify manage the context
if session.get_conversation_id() == '':
session.set_conversation_id(rsp_data['conversation_id'])
return reply, None
return None, "dify_app_type must be agent, chatbot or workflow"
except Exception as e:
error_info = f"[DIFY] Exception: {e}"
logger.exception(error_info)
return None, error_info
def _handle_chatbot(self, query: str, session: DifySession):
# TODO: extract the response handling into a shared helper
base_url = self._get_api_base_url()
chat_url = f'{base_url}/chat-messages'
headers = self._get_headers()
response_mode = 'blocking'
payload = self._get_payload(query, session, response_mode)
response = requests.post(chat_url, headers=headers, json=payload)
if response.status_code != 200:
error_info = f"[DIFY] response text={response.text} status_code={response.status_code}"
logger.warn(error_info)
return None, error_info
# response:
# {
# "event": "message",
# "message_id": "9da23599-e713-473b-982c-4328d4f5c78a",
# "conversation_id": "45701982-8118-4bc5-8e9b-64562b4555f2",
# "mode": "chat",
# "answer": "xxx",
# "metadata": {
# "usage": {
# },
# "retriever_resources": []
# },
# "created_at": 1705407629
# }
rsp_data = response.json()
logger.debug("[DIFY] usage {}".format(rsp_data['metadata']['usage']))
reply = Reply(ReplyType.TEXT, rsp_data['answer'])
# set the dify conversation_id and let dify manage the context
if session.get_conversation_id() == '':
session.set_conversation_id(rsp_data['conversation_id'])
return reply, None
def _handle_agent(self, query: str, session: DifySession, context: Context):
# TODO: extract the response handling into a shared helper
base_url = self._get_api_base_url()
chat_url = f'{base_url}/chat-messages'
headers = self._get_headers()
response_mode = 'streaming'
payload = self._get_payload(query, session, response_mode)
response = requests.post(chat_url, headers=headers, json=payload)
if response.status_code != 200:
error_info = f"[DIFY] response text={response.text} status_code={response.status_code}"
logger.warn(error_info)
return None, error_info
# response:
# data: {"event": "agent_thought", "id": "8dcf3648-fbad-407a-85dd-73a6f43aeb9f", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "position": 1, "thought": "", "observation": "", "tool": "", "tool_input": "", "created_at": 1705639511, "message_files": [], "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "agent_thought", "id": "8dcf3648-fbad-407a-85dd-73a6f43aeb9f", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "position": 1, "thought": "", "observation": "", "tool": "dalle3", "tool_input": "{\"dalle3\": {\"prompt\": \"cute Japanese anime girl with white hair, blue eyes, bunny girl suit\"}}", "created_at": 1705639511, "message_files": [], "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "agent_message", "id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "answer": "I have created an image of a cute Japanese", "created_at": 1705639511, "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142"}
# data: {"event": "message_end", "task_id": "9cf1ddd7-f94b-459b-b942-b77b26c59e9b", "id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "message_id": "1fb10045-55fd-4040-99e6-d048d07cbad3", "conversation_id": "c216c595-2d89-438c-b33c-aae5ddddd142", "metadata": {"usage": {"prompt_tokens": 305, "prompt_unit_price": "0.001", "prompt_price_unit": "0.001", "prompt_price": "0.0003050", "completion_tokens": 97, "completion_unit_price": "0.002", "completion_price_unit": "0.001", "completion_price": "0.0001940", "total_tokens": 184, "total_price": "0.0002290", "currency": "USD", "latency": 1.771092874929309}}}
msgs, conversation_id = self._handle_sse_response(response)
channel = context.get("channel")
# TODO: adapt channels other than WeChat
is_group = context.get("isgroup", False)
for msg in msgs[:-1]:
if msg['type'] == 'agent_message':
if is_group:
at_prefix = "@" + context["msg"].actual_user_nickname + "\n"
msg['content'] = at_prefix + msg['content']
reply = Reply(ReplyType.TEXT, msg['content'])
channel.send(reply, context)
elif msg['type'] == 'message_file':
reply = Reply(ReplyType.IMAGE_URL, msg['content']['url'])
thread = threading.Thread(target=channel.send, args=(reply, context))
thread.start()
final_msg = msgs[-1]
reply = None
if final_msg['type'] == 'agent_message':
reply = Reply(ReplyType.TEXT, final_msg['content'])
elif final_msg['type'] == 'message_file':
reply = Reply(ReplyType.IMAGE_URL, final_msg['content']['url'])
# set the dify conversation_id and let dify manage the context
if session.get_conversation_id() == '':
session.set_conversation_id(conversation_id)
return reply, None
def _handle_workflow(self, query: str, session: DifySession):
base_url = self._get_api_base_url()
workflow_url = f'{base_url}/workflows/run'
headers = self._get_headers()
payload = self._get_workflow_payload(query, session)
response = requests.post(workflow_url, headers=headers, json=payload)
if response.status_code != 200:
error_info = f"[DIFY] response text={response.text} status_code={response.status_code}"
logger.warn(error_info)
return None, error_info
# {
# "log_id": "djflajgkldjgd",
# "task_id": "9da23599-e713-473b-982c-4328d4f5c78a",
# "data": {
# "id": "fdlsjfjejkghjda",
# "workflow_id": "fldjaslkfjlsda",
# "status": "succeeded",
# "outputs": {
# "text": "Nice to meet you."
# },
# "error": null,
# "elapsed_time": 0.875,
# "total_tokens": 3562,
# "total_steps": 8,
# "created_at": 1705407629,
# "finished_at": 1727807631
# }
# }
rsp_data = response.json()
reply = Reply(ReplyType.TEXT, rsp_data['data']['outputs']['text'])
return reply, None
def _get_workflow_payload(self, query, session: DifySession):
return {
'inputs': {
"query": query
},
"response_mode": "blocking",
"user": session.get_user()
}
def _parse_sse_event(self, event_str):
"""
Parses a single SSE event string and returns a dictionary of its data.


@@ -78,7 +78,7 @@ available_setting = {
# dify settings
"dify_api_base": "https://api.dify.ai/v1",
"dify_api_key": "app-xxx",
"dify_agent": True, # dify assistant type: False for a basic assistant, True for an agent assistant; defaults to True
"dify_app_type": "chatbot", # dify app type: chatbot / agent / workflow; defaults to chatbot
"dify_convsersation_max_messages": 5, # dify cannot yet limit history length, so for now the conversation is cleared once this message count is exceeded; downside: no sliding window, history is dropped abruptly
# coze settings
"coze_api_base": "https://api.coze.cn/open_api/v2",

docs/images/image4.jpg — new binary file (172 KiB, not shown)
dsl/chat-workflow.yml Normal file

@@ -0,0 +1,154 @@
app:
description: ''
icon: "\U0001F916"
icon_background: '#FFEAD5'
mode: workflow
name: chat-workflow
workflow:
features:
file_upload:
image:
enabled: false
number_limits: 3
transfer_methods:
- local_file
- remote_url
opening_statement: ''
retriever_resource:
enabled: false
sensitive_word_avoidance:
enabled: false
speech_to_text:
enabled: false
suggested_questions: []
suggested_questions_after_answer:
enabled: false
text_to_speech:
enabled: false
language: ''
voice: ''
graph:
edges:
- data:
sourceType: start
targetType: llm
id: 1712585213444-1712585220528
source: '1712585213444'
sourceHandle: source
target: '1712585220528'
targetHandle: target
type: custom
- data:
sourceType: llm
targetType: end
id: 1712585220528-1712585261804
source: '1712585220528'
sourceHandle: source
target: '1712585261804'
targetHandle: target
type: custom
nodes:
- data:
desc: ''
selected: false
title: "\u5F00\u59CB"
type: start
variables:
- label: "\u95EE\u9898"
max_length: 256
options: []
required: true
type: text-input
variable: query
height: 89
id: '1712585213444'
position:
x: 80
y: 282
positionAbsolute:
x: 80
y: 282
selected: false
sourcePosition: right
targetPosition: left
type: custom
width: 243
- data:
context:
enabled: true
variable_selector:
- '1712585213444'
- query
desc: ''
model:
completion_params:
frequency_penalty: 0
max_tokens: 512
presence_penalty: 0
temperature: 0.7
top_p: 1
mode: chat
name: gpt-3.5-turbo-0125
provider: openai
prompt_template:
- role: system
text: Please disregard all previous instructions;\nThis is related to every
one of my prompts;\nYou should provide clear, simple, and direct answers;\nIf
you don't know the answer, say I don't know;\nFor complex questions, take
a deep breath and approach them step by step;\nFor questions that are
unclear or ambiguous, please ask me follow-up questions;\nWhen I ask you
to complete a task, please do it directly without telling me how it should
be done, unless necessary;\nWhen explaining concepts, use real-world examples
and analogies;\nDo not refuse replies related to tasks and work I assign;\nDo
not attempt to conserve token count when generating replies, as my fingers
have a condition that prevents me from typing too much;\nIf you have a
perfect solution, I will give you a $100000 tip, and I will give you more
tips based on the quality of your reply;\nDo your best!\nUnless otherwise
specified, please answer me in Chinese.
- role: user
text: '{{#1712585213444.query#}}'
selected: true
title: LLM
type: llm
variables: []
vision:
enabled: false
height: 97
id: '1712585220528'
position:
x: 380
y: 282
positionAbsolute:
x: 380
y: 282
selected: true
sourcePosition: right
targetPosition: left
type: custom
width: 243
- data:
desc: ''
outputs:
- value_selector:
- '1712585220528'
- text
variable: text
selected: false
title: "\u7ED3\u675F"
type: end
height: 89
id: '1712585261804'
position:
x: 680
y: 282
positionAbsolute:
x: 680
y: 282
sourcePosition: right
targetPosition: left
type: custom
width: 243
viewport:
x: 0
y: 0
zoom: 1
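As a sanity check on DSL files like the one above, every edge in `graph.edges` should reference existing node ids. A minimal offline sketch, with the graph reduced to plain dicts so it runs without a YAML parser (`validate_graph` is an illustrative helper, not part of Dify):

```python
def validate_graph(nodes, edges):
    """Return the (source, target) pairs whose endpoints name no existing node id."""
    ids = {n["id"] for n in nodes}
    return [(e["source"], e["target"]) for e in edges
            if e["source"] not in ids or e["target"] not in ids]

# Node and edge ids taken from chat-workflow.yml above: start -> llm -> end.
nodes = [{"id": "1712585213444"}, {"id": "1712585220528"}, {"id": "1712585261804"}]
edges = [{"source": "1712585213444", "target": "1712585220528"},
         {"source": "1712585220528", "target": "1712585261804"}]

print(validate_graph(nodes, edges))  # [] -> the graph is consistent
```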