diff --git a/en/.gitbook/assets/API Access.png b/en/.gitbook/assets/API Access.png
new file mode 100644
index 0000000..a992eb5
Binary files /dev/null and b/en/.gitbook/assets/API Access.png differ
diff --git a/en/.gitbook/assets/OpenAI API Key.png b/en/.gitbook/assets/OpenAI API Key.png
new file mode 100644
index 0000000..59636e9
Binary files /dev/null and b/en/.gitbook/assets/OpenAI API Key.png differ
diff --git a/en/.gitbook/assets/WechatIMG156.jpg b/en/.gitbook/assets/WechatIMG156.jpg
new file mode 100644
index 0000000..0ae00d8
Binary files /dev/null and b/en/.gitbook/assets/WechatIMG156.jpg differ
diff --git a/en/.gitbook/assets/WechatIMG157.jpg b/en/.gitbook/assets/WechatIMG157.jpg
new file mode 100644
index 0000000..3bf8f26
Binary files /dev/null and b/en/.gitbook/assets/WechatIMG157.jpg differ
diff --git a/en/.gitbook/assets/WechatIMG158.jpg b/en/.gitbook/assets/WechatIMG158.jpg
new file mode 100644
index 0000000..43352b4
Binary files /dev/null and b/en/.gitbook/assets/WechatIMG158.jpg differ
diff --git a/en/.gitbook/assets/WechatIMG160.jpg b/en/.gitbook/assets/WechatIMG160.jpg
new file mode 100644
index 0000000..1acdc82
Binary files /dev/null and b/en/.gitbook/assets/WechatIMG160.jpg differ
diff --git a/en/.gitbook/assets/WechatIMG38.jpg b/en/.gitbook/assets/WechatIMG38.jpg
new file mode 100644
index 0000000..7aa8791
Binary files /dev/null and b/en/.gitbook/assets/WechatIMG38.jpg differ
diff --git a/en/.gitbook/assets/add-new-segment.png b/en/.gitbook/assets/add-new-segment.png
new file mode 100644
index 0000000..6de3b9e
Binary files /dev/null and b/en/.gitbook/assets/add-new-segment.png differ
diff --git a/en/.gitbook/assets/app-variables.png b/en/.gitbook/assets/app-variables.png
new file mode 100644
index 0000000..ebd2964
Binary files /dev/null and b/en/.gitbook/assets/app-variables.png differ
diff --git a/en/.gitbook/assets/create a new App.png b/en/.gitbook/assets/create a new App.png
new file mode 100644
index 0000000..fc0cd42
Binary files /dev/null and b/en/.gitbook/assets/create a new App.png differ
diff --git a/en/.gitbook/assets/create-app.png b/en/.gitbook/assets/create-app.png
new file mode 100644
index 0000000..5a47573
Binary files /dev/null and b/en/.gitbook/assets/create-app.png differ
diff --git a/en/.gitbook/assets/image (1).png b/en/.gitbook/assets/image (1).png
new file mode 100644
index 0000000..0ef5f75
Binary files /dev/null and b/en/.gitbook/assets/image (1).png differ
diff --git a/en/.gitbook/assets/image (10).png b/en/.gitbook/assets/image (10).png
new file mode 100644
index 0000000..14e5b94
Binary files /dev/null and b/en/.gitbook/assets/image (10).png differ
diff --git a/en/.gitbook/assets/image (11).png b/en/.gitbook/assets/image (11).png
new file mode 100644
index 0000000..2ef8774
Binary files /dev/null and b/en/.gitbook/assets/image (11).png differ
diff --git a/en/.gitbook/assets/image (12).png b/en/.gitbook/assets/image (12).png
new file mode 100644
index 0000000..b912edf
Binary files /dev/null and b/en/.gitbook/assets/image (12).png differ
diff --git a/en/.gitbook/assets/image (13).png b/en/.gitbook/assets/image (13).png
new file mode 100644
index 0000000..72ea54e
Binary files /dev/null and b/en/.gitbook/assets/image (13).png differ
diff --git a/en/.gitbook/assets/image (14).png b/en/.gitbook/assets/image (14).png
new file mode 100644
index 0000000..f8b59b3
Binary files /dev/null and b/en/.gitbook/assets/image (14).png differ
diff --git a/en/.gitbook/assets/image (15).png b/en/.gitbook/assets/image (15).png
new file mode 100644
index 0000000..09de148
Binary files /dev/null and b/en/.gitbook/assets/image (15).png differ
diff --git a/en/.gitbook/assets/image (16).png b/en/.gitbook/assets/image (16).png
new file mode 100644
index 0000000..42309ea
Binary files /dev/null and b/en/.gitbook/assets/image (16).png differ
diff --git a/en/.gitbook/assets/image (17).png b/en/.gitbook/assets/image (17).png
new file mode 100644
index 0000000..ec81ad7
Binary files /dev/null and b/en/.gitbook/assets/image (17).png differ
diff --git a/en/.gitbook/assets/image (18).png b/en/.gitbook/assets/image (18).png
new file mode 100644
index 0000000..e5b1aa0
Binary files /dev/null and b/en/.gitbook/assets/image (18).png differ
diff --git a/en/.gitbook/assets/image (19).png b/en/.gitbook/assets/image (19).png
new file mode 100644
index 0000000..381bea4
Binary files /dev/null and b/en/.gitbook/assets/image (19).png differ
diff --git a/en/.gitbook/assets/image (2).png b/en/.gitbook/assets/image (2).png
new file mode 100644
index 0000000..4f9324b
Binary files /dev/null and b/en/.gitbook/assets/image (2).png differ
diff --git a/en/.gitbook/assets/image (20).png b/en/.gitbook/assets/image (20).png
new file mode 100644
index 0000000..1376e80
Binary files /dev/null and b/en/.gitbook/assets/image (20).png differ
diff --git a/en/.gitbook/assets/image (21).png b/en/.gitbook/assets/image (21).png
new file mode 100644
index 0000000..ee7aecf
Binary files /dev/null and b/en/.gitbook/assets/image (21).png differ
diff --git a/en/.gitbook/assets/image (22).png b/en/.gitbook/assets/image (22).png
new file mode 100644
index 0000000..db192aa
Binary files /dev/null and b/en/.gitbook/assets/image (22).png differ
diff --git a/en/.gitbook/assets/image (23).png b/en/.gitbook/assets/image (23).png
new file mode 100644
index 0000000..f173753
Binary files /dev/null and b/en/.gitbook/assets/image (23).png differ
diff --git a/en/.gitbook/assets/image (24).png b/en/.gitbook/assets/image (24).png
new file mode 100644
index 0000000..07c3064
Binary files /dev/null and b/en/.gitbook/assets/image (24).png differ
diff --git a/en/.gitbook/assets/image (25).png b/en/.gitbook/assets/image (25).png
new file mode 100644
index 0000000..71c5637
Binary files /dev/null and b/en/.gitbook/assets/image (25).png differ
diff --git a/en/.gitbook/assets/image (26).png b/en/.gitbook/assets/image (26).png
new file mode 100644
index 0000000..4bcf779
Binary files /dev/null and b/en/.gitbook/assets/image (26).png differ
diff --git a/en/.gitbook/assets/image (27).png b/en/.gitbook/assets/image (27).png
new file mode 100644
index 0000000..f090407
Binary files /dev/null and b/en/.gitbook/assets/image (27).png differ
diff --git a/en/.gitbook/assets/image (28).png b/en/.gitbook/assets/image (28).png
new file mode 100644
index 0000000..ec9d87f
Binary files /dev/null and b/en/.gitbook/assets/image (28).png differ
diff --git a/en/.gitbook/assets/image (29).png b/en/.gitbook/assets/image (29).png
new file mode 100644
index 0000000..0986861
Binary files /dev/null and b/en/.gitbook/assets/image (29).png differ
diff --git a/en/.gitbook/assets/image (3).png b/en/.gitbook/assets/image (3).png
new file mode 100644
index 0000000..46e504d
Binary files /dev/null and b/en/.gitbook/assets/image (3).png differ
diff --git a/en/.gitbook/assets/image (30).png b/en/.gitbook/assets/image (30).png
new file mode 100644
index 0000000..d3aa0df
Binary files /dev/null and b/en/.gitbook/assets/image (30).png differ
diff --git a/en/.gitbook/assets/image (31).png b/en/.gitbook/assets/image (31).png
new file mode 100644
index 0000000..f0a0144
Binary files /dev/null and b/en/.gitbook/assets/image (31).png differ
diff --git a/en/.gitbook/assets/image (32).png b/en/.gitbook/assets/image (32).png
new file mode 100644
index 0000000..2e9429a
Binary files /dev/null and b/en/.gitbook/assets/image (32).png differ
diff --git a/en/.gitbook/assets/image (33).png b/en/.gitbook/assets/image (33).png
new file mode 100644
index 0000000..b99288f
Binary files /dev/null and b/en/.gitbook/assets/image (33).png differ
diff --git a/en/.gitbook/assets/image (34).png b/en/.gitbook/assets/image (34).png
new file mode 100644
index 0000000..5709ac4
Binary files /dev/null and b/en/.gitbook/assets/image (34).png differ
diff --git a/en/.gitbook/assets/image (35).png b/en/.gitbook/assets/image (35).png
new file mode 100644
index 0000000..0b73df0
Binary files /dev/null and b/en/.gitbook/assets/image (35).png differ
diff --git a/en/.gitbook/assets/image (36).png b/en/.gitbook/assets/image (36).png
new file mode 100644
index 0000000..5cf0555
Binary files /dev/null and b/en/.gitbook/assets/image (36).png differ
diff --git a/en/.gitbook/assets/image (37).png b/en/.gitbook/assets/image (37).png
new file mode 100644
index 0000000..3b96b7a
Binary files /dev/null and b/en/.gitbook/assets/image (37).png differ
diff --git a/en/.gitbook/assets/image (38).png b/en/.gitbook/assets/image (38).png
new file mode 100644
index 0000000..430f2a3
Binary files /dev/null and b/en/.gitbook/assets/image (38).png differ
diff --git a/en/.gitbook/assets/image (39).png b/en/.gitbook/assets/image (39).png
new file mode 100644
index 0000000..c4aa379
Binary files /dev/null and b/en/.gitbook/assets/image (39).png differ
diff --git a/en/.gitbook/assets/image (4).png b/en/.gitbook/assets/image (4).png
new file mode 100644
index 0000000..f8b59b3
Binary files /dev/null and b/en/.gitbook/assets/image (4).png differ
diff --git a/en/.gitbook/assets/image (40).png b/en/.gitbook/assets/image (40).png
new file mode 100644
index 0000000..63dc5b2
Binary files /dev/null and b/en/.gitbook/assets/image (40).png differ
diff --git a/en/.gitbook/assets/image (41).png b/en/.gitbook/assets/image (41).png
new file mode 100644
index 0000000..c7c2d65
Binary files /dev/null and b/en/.gitbook/assets/image (41).png differ
diff --git a/en/.gitbook/assets/image (42).png b/en/.gitbook/assets/image (42).png
new file mode 100644
index 0000000..6dc2ffc
Binary files /dev/null and b/en/.gitbook/assets/image (42).png differ
diff --git a/en/.gitbook/assets/image (43).png b/en/.gitbook/assets/image (43).png
new file mode 100644
index 0000000..732fa67
Binary files /dev/null and b/en/.gitbook/assets/image (43).png differ
diff --git a/en/.gitbook/assets/image (44).png b/en/.gitbook/assets/image (44).png
new file mode 100644
index 0000000..ee7aecf
Binary files /dev/null and b/en/.gitbook/assets/image (44).png differ
diff --git a/en/.gitbook/assets/image (45).png b/en/.gitbook/assets/image (45).png
new file mode 100644
index 0000000..3407ccd
Binary files /dev/null and b/en/.gitbook/assets/image (45).png differ
diff --git a/en/.gitbook/assets/image (46).png b/en/.gitbook/assets/image (46).png
new file mode 100644
index 0000000..703e9ed
Binary files /dev/null and b/en/.gitbook/assets/image (46).png differ
diff --git a/en/.gitbook/assets/image (47).png b/en/.gitbook/assets/image (47).png
new file mode 100644
index 0000000..3abdd4e
Binary files /dev/null and b/en/.gitbook/assets/image (47).png differ
diff --git a/en/.gitbook/assets/image (48).png b/en/.gitbook/assets/image (48).png
new file mode 100644
index 0000000..c276a61
Binary files /dev/null and b/en/.gitbook/assets/image (48).png differ
diff --git a/en/.gitbook/assets/image (49).png b/en/.gitbook/assets/image (49).png
new file mode 100644
index 0000000..3abdd4e
Binary files /dev/null and b/en/.gitbook/assets/image (49).png differ
diff --git a/en/.gitbook/assets/image (5).png b/en/.gitbook/assets/image (5).png
new file mode 100644
index 0000000..2c0fa11
Binary files /dev/null and b/en/.gitbook/assets/image (5).png differ
diff --git a/en/.gitbook/assets/image (50).png b/en/.gitbook/assets/image (50).png
new file mode 100644
index 0000000..e3f360e
Binary files /dev/null and b/en/.gitbook/assets/image (50).png differ
diff --git a/en/.gitbook/assets/image (51).png b/en/.gitbook/assets/image (51).png
new file mode 100644
index 0000000..8c544fb
Binary files /dev/null and b/en/.gitbook/assets/image (51).png differ
diff --git a/en/.gitbook/assets/image (52).png b/en/.gitbook/assets/image (52).png
new file mode 100644
index 0000000..9aa1946
Binary files /dev/null and b/en/.gitbook/assets/image (52).png differ
diff --git a/en/.gitbook/assets/image (53).png b/en/.gitbook/assets/image (53).png
new file mode 100644
index 0000000..09c869e
Binary files /dev/null and b/en/.gitbook/assets/image (53).png differ
diff --git a/en/.gitbook/assets/image (54).png b/en/.gitbook/assets/image (54).png
new file mode 100644
index 0000000..4623dee
Binary files /dev/null and b/en/.gitbook/assets/image (54).png differ
diff --git a/en/.gitbook/assets/image (55).png b/en/.gitbook/assets/image (55).png
new file mode 100644
index 0000000..7355239
Binary files /dev/null and b/en/.gitbook/assets/image (55).png differ
diff --git a/en/.gitbook/assets/image (56).png b/en/.gitbook/assets/image (56).png
new file mode 100644
index 0000000..ca5aa4f
Binary files /dev/null and b/en/.gitbook/assets/image (56).png differ
diff --git a/en/.gitbook/assets/image (57).png b/en/.gitbook/assets/image (57).png
new file mode 100644
index 0000000..15e2ac4
Binary files /dev/null and b/en/.gitbook/assets/image (57).png differ
diff --git a/en/.gitbook/assets/image (59).png b/en/.gitbook/assets/image (59).png
new file mode 100644
index 0000000..7b1c492
Binary files /dev/null and b/en/.gitbook/assets/image (59).png differ
diff --git a/en/.gitbook/assets/image (6).png b/en/.gitbook/assets/image (6).png
new file mode 100644
index 0000000..bff16da
Binary files /dev/null and b/en/.gitbook/assets/image (6).png differ
diff --git a/en/.gitbook/assets/image (60).png b/en/.gitbook/assets/image (60).png
new file mode 100644
index 0000000..ba9147c
Binary files /dev/null and b/en/.gitbook/assets/image (60).png differ
diff --git a/en/.gitbook/assets/image (61).png b/en/.gitbook/assets/image (61).png
new file mode 100644
index 0000000..d77c263
Binary files /dev/null and b/en/.gitbook/assets/image (61).png differ
diff --git a/en/.gitbook/assets/image (62).png b/en/.gitbook/assets/image (62).png
new file mode 100644
index 0000000..bc44644
Binary files /dev/null and b/en/.gitbook/assets/image (62).png differ
diff --git a/en/.gitbook/assets/image (63).png b/en/.gitbook/assets/image (63).png
new file mode 100644
index 0000000..2ea7d6f
Binary files /dev/null and b/en/.gitbook/assets/image (63).png differ
diff --git a/en/.gitbook/assets/image (64).png b/en/.gitbook/assets/image (64).png
new file mode 100644
index 0000000..499f03a
Binary files /dev/null and b/en/.gitbook/assets/image (64).png differ
diff --git a/en/.gitbook/assets/image (65).png b/en/.gitbook/assets/image (65).png
new file mode 100644
index 0000000..af2f828
Binary files /dev/null and b/en/.gitbook/assets/image (65).png differ
diff --git a/en/.gitbook/assets/image (67).png b/en/.gitbook/assets/image (67).png
new file mode 100644
index 0000000..a99d97c
Binary files /dev/null and b/en/.gitbook/assets/image (67).png differ
diff --git a/en/.gitbook/assets/image (68).png b/en/.gitbook/assets/image (68).png
new file mode 100644
index 0000000..37d39f7
Binary files /dev/null and b/en/.gitbook/assets/image (68).png differ
diff --git a/en/.gitbook/assets/image (69).png b/en/.gitbook/assets/image (69).png
new file mode 100644
index 0000000..8cb4687
Binary files /dev/null and b/en/.gitbook/assets/image (69).png differ
diff --git a/en/.gitbook/assets/image (7).png b/en/.gitbook/assets/image (7).png
new file mode 100644
index 0000000..2406525
Binary files /dev/null and b/en/.gitbook/assets/image (7).png differ
diff --git a/en/.gitbook/assets/image (70).png b/en/.gitbook/assets/image (70).png
new file mode 100644
index 0000000..378c018
Binary files /dev/null and b/en/.gitbook/assets/image (70).png differ
diff --git a/en/.gitbook/assets/image (72).png b/en/.gitbook/assets/image (72).png
new file mode 100644
index 0000000..0ef5f75
Binary files /dev/null and b/en/.gitbook/assets/image (72).png differ
diff --git a/en/.gitbook/assets/image (73).png b/en/.gitbook/assets/image (73).png
new file mode 100644
index 0000000..47df9c3
Binary files /dev/null and b/en/.gitbook/assets/image (73).png differ
diff --git a/en/.gitbook/assets/image (74).png b/en/.gitbook/assets/image (74).png
new file mode 100644
index 0000000..378c018
Binary files /dev/null and b/en/.gitbook/assets/image (74).png differ
diff --git a/en/.gitbook/assets/image (75).png b/en/.gitbook/assets/image (75).png
new file mode 100644
index 0000000..adc63ca
Binary files /dev/null and b/en/.gitbook/assets/image (75).png differ
diff --git a/en/.gitbook/assets/image (76).png b/en/.gitbook/assets/image (76).png
new file mode 100644
index 0000000..2f6c180
Binary files /dev/null and b/en/.gitbook/assets/image (76).png differ
diff --git a/en/.gitbook/assets/image (77).png b/en/.gitbook/assets/image (77).png
new file mode 100644
index 0000000..16115b0
Binary files /dev/null and b/en/.gitbook/assets/image (77).png differ
diff --git a/en/.gitbook/assets/image (78).png b/en/.gitbook/assets/image (78).png
new file mode 100644
index 0000000..d30d2ab
Binary files /dev/null and b/en/.gitbook/assets/image (78).png differ
diff --git a/en/.gitbook/assets/image (79).png b/en/.gitbook/assets/image (79).png
new file mode 100644
index 0000000..3d4bcca
Binary files /dev/null and b/en/.gitbook/assets/image (79).png differ
diff --git a/en/.gitbook/assets/image (8).png b/en/.gitbook/assets/image (8).png
new file mode 100644
index 0000000..cc5a958
Binary files /dev/null and b/en/.gitbook/assets/image (8).png differ
diff --git a/en/.gitbook/assets/image (80).png b/en/.gitbook/assets/image (80).png
new file mode 100644
index 0000000..12f51f4
Binary files /dev/null and b/en/.gitbook/assets/image (80).png differ
diff --git a/en/.gitbook/assets/image (81).png b/en/.gitbook/assets/image (81).png
new file mode 100644
index 0000000..12277a9
Binary files /dev/null and b/en/.gitbook/assets/image (81).png differ
diff --git a/en/.gitbook/assets/image (82).png b/en/.gitbook/assets/image (82).png
new file mode 100644
index 0000000..db1690f
Binary files /dev/null and b/en/.gitbook/assets/image (82).png differ
diff --git a/en/.gitbook/assets/image (83).png b/en/.gitbook/assets/image (83).png
new file mode 100644
index 0000000..97bdd29
Binary files /dev/null and b/en/.gitbook/assets/image (83).png differ
diff --git a/en/.gitbook/assets/image (84).png b/en/.gitbook/assets/image (84).png
new file mode 100644
index 0000000..d15f46e
Binary files /dev/null and b/en/.gitbook/assets/image (84).png differ
diff --git a/en/.gitbook/assets/image (9).png b/en/.gitbook/assets/image (9).png
new file mode 100644
index 0000000..4623dee
Binary files /dev/null and b/en/.gitbook/assets/image (9).png differ
diff --git a/en/.gitbook/assets/image.png b/en/.gitbook/assets/image.png
new file mode 100644
index 0000000..db1690f
Binary files /dev/null and b/en/.gitbook/assets/image.png differ
diff --git a/en/.gitbook/assets/notion-connect.png b/en/.gitbook/assets/notion-connect.png
new file mode 100644
index 0000000..9d7175b
Binary files /dev/null and b/en/.gitbook/assets/notion-connect.png differ
diff --git a/en/.gitbook/assets/pre-prompt.png b/en/.gitbook/assets/pre-prompt.png
new file mode 100644
index 0000000..aee298a
Binary files /dev/null and b/en/.gitbook/assets/pre-prompt.png differ
diff --git a/en/.gitbook/assets/screenshot-20230802-114025.png b/en/.gitbook/assets/screenshot-20230802-114025.png
new file mode 100644
index 0000000..311d12a
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20230802-114025.png differ
diff --git a/en/.gitbook/assets/screenshot-20230802-141724.png b/en/.gitbook/assets/screenshot-20230802-141724.png
new file mode 100644
index 0000000..715b757
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20230802-141724.png differ
diff --git a/en/.gitbook/assets/screenshot-20230802-141913.png b/en/.gitbook/assets/screenshot-20230802-141913.png
new file mode 100644
index 0000000..0db0705
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20230802-141913.png differ
diff --git a/en/.gitbook/assets/screenshot-20230802-142407.png b/en/.gitbook/assets/screenshot-20230802-142407.png
new file mode 100644
index 0000000..3276d98
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20230802-142407.png differ
diff --git a/en/.gitbook/assets/screenshot-20230802-145326.png b/en/.gitbook/assets/screenshot-20230802-145326.png
new file mode 100644
index 0000000..63250ec
Binary files /dev/null and b/en/.gitbook/assets/screenshot-20230802-145326.png differ
diff --git a/en/.gitbook/assets/segment-list.png b/en/.gitbook/assets/segment-list.png
new file mode 100644
index 0000000..68f8cc8
Binary files /dev/null and b/en/.gitbook/assets/segment-list.png differ
diff --git a/en/.gitbook/assets/share your App.png b/en/.gitbook/assets/share your App.png
new file mode 100644
index 0000000..627a54d
Binary files /dev/null and b/en/.gitbook/assets/share your App.png differ
diff --git a/en/.gitbook/assets/sync-notion-data.png b/en/.gitbook/assets/sync-notion-data.png
new file mode 100644
index 0000000..0f0b035
Binary files /dev/null and b/en/.gitbook/assets/sync-notion-data.png differ
diff --git a/en/README.md b/en/README.md
new file mode 100644
index 0000000..e42f48e
--- /dev/null
+++ b/en/README.md
@@ -0,0 +1,64 @@
+---
+description: >-
+ The name "Dify" is derived from the two words "Define" and "Modify". It
+ represents the vision to help developers continuously improve their AI
+ applications. "Dify" can be understood as "Do it for you"
+---
+
+# Welcome to Dify!
+
+{% hint style="info" %}
+Tips: Dify is currently in the beta preview stage. If there are any inconsistencies between the documentation and the product, please refer to the actual product experience.
+{% endhint %}
+
+If you are amazed and excited by the rapid development of LLM technologies such as GPT-4 and can't wait to put them to practical use, but still have all these confusing questions in your mind:
+
+* How do I "train" a model based on my content?
+* How do I let AI know about things that happened after 2021?
+* How do I prevent AI from babbling nonsense with users?
+* What do fine-tuning and embedding mean?
+
+Well, Dify is just what you need.
+
+**Dify aims to enable developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.**
+
+> "We shape our tools, and then our tools shape us." - Marshall McLuhan
+
+You can quickly build a Web App using Dify, and the generated frontend code can be hosted on Dify. If you want to develop further based on this Web App, you can obtain these templates from GitHub and deploy them anywhere (e.g., Vercel or your server). Alternatively, you can develop your own Web frontend, mobile App, etc., based on the WebAPI, saving you backend development work.
+
+Moreover, the core concept of Dify is to create, configure, and improve your application in a visual interface. Application development based on LLM has a continuous improvement lifecycle, and you may need to make AI give correct answers based on your content, improve AI's accuracy and narrative style, or even download a subtitle from YouTube as context.
+
+This process will involve some logic design, context enhancement, data preparation, and other efforts that may be challenging without the right tools... We call this process LLMOps.
+
+### Next Steps
+
+* Check out these applications created with Dify
+* Quickly create applications in the cloud
+* Install Dify on your server
+
+> "Only a few companies will have the budget to build and manage large language models (LLMs) like GPT-3, but there will be many billion-dollar 'second layer' companies that emerge over the next decade." - Sam Altman
+
+Just as LLM technology is rapidly evolving, Dify is a constantly improving product, and there may be some discrepancies between the content of this document and the actual product. You can share your thoughts with us on [GitHub](https://github.com/langgenius) or Discord.
+
+### Q\&A
+
+**Q: What can I do with Dify?**
+A: Dify is a simple yet powerful natural language programming tool. You can use it to build commercial-grade applications or personal assistants. If you want to develop applications yourself, Dify can also save you the backend work of integrating with OpenAI, and through the visual operation capabilities we are gradually providing, you can continuously improve and train your GPT model.
+
+**Q: How do I use Dify to train my own models?**
+A: A valuable application consists of Prompt Engineering, Context Enhancement, and Fine-tuning. We have created a hybrid programming method that combines prompts with programming languages (similar to a template engine), making it easy to embed long texts or grab the subtitles of a YouTube video entered by the user - these are submitted to the LLM as context for computation. We pay great attention to the operability of applications: the data your users generate while using the App can be analyzed, annotated, and used for continuous training. Without good tool support, these steps can consume a lot of your time.
+
+**Q: What do I need to prepare to create my own application?**
+A: You need to choose a model provider such as OpenAI. Our cloud version has a built-in GPT-4 trial model, or you can fill in your own API key. Then you can create an app based on prompts or your own context.
+
+**Q: Can applications built with Dify maintain conversations?**
+A: Yes, if you create a conversational application, it has built-in session saving capabilities, supported in both generated web apps and APIs.
+
+**Q: What's the difference between LLMOps and MLOps?**
+A: In the past, MLOps meant developers training models from scratch, whereas LLMOps is about developing AI-native applications on top of powerful foundation models such as GPT-4. You can refer to this article.
+
+**Q: What interface languages are provided?**
+A: English and Chinese are currently supported, and you are welcome to contribute language packs.
+
+**Q: What is LangGenius?**
+A: LangGenius was the product name before Dify's official launch. We are still updating all the documentation. The name "Dify" is derived from the two words "Define" and "Modify". It represents the vision to help developers continuously improve their AI applications. "Dify" can be understood as "Do it for you".
diff --git a/en/SUMMARY.md b/en/SUMMARY.md
new file mode 100644
index 0000000..1259622
--- /dev/null
+++ b/en/SUMMARY.md
@@ -0,0 +1,65 @@
+# Table of contents
+
+## Getting Started
+
+* [Welcome to Dify!](README.md)
+* [Cloud](getting-started/cloud.md)
+* [Install (Self-Hosted)](getting-started/install-self-hosted/README.md)
+ * [Docker Compose Deployment](getting-started/install-self-hosted/docker-compose.md)
+ * [Local Source Code Start](getting-started/install-self-hosted/local-source-code.md)
+ * [Start the frontend Docker container separately](getting-started/install-self-hosted/start-the-frontend-docker-container.md)
+ * [Environments](getting-started/install-self-hosted/environments.md)
+* [What is LLMOps?](getting-started/what-is-llmops.md)
+* [FAQ](getting-started/faq/README.md)
+ * [Install FAQ](getting-started/faq/install-faq.md)
+ * [LLMs-use-FAQ](getting-started/faq/llms-use-faq.md)
+ * [API-use-FAQ](getting-started/faq/api-use-faq.md)
+
+## Application
+
+* [Creating An Application](application/creating-an-application.md)
+* [Launch the WebApp quickly](application/launch-webapp.md)
+* [Prompt Engineering](application/prompt-engineering/README.md)
+ * [Text Generator](application/prompt-engineering/text-generation-application.md)
+ * [Conversation Application](application/prompt-engineering/conversation-application.md)
+* [Developing with APIs](application/developing-with-apis.md)
+* [Logs & Annotations](application/logs.md)
+
+## Web Application
+
+* [Overview](web-application/overview.md)
+* [Text Generator](web-application/text-generator.md)
+* [Conversation Application](web-application/conversation-application.md)
+
+## Explore
+
+* [Chat](explore/chat.md)
+
+## Advanced
+
+* [Datasets\&Index](advanced/datasets/README.md)
+ * [Sync from Notion](advanced/datasets/sync-from-notion.md)
+ * [Dataset of QA model](advanced/datasets/dataset-of-qa-model.md)
+* [Plugins](advanced/ai-plugins.md)
+* [Based on WebApp Template](advanced/based-on-frontend-templates.md)
+* [Model Configuration](advanced/model-configuration/README.md)
+ * [Hugging Face](advanced/model-configuration/hugging-face.md)
+ * [Replicate](advanced/model-configuration/replicate.md)
+* [More Integration](advanced/more-integration.md)
+
+## Use Cases
+
+* [How to Build a Notion AI Assistant Based on Your Own Notes?](use-cases/build-an-notion-ai-assistant.md)
+* [Create an AI ChatBot with Business Data in Minutes](use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md)
+* [Create a Midjourney Prompt Bot Without Code in Just a Few Minutes](use-cases/create-a-midjoureny-prompt-bot-with-dify.md)
+
+## Community
+
+* [Support](community/support.md)
+* [Open-Source License](community/open-source.md)
+* [Data Security](community/data-security.md)
+
+## User Agreement
+
+* [Terms of Service](user-agreement/terms-of-service.md)
+* [Privacy Policy](user-agreement/privacy-policy.md)
diff --git a/en/advanced/ai-plugins.md b/en/advanced/ai-plugins.md
new file mode 100644
index 0000000..9f5186a
--- /dev/null
+++ b/en/advanced/ai-plugins.md
@@ -0,0 +1,5 @@
+# Plugins
+
+{% hint style="info" %}
+Plugins are an upcoming feature of Dify. You will be able to incorporate plugins into your App orchestration and access AI applications with plugin capabilities through an API or WebApp. Dify is compatible with the ChatGPT Plugins standard and provides some native plugins.
+{% endhint %}
diff --git a/en/advanced/based-on-frontend-templates.md b/en/advanced/based-on-frontend-templates.md
new file mode 100644
index 0000000..2a412b7
--- /dev/null
+++ b/en/advanced/based-on-frontend-templates.md
@@ -0,0 +1,39 @@
+# Based on WebApp Template
+
+Whether you are developing a new product from scratch or still in the prototype design phase, you can quickly launch an AI site using Dify. At the same time, Dify hopes developers remain fully free to create different forms of frontend applications. For this reason, we provide:
+
+* **SDK** for quick access to the Dify API in various languages
+* **WebApp Template** for WebApp development scaffolding for each type of application
+
+The WebApp Templates are open source under the MIT license. You are free to modify and deploy them to achieve all the capabilities of Dify or as a reference code for implementing your own App.
+
+You can find these Templates on GitHub:
+
+* [Conversational app](https://github.com/langgenius/webapp-conversation)
+* [Text generation app](https://github.com/langgenius/webapp-text-generator)
+
+The fastest way to use the WebApp Template is to click "**Use this template**" on GitHub, which is equivalent to forking a new repository. Then you need to configure the Dify App ID and API Key, like this:
+
+```javascript
+export const APP_ID = ''
+export const API_KEY = ''
+```
+
+More config in `config/index.ts`:
+
+```typescript
+export const APP_INFO: AppInfo = {
+ "title": 'Chat APP',
+ "description": '',
+ "copyright": '',
+ "privacy_policy": '',
+ "default_language": 'zh-Hans'
+}
+
+export const isShowPrompt = true
+export const promptTemplate = ''
+```
+
+Each WebApp Template provides a README file containing deployment instructions. Usually, WebApp Templates contain a lightweight backend service to ensure that developers' API keys are not directly exposed to users.
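To illustrate the idea of such a lightweight backend, here is a minimal sketch of a server-side proxy route. The `https://api.dify.ai/v1/chat-messages` endpoint, payload fields, and handler shape are assumptions made for illustration; consult the template's own code and README for the real routes:

```typescript
// Minimal sketch of a server-side proxy: the browser calls this handler,
// and only this server-side code ever attaches the API key.
// NOTE: the base URL, endpoint path, and payload fields below are
// assumptions for illustration, not the template's verified API surface.
const API_KEY = ''            // set from a server-side secret, never shipped to the browser
const API_BASE = 'https://api.dify.ai/v1'

interface ChatRequest {
  url: string
  init: { method: string; headers: Record<string, string>; body: string }
}

// Build the outgoing request separately so it is easy to inspect and test.
function buildChatRequest(query: string, conversationId = ''): ChatRequest {
  return {
    url: `${API_BASE}/chat-messages`,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        inputs: {},
        query,
        conversation_id: conversationId,
        user: 'web-app-user',
      }),
    },
  }
}

// A framework-agnostic handler: forward the user's query upstream.
async function handleChat(query: string): Promise<Response> {
  const { url, init } = buildChatRequest(query)
  return fetch(url, init)
}
```

Keeping the key in a server-side environment variable (rather than in browser-delivered code) is the whole point of the template's backend layer.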
+
+These WebApp Templates can help you quickly build prototypes of AI applications and use all the capabilities of Dify. If you develop your own applications or new templates based on them, feel free to share them with us.
diff --git a/en/advanced/datasets/README.md b/en/advanced/datasets/README.md
new file mode 100644
index 0000000..446ca3b
--- /dev/null
+++ b/en/advanced/datasets/README.md
@@ -0,0 +1,137 @@
+# Datasets\&Index
+
+Most language models are trained on outdated data and impose length limits on the context of each request. For example, GPT-3.5 was trained on corpora up to 2021 and allows approximately 4k tokens per request. This means that developers who want their AI applications to converse over up-to-date and private context must use techniques like embedding.
+
+Dify's dataset feature allows developers (and even non-technical users) to easily manage datasets and automatically integrate them into AI applications. All you need to do is prepare text content, such as:
+
+* Long text content (TXT, Markdown, JSONL, or even PDF files)
+* Structured data (CSV, Excel, etc.)
+
+Additionally, we are gradually supporting syncing data from various data sources to datasets, including:
+
+* GitHub
+* Databases
+* Webpages
+* ...
+
+{% hint style="info" %}
+**Practice**: If your company wants to build an AI customer service assistant based on existing knowledge bases and product documentation, you can upload the documents to a dataset in Dify and create a conversational application. This might have taken you several weeks in the past and been difficult to maintain continuously.
+{% endhint %}
+
+### Datasets and Documents
+
+In Dify, a dataset is a collection of documents. A dataset can be integrated as a whole into an application to be used as context. Documents can be uploaded by developers or operations staff, or synced from other data sources (typically corresponding to a file unit in the data source).
+
+**Steps to upload a document:**
+
+1. Upload your file, usually a long text file or a spreadsheet
+2. Segment, clean, and preview
+3. Dify submits it to the LLM provider for embedding as vector data and storage
+4. Set metadata for the document
+5. Ready to use in the application!
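
Step 2 above (segmentation) can be sketched as a simple chunker that splits long text into fixed-size, slightly overlapping segments before embedding. This is only an illustrative sketch; the chunk size, overlap, and character-based splitting below are assumptions, not Dify's actual implementation.

```python
def segment_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split long text into overlapping segments for embedding.

    chunk_size and overlap are illustrative defaults, not Dify's settings.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    segments = []
    start = 0
    while start < len(text):
        segments.append(text[start:start + chunk_size])
        # Step back by `overlap` so adjacent segments share some context.
        start += chunk_size - overlap
    return segments

segments = segment_text("A" * 1200, chunk_size=500, overlap=50)
print(len(segments))  # 3 segments for 1200 characters
```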
+
+#### How to write a good dataset description
+
+When multiple datasets are referenced in an application, AI uses the description of the datasets and the user's question to determine which dataset to use to answer the user's question. Therefore, a well-written dataset description can improve the accuracy of AI in selecting datasets.
+
+The key to writing a good dataset description is to clearly describe the content and characteristics of the dataset. **It is recommended that the dataset description begin with this: `Useful only when the question you want to answer is about the following: specific description`**. Here is an example of a real estate dataset description:
+
+> Useful only when the question you want to answer is about the following: global real estate market data from 2010 to 2020. This data includes information such as the average housing price, property sales volume, and housing types for each city. In addition, this dataset also includes some economic indicators such as GDP and unemployment rate, as well as some social indicators such as population and education level. These indicators can help analyze the trends and influencing factors of the real estate market.
+> With this data, we can understand the development trends of the global real estate market, analyze the changes in housing prices in various cities, and understand the impact of economic and social factors on the real estate market.
+
+### Create a dataset
+
+1. Click on datasets in the main navigation bar of Dify. On this page, you can see the existing datasets. Click on "Create Dataset" to enter the creation wizard.
+2. If you have already prepared your files, you can start by uploading the files.
+3. If you haven't prepared your documents yet, you can create an empty dataset first.
+
+### Uploading Documents by File Upload
+
+1. Select the files you want to upload. Batch uploads are supported.
+2. Preview the full text
+3. Perform segmentation and cleaning
+4. Wait for Dify to process the data for you; this step usually consumes tokens in the LLM provider
+
+### Text Preprocessing and Cleaning
+
+Text preprocessing and cleaning refers to Dify automatically segmenting and vectorizing your documents so that users' questions (input) can match relevant paragraphs (Q to P) and generate results.
+
+When uploading a dataset, you need to select an **indexing mode** to specify how data is matched. This affects the accuracy of AI replies.
+
+In **High Quality mode**, OpenAI's embedding API is used for higher accuracy in user queries.
+
+In **Economic mode**, offline vector engines, keyword indexing etc. are used to reduce costs at the expense of lower accuracy.
+
+In **Segmenting in Question & Answer format**, instead of normal "Q to P" (question matches paragraphs), it uses "Q to Q" (question matches question) matching. After segmentation, Q\&A pairs are generated for each passage. When users ask questions, the system finds the most similar question and returns the corresponding passage as the answer. This is more precise because it directly matches the user's question and retrieves the information they need.
+
+> Questions have complete syntax while keywords lack semantics and context. So Q to Q improves clarity and handles similar high-frequency questions better.
+
+
+*In Segmenting in Question & Answer format, the text is summarized into multiple Q&A pairs.*
+
+*The difference between Q to P and Q to Q indexing modes.*
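
A toy sketch of the two matching modes follows, using invented two-dimensional vectors in place of real embedding-model output (real embeddings are high-dimensional; every vector and sentence below is made up purely for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embedding-model output.
paragraphs = {
    "Refunds are processed within 14 days.": [0.9, 0.1],
    "Shipping takes 3-5 business days.": [0.1, 0.9],
}
qa_pairs = {  # generated questions linked to their source paragraphs
    "How long do refunds take?": "Refunds are processed within 14 days.",
    "How long does shipping take?": "Shipping takes 3-5 business days.",
}
question_vectors = {
    "How long do refunds take?": [0.85, 0.2],
    "How long does shipping take?": [0.15, 0.95],
}

user_q_vec = [0.8, 0.3]  # the user asks something refund-related

# Q to P: match the user's question directly against paragraph vectors.
q2p = max(paragraphs, key=lambda p: cosine(user_q_vec, paragraphs[p]))

# Q to Q: match against stored question vectors, then return the linked paragraph.
best_q = max(question_vectors, key=lambda q: cosine(user_q_vec, question_vectors[q]))
q2q = qa_pairs[best_q]
```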
+
+### Modify Documents
+
+For technical reasons, if developers make the following changes to documents, Dify will create a new document for you; the old document will be archived and deactivated:
+
+1. Adjust segmentation and cleaning settings
+2. Re-upload the file
+
+Dify supports customizing the segmented and cleaned text by adding, deleting, and editing paragraphs. You can dynamically adjust your segmentation to make your dataset more accurate. Click **Document --> Paragraph --> Edit** in the dataset to modify paragraph content, or click **Document --> Paragraph --> Add new segment** to manually add a new paragraph.
+
+
+
+
+*Add new segment*
+
+### Maintain Datasets via API
+
+TODO
+
+### Dataset Settings
+
+Click **Settings** in the left navigation of the dataset. You can change the following settings for the dataset:
+
+* Dataset **name** for identifying a dataset
+* Dataset **description** to allow AI to better use the dataset appropriately. If the description is empty, Dify's automatic indexing strategy will be used.
+* **Permissions** can be set to Only Me or All Team Members. Those without permissions cannot view and edit the dataset.
+* **Indexing mode**: In High Quality mode, OpenAI's embedding interface will be called to provide higher accuracy when users query. In Economic mode, offline vector engines, keyword indexing, etc. will be used; this does not consume tokens but comes at the expense of accuracy.
+
+Note: Upgrading the indexing mode from Economic to High Quality will incur additional token consumption. Downgrading from High Quality to Economic will not consume tokens.
+
+### Integrate into Applications
+
+Once the dataset is ready, it needs to be integrated into the application. When the application processes user queries, it will automatically use the associated dataset content as reference context.
+
+1. Go to the application - Prompt Arrangement page
+2. In the context options, select the dataset you want to integrate
+3. Save the settings to complete the integration
+
+### Q\&A
+
+**Q: What should I do if the PDF upload is garbled?**
+
+A: If your PDF parsing appears garbled under certain formatted contents, you could consider converting the PDF to Markdown format, which currently offers higher accuracy, or you could reduce the use of images, tables, and other formatted content in the PDF. We are researching ways to optimize the experience of using PDFs.
+
+**Q: How does the consumption mechanism of context work?**
+A: With a dataset added, each query will consume segmented content (currently embedding two segments) + question + prompt + chat history combined. However, it will not exceed model limitations, such as 4096.
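
The budget arithmetic above can be sketched as a simple check. The token counts and the reserved answer buffer below are illustrative placeholders (a real implementation would count tokens with the model's tokenizer); 4096 matches the limit mentioned in the answer.

```python
MODEL_LIMIT = 4096  # example context window, as in the answer above

def fits_in_context(prompt_tokens, context_tokens, history_tokens,
                    question_tokens, reserved_for_answer=512):
    """Check whether a request stays under the model's context limit.

    reserved_for_answer is an illustrative buffer kept free for the completion.
    """
    used = prompt_tokens + context_tokens + history_tokens + question_tokens
    return used + reserved_for_answer <= MODEL_LIMIT

# Prompt (200) + two embedded segments (2 * 800) + history (1000) + question (50)
print(fits_in_context(200, 1600, 1000, 50))  # True: 2850 + 512 <= 4096
```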
+
+**Q: Where does the embedded dataset appear when asking questions?**
+A: It will be embedded as context before the question.
+
+**Q: Is there any priority between the added dataset and OpenAI's answers?**
+A: The dataset serves as context and is used together with questions for LLM to understand and answer; there is no priority relationship.
+
+**Q: Why do I get hits in testing but not in the application?**
+A: You can troubleshoot issues by following these steps:
+
+1. Make sure you have added text on the prompt page and clicked on the save button in the top right corner.
+2. Test whether it responds normally in the prompt debugging interface.
+3. Try again in a new WebApp session window.
+4. Optimize your data format and quality. For practice reference, visit [https://github.com/langgenius/dify/issues/90](https://github.com/langgenius/dify/issues/90)
+ If none of these steps solve your problem, please join our community for help.
+
+**Q: Will APIs related to hit testing be opened up so that dify can access knowledge bases and implement dialogue generation using custom models?**
+A: We plan to open up Webhooks later on; however, there are no current plans for this feature. You can achieve your requirements by connecting to any vector database.
+
+**Q: How do I add multiple datasets?**
+A: Due to short-term performance considerations, we currently only support one dataset. If you have multiple sets of data, you can upload them within the same dataset for use.
diff --git a/en/advanced/datasets/dataset-of-qa-model.md b/en/advanced/datasets/dataset-of-qa-model.md
new file mode 100644
index 0000000..2968110
--- /dev/null
+++ b/en/advanced/datasets/dataset-of-qa-model.md
@@ -0,0 +1,21 @@
+---
+description: QA model dataset
+---
+
+# Dataset of QA model
+
+**The Q\&A paragraph mode feature is different from the normal "Q2P" (question matches paragraph content) matching mode. The "Q2Q" (question matches question) matching mode means that when a user asks a question, the system will find the most similar question to it, and then return the corresponding paragraph as the answer. This method is more precise, because it directly matches the user's question, and can more accurately obtain the information that the user really needs.**
+
+
+
+1. QA dataset is created by summarizing each paragraph in a document and generating QA pairs from the summaries. This process summarizes the information in each paragraph and breaks it down to extract valuable insights for the user.
+
+
+
+
+
+
+2. At the same time, we support custom additions and modifications to the segmentation. Users can dynamically adjust their own segmentation information to make your dataset more precise.
+
+3. The question text has the complete syntactic structure of natural language, rather than just keywords as in a document retrieval task. Therefore, Q to Q matching makes the semantics and matching clearer, and better handles high-frequency, high-similarity question scenarios.
diff --git a/en/advanced/datasets/sync-from-notion.md b/en/advanced/datasets/sync-from-notion.md
new file mode 100644
index 0000000..a28a0f4
--- /dev/null
+++ b/en/advanced/datasets/sync-from-notion.md
@@ -0,0 +1,26 @@
+# Sync from Notion
+
+Dify dataset supports importing from Notion and setting up **Sync** so that data is automatically synced to Dify after updates in Notion.
+
+### Authorization verification
+
+1. When creating a dataset, select the data source, click **Sync from Notion--Go to connect**, and complete the authorization verification according to the prompt.
+2. You can also click **Settings -- Data Sources -- Add a Data Source**, then click **Connect** on the Notion source to complete authorization verification.
+
+
+*Connect Notion*
+
+### Import Notion data
+
+After completing authorization verification, go to the dataset creation page, click **Sync from Notion**, and select the required authorization page to import.
+
+### Segmentation and cleaning
+
+Next, select your **segmentation settings** and **indexing method**, then **save and process**. Wait for Dify to process the data; this step usually consumes tokens from the LLM provider. Dify supports importing not only ordinary pages but also database-type pages, whose page properties are summarized and saved.
+
+_**Note: Images and files are not currently supported for import. Table data will be converted to text.**_
+
+### Sync Notion data
+
+If your Notion content has been modified, you can click **Sync** directly on the Dify dataset document list page to sync the data with one click (note that each click synchronizes the current content). This step consumes tokens.
+
+
+*Sync Notion data*
diff --git a/en/advanced/model-configuration/README.md b/en/advanced/model-configuration/README.md
new file mode 100644
index 0000000..6b383aa
--- /dev/null
+++ b/en/advanced/model-configuration/README.md
@@ -0,0 +1,83 @@
+# Model Configuration
+
+Dify currently supports major model providers such as OpenAI (the GPT series). Here are the model providers we currently support:
+
+* OpenAI
+* Azure OpenAI Service
+* Anthropic
+* Hugging Face Hub
+* Replicate
+* iFLYTEK SPARK
+* WENXINYIYAN
+* TONGYI
+* MINIMAX
+* ChatGLM
+
+Based on technology developments and user needs, we will continue adding support for more LLM providers over time.
+
+### Trial Hosted Models
+
+We provide trial quotas for different models for Dify cloud service users. Please set up your own model provider before the trial quota runs out, otherwise it may impact normal use of your application.
+
+* **OpenAI hosted model trial:** We provide 500 free call credits for you to try out GPT3.5-turbo, GPT3.5-turbo-16k, text-davinci-003 models.
+* **Anthropic Claude hosted model trial:** We provide 1000 free call credits for you to try out Claude-instant-1, Claude2 models.
+
+### Model type
+
+In Dify, we divide models into the following 3 categories according to their usage scenarios:
+
+1. System Reasoning Model: used by created applications. Smart chat, dialogue name generation, and next-question suggestions also use this type of model.
+2. Embedding Model: used in datasets to embed segmented documents, and in dataset-backed applications to embed user questions.
+3. Speech-to-Text Model: used in conversational applications to convert speech to text.
+
+### Set default model
+
+When Dify needs a model, it will select the set default model according to the usage scenario. Set the default model in `Settings > Model Provider`.
+
+
+
+### Access model settings
+
+Set the model to be imported in Dify's `Settings > Model Provider`.
+
+
+There are two types of model providers:
+
+1. Providers of their own models, such as OpenAI and Anthropic, which offer models they developed themselves.
+2. Hosting providers, such as Hugging Face and Replicate, which offer third-party models.
+
+The two types of providers are accessed slightly differently in Dify.
+
+
+
+### Model providers offering their own models
+
+After adding a provider of this type, Dify automatically imports all of the models under that provider.
+
+To access a model provider, set its API key in Dify. You can obtain the API keys at the following addresses:
+
+* OpenAI: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
+* Anthropic:[https://console.anthropic.com/account/keys](https://console.anthropic.com/account/keys)
+* iFLYTEK SPARK:[https://www.xfyun.cn/solutions/xinghuoAPI](https://www.xfyun.cn/solutions/xinghuoAPI)
+* MINIMAX:[https://api.minimax.chat/user-center/basic-information/interface-key](https://api.minimax.chat/user-center/basic-information/interface-key)
+* WENXINYIYAN:[https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application](https://console.bce.baidu.com/qianfan/ais/console/applicationConsole/application)
+* TONGYI:[https://dashscope.console.aliyun.com/api-key_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k](https://dashscope.console.aliyun.com/api-key_management?spm=a2c4g.11186623.0.0.3bbc424dxZms9k)
+* ChatGLM: This model provider does not provide official services. But self-deployment is supported ([deployment docs](https://github.com/THUDM/ChatGLM2-6B/blob/main/README_EN.md#environment-setup)).
+
+{% hint style="info" %}
+Dify uses [PKCS1_OAEP](https://pycryptodome.readthedocs.io/en/latest/src/cipher/oaep.html) to encrypt and store user-managed API keys, and each tenant uses an independent key pair for encryption to ensure that your API keys are not leaked.
+{% endhint %}
+
+### Model providers offering hosted models
+
+Hosting-type providers offer many third-party models, and each model must be added to Dify individually. The specific steps are as follows:
+
+* [Hugging Face](hugging-face.md).
+* [Replicate](replicate.md).
+
+### Use model
+
+Once you have configured your models, you can use them in your application:
+
+
diff --git a/en/advanced/model-configuration/hugging-face.md b/en/advanced/model-configuration/hugging-face.md
new file mode 100644
index 0000000..8edd10d
--- /dev/null
+++ b/en/advanced/model-configuration/hugging-face.md
@@ -0,0 +1,64 @@
+# Hugging Face
+
+Dify supports models of the [text-generation](https://huggingface.co/models?pipeline_tag=text-generation\&sort=trending) and [text2text-generation](https://huggingface.co/models?pipeline_tag=text2text-generation\&sort=trending) types on Hugging Face. The specific steps are as follows:
+
+1. You need a Hugging Face account ([registered address](https://huggingface.co/join)).
+2. Set the API key of Hugging Face ([obtain address](https://huggingface.co/settings/tokens)).
+3. Select a model on the [Hugging Face model list page](https://huggingface.co/models?pipeline_tag=text-generation\&sort=trending), filtering by the [text-generation](https://huggingface.co/models?pipeline_tag=text-generation\&sort=trending) and [text2text-generation](https://huggingface.co/models?pipeline_tag=text2text-generation\&sort=trending) types.
+
+
+
+Dify supports accessing models on Hugging Face in two ways:
+
+1. Hosted Inference API: uses models officially deployed by Hugging Face, free of charge, but only a small number of models support this approach.
+2. Inference Endpoint: deploys the model on resources (such as AWS) provisioned through Hugging Face, and requires payment.
+
+### Method 1: Hosted Inference API
+
+#### 1 Select a model
+
+The Hosted Inference API is supported only when a "Hosted inference API" section appears on the right side of the model details page. As shown in the figure below:
+
+
+
+On the model details page, you can get the name of the model.
+
+
+
+#### 2 Use the model in Dify
+
+Select Hosted Inference API for Endpoint Type in `Settings > Model Provider > Hugging Face`. As shown below:
+
+
+
+API Token is the API Key set at the beginning of the article. The model name is the model name obtained in the previous step.
+
+
+
+### Method 2: Inference Endpoint
+
+#### 1 Select the model to deploy
+
+The Inference Endpoint is only supported for models that show an "Inference Endpoints" option under the Deploy button on the right side of the model details page. As shown below:
+
+
+
+
+
+#### 2 Deploy the model
+
+Click the model's Deploy button and select the Inference Endpoints option. If you have not added a payment card before, you will be prompted to add one; just follow the process. After the card is added, the following interface will appear: adjust the configuration as needed, and click Create Endpoint in the lower left corner to create the Inference Endpoint.
+
+
+
+After the model is deployed, you can see the Endpoint URL.
+
+
+
+#### 3 Use the model in Dify
+
+Select Inference Endpoints for Endpoint Type in `Settings > Model Provider > Hugging Face`. As shown below:
+
+
+
+API Token is the API Key set at the beginning of the article. The name of the model is arbitrary. Endpoint URL is the Endpoint URL obtained after successfully deploying the model in the previous step.
diff --git a/en/advanced/model-configuration/replicate.md b/en/advanced/model-configuration/replicate.md
new file mode 100644
index 0000000..2af1e9c
--- /dev/null
+++ b/en/advanced/model-configuration/replicate.md
@@ -0,0 +1,18 @@
+# Replicate
+
+Dify supports accessing [Language models](https://replicate.com/collections/language-models) and [Embedding models](https://replicate.com/collections/embedding-models) on Replicate. Language models correspond to Dify's reasoning model, and Embedding models correspond to Dify's Embedding model.
+
+
+
+Specific steps are as follows:
+
+1. You need to have a Replicate account ([registered address](https://replicate.com/signin?next=/docs)).
+2. Get API Key ([get address](https://replicate.com/signin?next=/docs)).
+3. Pick a model. Select the model under [Language models](https://replicate.com/collections/language-models) and [Embedding models](https://replicate.com/collections/embedding-models) .
+4. Add models in Dify's `Settings > Model Provider > Replicate`.
+
+
+
+The API key is the API Key set in step 2. Model Name and Model Version can be found on the model details page:
+
+
diff --git a/en/advanced/more-integration.md b/en/advanced/more-integration.md
new file mode 100644
index 0000000..d89ce30
--- /dev/null
+++ b/en/advanced/more-integration.md
@@ -0,0 +1,3 @@
+# More Integration
+
+TODO
diff --git a/en/application/creating-an-application.md b/en/application/creating-an-application.md
new file mode 100644
index 0000000..e8214f2
--- /dev/null
+++ b/en/application/creating-an-application.md
@@ -0,0 +1,53 @@
+# Creating An Application
+
+In Dify, an "application" refers to a real-world scenario application built on large language models such as GPT. By creating an application, you can apply intelligent AI technology to specific needs. It encompasses both the engineering paradigms for developing AI applications and the specific deliverables.
+
+**In short, an application delivers to developers:**
+
+* A user-friendly, encapsulated LLM API that can be called directly by backend or frontend applications with token authentication
+* A ready-to-use, beautiful, and hosted Web App that you can develop further using the Web App templates
+* A set of easy-to-use interfaces for Prompt Engineering, context management, log analysis, and annotation
+
+You can choose one or all of them to support your AI application development.
+
+### Application Types
+
+Dify offers two types of applications: text generation and conversational. More application paradigms may appear in the future, and the ultimate goal of Dify is to cover more than 80% of typical LLM application scenarios. The differences between text generation and conversational applications are shown in the table below:
+
+
+| | Text Generation | Conversational |
+| --- | --- | --- |
+| WebApp Interface | Form + Results | Chat style |
+| API Endpoint | completion-messages | chat-messages |
+| Interaction Mode | One question and one answer | Multi-turn dialogue |
+| Streaming results return | Supported | Supported |
+| Context Preservation | Current time | Continuous |
+| User input form | Supported | Supported |
+| Datasets & Plugins | Supported | Supported |
+| AI opening remarks | Not supported | Supported |
+| Scenario example | Translation, judgment, indexing | Chat or everything |
+
+### Steps to Create an Application
+
+After logging in to Dify as an administrator, go to the application page in the main navigation, click "Create New Application", then choose a conversational or text generation application and give it a name (modifiable later).
+
+
+*Create a new App*
+
+We provide some templates in the application creation interface, and you can click to create from a template in the popup when creating an application. These templates will provide inspiration and reference for the application you want to develop.
+
+### Creating from a Configuration File
+
+If you have obtained a template from the community or someone else, you can click to create from an application configuration file. Uploading the file will load most of the settings from the other party's application (but not the datasets at present).
+
+### Your Application
+
+If you are using it for the first time, you will be prompted to enter your OpenAI API key. A properly functioning LLM key is a prerequisite for using Dify. If you don't have one yet, please apply for one.
+
+
+*Enter your OpenAI API Key*
+
+After creating an application or selecting an existing one, you will arrive at an application overview page showing the application's profile. You can directly access your WebApp or check the API status here, as well as enable or disable them.
+
+Statistics show the usage, active user count, and LLM call consumption of the application over a period of time—enabling you to continually improve the cost-effectiveness of application operations. We will gradually provide more useful visualization capabilities; please let us know what you want.
+
+1. Total Messages: Daily AI interactions count; prompt engineering/debugging excluded.
+2. Active Users: Unique users engaging in Q\&A with AI; prompt engineering/debugging excluded.
+3. Avg. Session Interactions: Continuous user-AI communication count; for conversation-based apps.
+4. User Satisfaction Rate: Likes per 1,000 messages; indicates satisfaction with AI answers.
+5. Avg. Response Time: Time (ms) for AI to process/respond; for text-based apps.
+6. Token Usage: Daily language model token usage; for cost control.
+
+### What's Next
+
+* Try your WebApp
+* Take a tour of the Configuration, Development, and Logs pages on the left
+* Try configuring an application using a reference case
+* If you have the ability to develop frontend applications, please consult the API documentation
diff --git a/en/application/developing-with-apis.md b/en/application/developing-with-apis.md
new file mode 100644
index 0000000..e9b3561
--- /dev/null
+++ b/en/application/developing-with-apis.md
@@ -0,0 +1,69 @@
+# Developing with APIs
+
+Dify offers a "Backend-as-a-Service" API, providing numerous benefits to AI application developers. This approach enables developers to access the powerful capabilities of large language models (LLMs) directly in frontend applications without the complexities of backend architecture and deployment processes.
+
+### Benefits of using Dify API
+
+* Allow frontend apps to securely access LLM capabilities without backend development
+* Design applications visually with real-time updates across all clients
+* Well-encapsulated original LLM APIs
+* Effortlessly switch between LLM providers and centrally manage API keys
+* Operate applications visually, including log analysis, annotation, and user activity observation
+* Continuously provide more tools, plugins, and datasets
+
+### How to use
+
+Choose an application, and find the API Access in the left-side navigation of the Apps section. On this page, you can view the API documentation provided by Dify and manage credentials for accessing the API.
+
+
+*API document*
+
+You can create multiple access credentials for an application to deliver to different users or developers. This means that API users can use the AI capabilities provided by the application developer, but the underlying Prompt engineering, datasets, and tool capabilities are encapsulated.
+
+{% hint style="warning" %}
+In best practices, API keys should be called through the backend, rather than being directly exposed in plaintext within frontend code or requests. This helps prevent your application from being abused or attacked.
+{% endhint %}
+
+For example, if you're a developer in a consulting company, you can offer AI capabilities based on the company's private database to end-users or developers, without exposing your data and AI logic design. This ensures a secure and sustainable service delivery that meets business objectives.
+
+### Text-generation application
+
+These applications are used to generate high-quality text, such as articles, summaries, translations, etc., by calling the completion-messages API and sending user input to obtain generated text results. The model parameters and prompt templates used for generating text depend on the developer's settings in the Dify Prompt Arrangement page.
+
+You can find the API documentation and example requests for this application in **Applications -> Access API**.
+
+For example, here is a sample API call for text generation:
+
+```bash
+curl --location --request POST 'https://api.dify.dev/v1/completion-messages' \
+--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "inputs": {},
+ "query": "Hi",
+ "response_mode": "streaming",
+ "user": "abc-123"
+}'
+```
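
The same request can also be issued from code. Below is a hedged Python sketch using only the standard library; the URL and payload mirror the curl example above, and the SSE `data:` line handling is an assumption about the streaming format rather than a documented guarantee.

```python
import json
import urllib.request

API_BASE = "https://api.dify.dev/v1"  # base URL from the curl example above

def build_completion_request(api_key: str, query: str, user: str):
    """Assemble the URL, headers, and body for a completion-messages call."""
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = {"inputs": {}, "query": query, "response_mode": "streaming", "user": user}
    return API_BASE + "/completion-messages", headers, body

url, headers, body = build_completion_request("ENTER-YOUR-SECRET-KEY", "Hi", "abc-123")

# Only attempt the network call once a real key has been filled in.
if headers["Authorization"] != "Bearer ENTER-YOUR-SECRET-KEY":
    req = urllib.request.Request(url, data=json.dumps(body).encode("utf-8"),
                                 headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        for raw_line in resp:
            line = raw_line.decode("utf-8").strip()
            if line.startswith("data: "):  # assumed SSE chunk format
                print(line[len("data: "):])
```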
+
+
+
+### Conversational applications
+
+Suitable for most scenarios, conversational applications engage in continuous dialogue with users in a question-and-answer format. To start a conversation, call the chat-messages API and maintain the session by continuously passing in the returned conversation_id.
+
+You can find the API documentation and example requests for this application in **Applications -> Access API**.
+
+For example, here is a sample API call for chat messages:
+
+```bash
+curl --location --request POST 'https://api.dify.dev/v1/chat-messages' \
+--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "inputs": {},
+ "query": "eh",
+ "response_mode": "streaming",
+ "conversation_id": "1c7e55fb-1ba2-4e10-81b5-30addcea2276",
+ "user": "abc-123"
+}'
+```
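
Multi-turn state hinges on `conversation_id`: omit it on the first turn, then pass back the id the API returned. The helper below is an illustrative sketch of the payload logic only (field names follow the curl example above); it performs no HTTP call.

```python
def build_chat_payload(query: str, user: str, conversation_id: str = "") -> dict:
    """Build a chat-messages request body; omit conversation_id on the first turn."""
    payload = {
        "inputs": {},
        "query": query,
        "response_mode": "streaming",
        "user": user,
    }
    if conversation_id:
        # Echo the id returned by the previous turn to continue the session.
        payload["conversation_id"] = conversation_id
    return payload

first_turn = build_chat_payload("Hi", "abc-123")
follow_up = build_chat_payload("Tell me more", "abc-123",
                               conversation_id="1c7e55fb-1ba2-4e10-81b5-30addcea2276")
```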
diff --git a/en/application/launch-webapp.md b/en/application/launch-webapp.md
new file mode 100644
index 0000000..3a9e342
--- /dev/null
+++ b/en/application/launch-webapp.md
@@ -0,0 +1,46 @@
+# Launch the WebApp quickly
+
+One of the benefits of creating AI applications with Dify is that you can launch a user-friendly Web application in just a few minutes, based on your Prompt orchestration.
+
+* If you are using the self-hosted open-source version, the application will run on your server.
+* If you are using the cloud version, the application will be hosted on udify.app.
+
+### Launch WebApp
+
+In the application overview page, you can find a card for the AI site (WebApp). Simply enable WebApp access to get a shareable link for your users.
+
+
+*Share your WebApp*
+
+We provide a sleek WebApp interface for both of the following applications:
+
+* Text Generation (go to preview)
+* Conversational (go to preview)
+
+### Configure your WebApp
+
+Click the settings button on the WebApp card to configure some options for the AI site. These will be visible to the end users:
+
+* Icon
+* Name
+* Application Description
+* Interface Language
+* Copyright Information
+* Privacy Policy Link
+
+### Embed your WebApp
+
+Dify supports embedding your AI application into your business website. With this capability, you can create AI customer service and business knowledge Q\&A applications with business data on your official website within minutes. Click the embed button on the WebApp card, copy the embed code, and paste it into the desired location on your website.
+
+* For iframe tag:
+
+  Copy the iframe code and paste it into an HTML container element (such as a `<div>` or `<section>`) on your website where you want the AI application displayed.
+
+* For script tag:
+
+  Copy the script code and paste it into the `<head>` or `<body>` tags of your website.
+
+
+
+For example, if you paste the script code into the `<head>` section of your official website, you will get an AI chatbot on your website:
+
+
diff --git a/en/application/logs.md b/en/application/logs.md
new file mode 100644
index 0000000..1d1cd53
--- /dev/null
+++ b/en/application/logs.md
@@ -0,0 +1,34 @@
+# Logs & Annotations
+
+{% hint style="warning" %}
+Please ensure that your application complies with local regulations when collecting user data. The common practice is to publish a privacy policy and obtain user consent.
+{% endhint %}
+
+The **Logs** feature is designed to observe and annotate the performance of Dify applications. Dify records logs for all interactions with the application, whether through the WebApp or API. If you are a Prompt Engineer or LLM operator, it will provide you with a visual experience of LLM application operations.
+
+### Using the Logs Console
+
+You can find the Logs in the left navigation of the application. This page typically displays:
+
+* Interaction records between users and AI within the selected timeframe
+* The results of user input and AI output, which for conversational applications are usually a series of message flows
+* Ratings from users and operators, as well as improvement annotations from operators
+
+The logs currently do not include interaction records from the Prompt debugging process.
+
+### Improvement Annotations
+
+{% hint style="info" %}
+These annotations will be used for model fine-tuning in future versions of Dify to improve model accuracy and response style. The current preview version only supports annotations.
+{% endhint %}
+
+\[Image]
+
+Clicking on a log entry will open the log details panel on the right side of the interface. In this panel, operators can annotate an interaction:
+
+* Give a thumbs up for well-performing messages
+* Give a thumbs down for poorly-performing messages
+* Provide an improved response, representing the text you expect the AI to reply with
+
+Please note that if multiple administrators in the team annotate the same log entry, the last annotation will overwrite the previous ones.
+
diff --git a/en/application/prompt-engineering/README.md b/en/application/prompt-engineering/README.md
new file mode 100644
index 0000000..da564f6
--- /dev/null
+++ b/en/application/prompt-engineering/README.md
@@ -0,0 +1,40 @@
+---
+description: >-
+ Master the use of Dify for orchestrating applications and practicing Prompt
+ Engineering, and build high-value AI applications with the two built-in
+ application types.
+---
+
+# Prompt Engineering
+
+The core concept of Dify is the declarative definition of AI applications. Everything, including Prompts, context, and plugins, can be described in a YAML file (which is why it is called Dify). The result is a single API or an out-of-the-box WebApp.
+
+At the same time, Dify provides an easy-to-use Prompt orchestration interface where developers can visually orchestrate various application features based on Prompts. Doesn't it sound simple?
+
+For both simple and complex AI applications, good Prompts can effectively improve the quality of model output, reduce error rates, and meet the needs of specific scenarios. Dify currently provides two common application forms: conversational and text generator. This section will guide you through visually orchestrating AI applications.
+
+### Application Orchestration Steps
+
+1. Determine application scenarios and functional requirements
+2. Design and test Prompts and model parameters
+3. Orchestrate Prompts
+4. Publish the application
+5. Observe and continuously iterate
+
+### Hands-on Practice
+
+TODO
+
+### The Differences between Application Types
+
+Text generation and conversation applications in Dify differ slightly in prompt orchestration. Conversation applications must incorporate the "conversation lifecycle" to meet more complex user scenarios and context management needs.
+
+Prompt Engineering has developed into a field with tremendous potential, worthy of continuous exploration. Please continue reading to learn about the orchestration guidelines for both types of applications.
+
+### Extended Reading
+
+1. [Learn Prompting](https://learnprompting.org/zh-Hans/)
+2. [ChatGPT Prompt Engineering for Developers](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/)
+3. [Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts)
+
+
diff --git a/en/application/prompt-engineering/conversation-application.md b/en/application/prompt-engineering/conversation-application.md
new file mode 100644
index 0000000..117a5ab
--- /dev/null
+++ b/en/application/prompt-engineering/conversation-application.md
@@ -0,0 +1,89 @@
+# Conversation Application
+
+Conversation applications use a one-question-one-answer mode to have a continuous conversation with the user.
+
+### Applicable scenarios
+
+Conversation applications can be used in fields such as customer service, online education, healthcare, financial services, etc. These applications can help organizations improve work efficiency, reduce labor costs, and provide a better user experience.
+
+### How to compose
+
+Conversation applications support: prompts, variables, context, opening remarks, and suggestions for the next question.
+
+Here, we use an interviewer application as an example to introduce how to compose a conversation application.
+
+#### Step 1 Create an application
+
+Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select **"Chat App"** as the application type.
+
+
+\[Image: Create Application]
+
+#### Step 2: Compose the Application
+
+After the application is successfully created, it will automatically redirect to the application overview page. Click on the left-hand menu: “**Prompt Eng.**” to compose the application.
+
+
+
+
+
+**2.1 Fill in Prompts**
+
+Prompts are used to give a series of instructions and constraints to the AI response. Form variables can be inserted, such as `{{input}}`. The value of variables in the prompts will be replaced with the value filled in by the user.
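
Conceptually, variable substitution works like simple templating: each `{{variable}}` placeholder is replaced with the value the user fills in. A minimal shell sketch for illustration only (this is not Dify's implementation):

```shell
# A prompt template with a {{jobName}} variable, and a user-supplied value
prompt='I want you to be the interviewer for the {{jobName}} position.'
jobName='Python Developer'
# Replace every {{jobName}} placeholder with the user's value
echo "$prompt" | sed "s/{{jobName}}/$jobName/g"
# prints: I want you to be the interviewer for the Python Developer position.
```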
+
+The prompt we are filling in here is:
+
+> I want you to be the interviewer for the \{{jobName\}} position. I will be the candidate, and you will ask me interview questions for the position of \{{jobName\}} developer. I hope you will only answer as the interviewer. Don't write all the questions at once. I wish for you to only interview me. Ask me questions and wait for my answers. Don't write explanations. Ask me one by one like an interviewer and wait for my answer.
+>
+> When I am ready, you can start asking questions.
+
+
+
+
+For a better experience, we will add an opening dialogue: `"Hello, {{name}}. I'm your interviewer, Bob. Are you ready?"`
+
+To add the opening dialogue, click the "Add Feature" button in the upper left corner and enable the "Conversation Opener" feature:
+
+
+
+And then edit the opening remarks:
+
+
+
+
+**2.2 Adding Context**
+
+If an application wants to generate content based on private contextual conversations, it can use our [dataset](../../advanced/datasets/) feature. Click the "Add" button in the context to add a dataset.
+
+
+
+
+**2.3 Debugging**
+
+We fill in the user input on the right side and debug the input content.
+
+
+If the results are not satisfactory, you can adjust the prompts and model parameters. Click on the model name in the upper right corner to set the parameters of the model:
+
+
+We support the GPT-4 model.
+
+
+
+**2.4 Publish**
+
+After debugging the application, click the **"Publish"** button in the upper right corner to save the current settings.
+
+### **Share Application**
+
+On the overview page, you can find the sharing address of the application. Click the "Preview" button to preview the shared application. Click the "Share" button to get the sharing link address. Click the "Settings" button to set the shared application information.
+
+
+
+
+
+If you want to customize the application that you share, you can fork our open-source [WebApp template](https://github.com/langgenius/webapp-conversation). Based on this template, you can modify the application to meet your specific needs and style requirements.
diff --git a/en/application/prompt-engineering/text-generation-application.md b/en/application/prompt-engineering/text-generation-application.md
new file mode 100644
index 0000000..277249d
--- /dev/null
+++ b/en/application/prompt-engineering/text-generation-application.md
@@ -0,0 +1,76 @@
+# Text Generator
+
+Text generation applications automatically generate high-quality text based on prompts provided by users. They can generate various types of text, such as article summaries and translations.
+
+### **Applicable scenarios**
+
+Text generation applications are suitable for scenarios that require a large amount of text creation, such as news media, advertising, SEO, marketing, etc. They can provide efficient and fast text generation services for these industries, reduce labor costs, and improve production efficiency.
+
+### **How to Compose**
+
+Text generation applications support: prefix prompts, variables, context, and generating more similar content.
+
+Here, we use a translation application as an example to introduce how to compose a text generation application.
+
+#### **Step 1: Create the application**
+
+Click the "Create Application" button on the homepage to create an application. Fill in the application name, and select "Text Generator" as the application type.
+
+
<p>Docker 19.03 or later</p><p>Docker Compose 1.25.1 or later</p> | Please refer to the [Docker installation guide](https://docs.docker.com/engine/install/) and [the Docker Compose installation guide](https://docs.docker.com/compose/install/) for more information on how to install Docker and Docker Compose, respectively. |
+| Windows with WSL 2 enabled | Docker Desktop | We recommend storing the source code and other data that is bound to Linux containers in the Linux file system rather than the Windows file system. For more information, please refer to the [Docker Desktop installation guide for using the WSL 2 backend on Windows.](https://docs.docker.com/desktop/windows/install/#wsl-2-backend) |
+
+### Clone Dify
+
+Clone the Dify source code to your local machine:
+
+```Shell
+git clone https://github.com/langgenius/dify.git
+```
+
+### Start Dify
+
+Navigate to the docker directory in the Dify source code and execute the following command to start Dify:
+
+```Shell
+cd dify/docker
+docker compose up -d
+```
+
+> If your system has Docker Compose V2 installed instead of V1, use `docker compose` instead of `docker-compose`. Check if this is the case by running `$ docker compose version`. [Read more information here](https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command).
+
+Deployment Results:
+
+```Shell
+[+] Running 7/7
+ ✔ Container docker-web-1 Started 1.0s
+ ✔ Container docker-redis-1 Started 1.1s
+ ✔ Container docker-weaviate-1 Started 0.9s
+ ✔ Container docker-db-1 Started 0.0s
+ ✔ Container docker-worker-1 Started 0.7s
+ ✔ Container docker-api-1 Started 0.8s
+ ✔ Container docker-nginx-1 Started
+```
+
+Finally, check if all containers are running successfully:
+
+```Shell
+docker compose ps
+```
+
+This includes 3 business services: api / worker / web, and 4 underlying components: weaviate / db / redis / nginx.
+
+```Shell
+NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
+docker-api-1 langgenius/dify-api:0.3.2 "/entrypoint.sh" api 4 seconds ago Up 2 seconds 80/tcp, 5001/tcp
+docker-db-1 postgres:15-alpine "docker-entrypoint.s…" db 4 seconds ago Up 2 seconds 0.0.0.0:5432->5432/tcp
+docker-nginx-1 nginx:latest "/docker-entrypoint.…" nginx 4 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp
+docker-redis-1 redis:6-alpine "docker-entrypoint.s…" redis 4 seconds ago Up 3 seconds 6379/tcp
+docker-weaviate-1 semitechnologies/weaviate:1.18.4 "/bin/weaviate --hos…" weaviate 4 seconds ago Up 3 seconds
+docker-web-1 langgenius/dify-web:0.3.2 "/entrypoint.sh" web 4 seconds ago Up 3 seconds 80/tcp, 3000/tcp
+docker-worker-1 langgenius/dify-api:0.3.2 "/entrypoint.sh" worker 4 seconds ago Up 2 seconds 80/tcp, 5001/tcp
+```
diff --git a/en/getting-started/install-self-hosted/environments.md b/en/getting-started/install-self-hosted/environments.md
new file mode 100644
index 0000000..2f79092
--- /dev/null
+++ b/en/getting-started/install-self-hosted/environments.md
@@ -0,0 +1,385 @@
+# Environments
+
+### Common Variables
+
+#### EDITION
+
+Deployment version.
+
+* `SELF_HOSTED`: Self-hosted version
+  * Only supports single team/tenant mode
+  * Can only use email and password to log in
+  * No hosted OpenAI API-Key trial feature
+* `CLOUD`: Cloud version
+  * Supports multi-team/tenant mode
+  * Email/password login is unavailable; only GitHub and Google authorization login are supported
+  * Includes a hosted OpenAI API-Key trial feature with a 200-call quota
+
+#### CONSOLE_API_URL
+
+The backend URL of the console API, used to construct the authorization callback URL. If empty, it is the same domain. Example: `https://api.console.dify.ai`
+
+#### CONSOLE_WEB_URL
+
+The front-end URL of the console web, used to construct some front-end addresses and for CORS configuration. If empty, it is the same domain. Example: `https://console.dify.ai`
+
+> Starting from version `0.3.8`, `CONSOLE_URL` has been split into `CONSOLE_API_URL` and `CONSOLE_WEB_URL`, but `CONSOLE_URL` is still available.
+
+#### SERVICE_API_URL
+
+Service API URL, used to display the Service API Base URL to the front-end. If empty, it is the same domain. Example: `https://api.dify.ai`
+
+> Starting from version `0.3.8`, `API_URL` has been renamed to `SERVICE_API_URL`, but `API_URL` is still available.
+
+#### APP_API_URL
+
+WebApp API backend URL, used to declare the back-end URL for the front-end API. If empty, it is the same domain. Example: `https://app.dify.ai`
+
+#### APP_WEB_URL
+
+WebApp URL, used to display the WebApp Base URL to the front-end. If empty, it is the same domain. Example: `https://api.app.dify.ai`
+
+> Starting from version `0.3.8`, `APP_URL` has been split into `APP_API_URL` and `APP_WEB_URL`, but `APP_URL` is still available.
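
Taken together, a self-hosted deployment behind custom domains might export these variables as follows (all domains below are hypothetical examples; leave a variable empty to serve everything from the same domain):

```shell
# Hypothetical domains for a split console / service API / WebApp deployment
export CONSOLE_API_URL=https://api.console.example.com
export CONSOLE_WEB_URL=https://console.example.com
export SERVICE_API_URL=https://api.example.com
export APP_API_URL=https://app-api.example.com
export APP_WEB_URL=https://app.example.com
echo "console api: $CONSOLE_API_URL"
```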
+
+### Server
+
+#### MODE
+
+Startup mode, only available when starting with docker, not effective when starting from source code.
+
+* api
+
+ Start API Server.
+* worker
+
+ Start asynchronous queue worker.
+
+#### DEBUG
+
+Debug mode, default is false. It is recommended to turn on this configuration for local development to prevent problems caused by monkey patching.
+
+#### FLASK_DEBUG
+
+Flask debug mode. When enabled, trace information is output at the interface, which is convenient for debugging.
+
+#### SECRET_KEY
+
+A key used to securely sign session cookies and encrypt sensitive information in the database.
+
+This variable needs to be set when starting for the first time.
+
+You can use `openssl rand -base64 42` to generate a strong key.
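
For example, generating a key and appending it to a `.env` file might look like this (a sketch; the `.env` path is an assumption about your deployment layout):

```shell
# 42 random bytes, base64-encoded, yields a 56-character key
SECRET_KEY=$(openssl rand -base64 42)
echo "SECRET_KEY=${SECRET_KEY}" >> .env
echo "generated key of length ${#SECRET_KEY}"
```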
+
+#### DEPLOY_ENV
+
+Deployment environment.
+
+* PRODUCTION (default)
+
+ Production environment.
+* TESTING
+
+ Testing environment. There will be a distinct color label on the front-end page, indicating that this environment is a testing environment.
+
+#### LOG_LEVEL
+
+Log output level, default is INFO.
+
+It is recommended to set it to ERROR for production.
+
+#### MIGRATION_ENABLED
+
+When set to true, the database migration will be automatically executed when the container starts, only available when starting with docker, not effective when starting from source code.
+
+You need to manually execute `flask db upgrade` in the api directory when starting from source code.
+
+#### CHECK_UPDATE_URL
+
+Whether to enable the version check policy. If set to false, `https://updates.dify.ai` will not be called for version check.
+
+Since the version-check interface, which is based on a CloudFlare Worker, currently cannot be accessed directly from China, setting this variable to empty disables the call.
+
+#### OPENAI_API_BASE
+
+Used to change the OpenAI base address, default is [https://api.openai.com/v1](https://api.openai.com/v1).
+
+Replace it with a domestic mirror address when OpenAI cannot be accessed in China, or with the address of a local model that provides an OpenAI-compatible API.
+
+#### Container Startup Related Configuration
+
+Only effective when starting with docker image or docker-compose.
+
+* DIFY_BIND_ADDRESS
+
+ API service binding address, default: 0.0.0.0, i.e., all addresses can be accessed.
+* DIFY_PORT
+
+ API service binding port number, default 5001.
+* SERVER_WORKER_AMOUNT
+
+ The number of API server workers, i.e., the number of gevent workers. Formula: `number of cpu cores x 2 + 1`
+
+ Reference: [https://docs.gunicorn.org/en/stable/design.html#how-many-workers](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
+* SERVER_WORKER_CLASS
+
+  Defaults to gevent. On Windows, it can be switched to sync or solo.
+* GUNICORN_TIMEOUT
+
+  Request handling timeout. The default is 200; it is recommended to set it to 360 to support longer SSE connections.
+* CELERY_WORKER_CLASS
+
+  Similar to `SERVER_WORKER_CLASS`. Defaults to gevent. On Windows, it can be switched to sync or solo.
+* CELERY_WORKER_AMOUNT
+
+ The number of Celery workers. The default is 1, and can be set as needed.
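
The settings above can be sanity-checked on the target machine; for instance, the recommended worker count follows directly from the core count (a sketch assuming `getconf` is available, as on Linux and macOS):

```shell
# SERVER_WORKER_AMOUNT formula: number of cpu cores x 2 + 1
cores=$(getconf _NPROCESSORS_ONLN)
echo "SERVER_WORKER_AMOUNT=$((cores * 2 + 1))"
```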
+
+#### Database Configuration
+
+The database uses PostgreSQL. Please use the public schema.
+
+* DB_USERNAME: username
+* DB_PASSWORD: password
+* DB_HOST: database host
+* DB_PORT: database port number, default is 5432
+* DB_DATABASE: database name
+* SQLALCHEMY_POOL_SIZE: The size of the database connection pool. The default is 30 connections, which can be appropriately increased.
+* SQLALCHEMY_POOL_RECYCLE: Database connection pool recycling time, the default is 3600 seconds.
+* SQLALCHEMY_ECHO: Whether to print SQL, default is false.
+
+#### Redis Configuration
+
+This Redis configuration is used for caching and for pub/sub during conversation.
+
+* REDIS_HOST: Redis host
+* REDIS_PORT: Redis port, default is 6379
+* REDIS_DB: Redis Database, default is 0. Please use a different Database from Session Redis and Celery Broker.
+* REDIS_USERNAME: Redis username, default is empty
+* REDIS_PASSWORD: Redis password, default is empty. It is strongly recommended to set a password.
+* REDIS_USE_SSL: Whether to use SSL protocol for connection, default is false
+
+#### Session Configuration
+
+Only used by the API service for interface identity verification.
+
+* SESSION_TYPE:
+
+ Session component type
+
+ * redis (default)
+
+ If you choose this, you need to set the environment variables starting with SESSION_REDIS_ below.
+ * sqlalchemy
+
+ If you choose this, the current database connection will be used and the sessions table will be used to read and write session records.
+* SESSION_REDIS_HOST: Redis host
+* SESSION_REDIS_PORT: Redis port, default is 6379
+* SESSION_REDIS_DB: Redis Database, default is 0. Please use a different Database from Redis and Celery Broker.
+* SESSION_REDIS_USERNAME: Redis username, default is empty
+* SESSION_REDIS_PASSWORD: Redis password, default is empty. It is strongly recommended to set a password.
+* SESSION_REDIS_USE_SSL: Whether to use SSL protocol for connection, default is false
+
+#### Celery Configuration
+
+* CELERY_BROKER_URL
+
+ Format as follows:
+
+ ```
+  redis://<redis_username>:<redis_password>@<redis_host>:<redis_port>/<redis_database>
+ ```
+
+ Example: `redis://:difyai123456@redis:6379/1`
+* BROKER_USE_SSL
+
+ If set to true, use SSL protocol for connection, default is false
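
Putting the format and example together, the broker URL can be assembled from its parts (the values below are the sample credentials from the example above):

```shell
REDIS_USERNAME=""
REDIS_PASSWORD="difyai123456"
REDIS_HOST="redis"
REDIS_PORT="6379"
REDIS_DB="1"
# Assemble redis://<username>:<password>@<host>:<port>/<database>
CELERY_BROKER_URL="redis://${REDIS_USERNAME}:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}/${REDIS_DB}"
echo "$CELERY_BROKER_URL"
# prints: redis://:difyai123456@redis:6379/1
```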
+
+#### CORS Configuration
+
+Used to set the front-end cross-domain access policy.
+
+* CONSOLE_CORS_ALLOW_ORIGINS
+
+ Console CORS cross-domain policy, default is `*`, that is, all domains can access.
+* WEB_API_CORS_ALLOW_ORIGINS
+
+ WebAPP CORS cross-domain policy, default is `*`, that is, all domains can access.
+
+For detailed configuration, please refer to: [Cross-domain/identity related guide](https://avytux375gg.feishu.cn/wiki/HyX3wdF1YiejX3k3U2CcTcmQnjg)
+
+#### Cookie Policy Configuration
+
+Used to set the browser policy for session cookies used for identity verification.
+
+* COOKIE_HTTPONLY
+
+ Cookie HttpOnly configuration, default is true.
+* COOKIE_SAMESITE
+
+ Cookie SameSite configuration, default is Lax.
+* COOKIE_SECURE
+
+ Cookie Secure configuration, default is false.
+
+For detailed configuration, please refer to: [Cross-domain/identity related guide](https://avytux375gg.feishu.cn/wiki/HyX3wdF1YiejX3k3U2CcTcmQnjg)
+
+#### File Storage Configuration
+
+Used to store uploaded data set files, team/tenant encryption keys, and other files.
+
+* STORAGE_TYPE
+
+ Type of storage facility
+
+ * local (default)
+
+ Local file storage, if this option is selected, the following `STORAGE_LOCAL_PATH` configuration needs to be set.
+ * s3
+
+ S3 object storage, if this option is selected, the following S3_ prefixed configurations need to be set.
+* STORAGE_LOCAL_PATH
+
+  Defaults to `storage`, i.e., files are stored in the `storage` subdirectory of the current directory.
+
+  If you are deploying with docker or docker-compose, be sure to mount the `/app/api/storage` directory in both containers to the same local directory; otherwise, you may encounter file-not-found errors.
+* S3_ENDPOINT: S3 endpoint address
+* S3_BUCKET_NAME: S3 bucket name
+* S3_ACCESS_KEY: S3 Access Key
+* S3_SECRET_KEY: S3 Secret Key
+* S3_REGION: S3 region information, such as: us-east-1
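
As a sketch of the `STORAGE_LOCAL_PATH` mounting note above (the host path and container names are hypothetical; the `docker run` lines are illustrative comments, not a full deployment command):

```shell
STORAGE_DIR=/tmp/dify-storage   # hypothetical host path
mkdir -p "$STORAGE_DIR"
# Both the api and worker containers must mount the SAME host directory:
# docker run -d --name api    -v "$STORAGE_DIR":/app/api/storage langgenius/dify-api:0.3.2
# docker run -d --name worker -v "$STORAGE_DIR":/app/api/storage langgenius/dify-api:0.3.2
echo "shared storage directory ready: $STORAGE_DIR"
```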
+
+#### Vector Database Configuration
+
+* VECTOR_STORE
+
+ The available enum types include: `weaviate`, `qdrant`, `pinecone`, `milvus` (the last two are not yet available)
+
+  Both `milvus` and `zilliz` use the same `milvus` configuration.
+* WEAVIATE_ENDPOINT
+
+ Weaviate endpoint address, such as: `http://weaviate:8080`.
+* WEAVIATE_API_KEY
+
+ The api-key credential used to connect to Weaviate.
+* WEAVIATE_BATCH_SIZE
+
+ The number of index Objects created in batches in Weaviate, default is 100.
+
+ Refer to this document: [https://weaviate.io/developers/weaviate/manage-data/import#how-to-set-batch-parameters](https://weaviate.io/developers/weaviate/manage-data/import#how-to-set-batch-parameters)
+* WEAVIATE_GRPC_ENABLED
+
+  Whether to interact with Weaviate via gRPC. Performance increases greatly when enabled, but it may not be usable locally. Default is true.
+* QDRANT_URL
+
+ Qdrant endpoint address, such as: `https://your-qdrant-cluster-url.qdrant.tech/`
+* QDRANT_API_KEY
+
+ The api-key credential used to connect to Qdrant.
+* PINECONE_API_KEY
+
+ The api-key credential used to connect to Pinecone.
+* PINECONE_ENVIRONMENT
+
+ The environment where Pinecone is located, such as: `us-east4-gcp`
+* MILVUS_HOST
+
+ Milvus host configuration.
+* MILVUS_PORT
+
+ Milvus port configuration.
+* MILVUS_USER
+
+ Milvus user configuration, default is empty.
+* MILVUS_PASSWORD
+
+ Milvus password configuration, default is empty.
+* MILVUS_USE_SECURE
+
+ Whether Milvus uses SSL connection, default is false.
+
+#### Dataset Configuration
+
+* UPLOAD_FILE_SIZE_LIMIT:
+
+ Upload file size limit, default 15M.
+* UPLOAD_FILE_BATCH_LIMIT:
+
+ Number of files that can be uploaded in batch, default 5.
+
+
+#### Sentry Configuration
+
+Used for application monitoring and error log tracking.
+
+* SENTRY_DSN
+
+  Sentry DSN address. Default is empty; when empty, no monitoring information is reported to Sentry.
+* SENTRY_TRACES_SAMPLE_RATE
+
+  The reporting ratio of Sentry events; 0.01 means 1%.
+* SENTRY_PROFILES_SAMPLE_RATE
+
+  The reporting ratio of Sentry profiles; 0.01 means 1%.
+
+#### Notion Integration Configuration
+
+Notion integration configuration, variables can be obtained by applying for Notion integration: [https://www.notion.so/my-integrations](https://www.notion.so/my-integrations)
+
+* NOTION_CLIENT_ID
+* NOTION_CLIENT_SECRET
+
+#### Mail related configuration
+
+* MAIL_TYPE
+
+  The mail provider type; currently only `resend` ([https://resend.com](https://resend.com)) is supported. If left empty, no mail will be sent.
+* MAIL_DEFAULT_SEND_FROM
+
+  The sender's email name, such as: no-reply [no-reply@dify.ai](mailto:no-reply@dify.ai). Not mandatory.
+* RESEND_API_KEY
+
+  API key for the Resend email provider, which can be obtained from the Resend console.
+
+#### Third-Party Authorization Settings
+
+Only available for cloud version.
+
+* GITHUB_CLIENT_ID: GitHub authorization login Client ID
+* GITHUB_CLIENT_SECRET: GitHub authorization login Client Secret
+* GOOGLE_CLIENT_ID: Google authorization login Client ID
+* GOOGLE_CLIENT_SECRET: Google authorization login Client Secret
+
+#### Platform Hosting Model Related Configuration
+
+Only available for cloud version, used for model hosting configuration.
+
+* HOSTED_OPENAI_ENABLED: Enable OpenAI hosted service, default False
+* HOSTED_OPENAI_API_KEY: OpenAI hosted service API key
+* HOSTED_OPENAI_API_BASE: OpenAI hosted service API base URL, default is empty, i.e. `https://api.openai.com/v1`
+* HOSTED_OPENAI_API_ORGANIZATION: OpenAI hosted service organization ID, default is empty
+* HOSTED_OPENAI_QUOTA_LIMIT: OpenAI hosted service default trial quota (unit: call count), default 200 calls
+* HOSTED_OPENAI_PAID_ENABLED: Enable OpenAI hosted paid service, default False
+* HOSTED_OPENAI_PAID_STRIPE_PRICE_ID: OpenAI hosted paid service Stripe price ID
+* HOSTED_OPENAI_PAID_INCREASE_QUOTA: Increase quota amount after payment for OpenAI hosted paid service
+* HOSTED_AZURE_OPENAI_ENABLED: Enable Azure OpenAI hosted service, default False
+* HOSTED_AZURE_OPENAI_API_KEY: Azure OpenAI hosted service API key
+* HOSTED_AZURE_OPENAI_API_BASE: Azure OpenAI hosted service API base URL
+* HOSTED_AZURE_OPENAI_QUOTA_LIMIT: Azure OpenAI hosted service default trial quota (unit: call count)
+* HOSTED_ANTHROPIC_ENABLED: Enable Anthropic hosted service, default False
+* HOSTED_ANTHROPIC_API_BASE: Anthropic hosted service API base URL, default is empty
+* HOSTED_ANTHROPIC_API_KEY: Anthropic hosted service API key
+* HOSTED_ANTHROPIC_QUOTA_LIMIT: Anthropic hosted service default trial quota (unit: tokens), default 600,000 tokens
+* HOSTED_ANTHROPIC_PAID_ENABLED: Enable Anthropic hosted paid service, default False
+* HOSTED_ANTHROPIC_PAID_STRIPE_PRICE_ID: Anthropic hosted paid service Stripe price ID
+* HOSTED_ANTHROPIC_PAID_INCREASE_QUOTA: Increase quota amount for Anthropic hosted paid service
+* HOSTED_ANTHROPIC_PAID_MIN_QUANTITY: Minimum purchase quantity for Anthropic hosted paid service
+* HOSTED_ANTHROPIC_PAID_MAX_QUANTITY: Maximum purchase quantity for Anthropic hosted paid service
+* STRIPE_API_KEY: Stripe's API key
+* STRIPE_WEBHOOK_SECRET: Stripe's Webhook secret
+
+***
+
+### Web Frontend
+
+#### SENTRY_DSN
+
+Sentry DSN address. Default is empty; when empty, no monitoring information is reported to Sentry.
diff --git a/en/getting-started/install-self-hosted/local-source-code.md b/en/getting-started/install-self-hosted/local-source-code.md
new file mode 100644
index 0000000..9168d97
--- /dev/null
+++ b/en/getting-started/install-self-hosted/local-source-code.md
@@ -0,0 +1,223 @@
+# Local Source Code Start
+
+## Prerequisites
+
+| Operating System | Software | Explanation |
+| -------------------------- | -------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| macOS 10.14 or later | Docker Desktop | Set the Docker virtual machine (VM) to use a minimum of 2 virtual CPUs (vCPUs) and 8 GB of initial memory. Otherwise, the installation may fail. For more information, please refer to the [Docker Desktop installation guide for Mac](https://docs.docker.com/desktop/mac/install/). |
+| Linux platforms | <p>Docker 19.03 or later</p><p>Docker Compose 1.25.1 or later</p> | Please refer to the [Docker installation guide](https://docs.docker.com/engine/install/) and [the Docker Compose installation guide](https://docs.docker.com/compose/install/) for more information on how to install Docker and Docker Compose, respectively. |
+| Windows with WSL 2 enabled | Docker Desktop | We recommend storing the source code and other data that is bound to Linux containers in the Linux file system rather than the Windows file system. For more information, please refer to the [Docker Desktop installation guide for using the WSL 2 backend on Windows.](https://docs.docker.com/desktop/windows/install/#wsl-2-backend) |
+
+### Clone Dify
+
+```Bash
+git clone https://github.com/langgenius/dify.git
+```
+
+Before starting the business services, we first need to deploy PostgreSQL / Redis / Weaviate (if not already available locally). We can start them with the following commands:
+
+```Bash
+cd docker
+docker compose -f docker-compose.middleware.yaml up -d
+```
+
+***
+
+### Server Deployment
+
+* API Interface Service
+* Worker Asynchronous Queue Consumption Service
+
+#### Installation of the basic environment:
+
+Server startup requires Python 3.10.x. It is recommended to use [Anaconda](https://docs.anaconda.com/free/anaconda/install/) for quick installation of the Python environment, which already includes the pip package management tool.
+
+To create a Python 3.10 environment named "dify," you can use the following command:
+
+```Bash
+conda create --name dify python=3.10
+```
+
+To switch to the "dify" Python environment, use the following command:
+
+```
+conda activate dify
+```
+
+#### Follow these steps:
+
+1. Navigate to the "api" directory:
+
+
+   ```
+   cd api
+   ```
+
+2. Copy the environment variable configuration file:
+
+ ```
+ cp .env.example .env
+ ```
+3. Generate a random secret key and replace the value of SECRET_KEY in the .env file:
+
+ ```
+   secret_key=$(openssl rand -base64 42)
+   sed -i "s|SECRET_KEY=.*|SECRET_KEY=${secret_key}|" .env
+ ```
+4. Install the required dependencies:
+
+
+   ```
+   pip install -r requirements.txt
+   ```
+
+5. Perform the database migration
+
+ Perform database migration to the latest version:
+
+   ```
+   flask db upgrade
+   ```
+
+6. Start the API server:
+
+ ```
+ flask run --host 0.0.0.0 --port=5001 --debug
+ ```
+
+ output:
+
+ ```
+ * Debug mode: on
+ INFO:werkzeug:WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
+ * Running on all addresses (0.0.0.0)
+ * Running on http://127.0.0.1:5001
+ INFO:werkzeug:Press CTRL+C to quit
+ INFO:werkzeug: * Restarting with stat
+ WARNING:werkzeug: * Debugger is active!
+ INFO:werkzeug: * Debugger PIN: 695-801-919
+ ```
+7. Start the Worker service
+
+   To consume asynchronous tasks from the queue, such as dataset file import and dataset document updates, run the following command to start the Worker service on Linux or macOS:
+
+ `celery -A app.celery worker -P gevent -c 1 -Q dataset,generation,mail --loglevel INFO`
+
+ If you are using a Windows system to start the Worker service, please use the following command instead:
+
+ ```
+ celery -A app.celery worker -P solo --without-gossip --without-mingle -Q dataset,generation,mail --loglevel INFO
+ ```
+
+ output:
+
+ ```
+ -------------- celery@TAKATOST.lan v5.2.7 (dawn-chorus)
+ --- ***** -----
+ -- ******* ---- macOS-10.16-x86_64-i386-64bit 2023-07-31 12:58:08
+ - *** --- * ---
+ - ** ---------- [config]
+ - ** ---------- .> app: app:0x7fb568572a10
+ - ** ---------- .> transport: redis://:**@localhost:6379/1
+ - ** ---------- .> results: postgresql://postgres:**@localhost:5432/dify
+ - *** --- * --- .> concurrency: 1 (gevent)
+ -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
+ --- ***** -----
+ -------------- [queues]
+ .> dataset exchange=dataset(direct) key=dataset
+ .> generation exchange=generation(direct) key=generation
+ .> mail exchange=mail(direct) key=mail
+
+ [tasks]
+ . tasks.add_document_to_index_task.add_document_to_index_task
+ . tasks.clean_dataset_task.clean_dataset_task
+ . tasks.clean_document_task.clean_document_task
+ . tasks.clean_notion_document_task.clean_notion_document_task
+ . tasks.create_segment_to_index_task.create_segment_to_index_task
+ . tasks.deal_dataset_vector_index_task.deal_dataset_vector_index_task
+ . tasks.document_indexing_sync_task.document_indexing_sync_task
+ . tasks.document_indexing_task.document_indexing_task
+ . tasks.document_indexing_update_task.document_indexing_update_task
+ . tasks.enable_segment_to_index_task.enable_segment_to_index_task
+ . tasks.generate_conversation_summary_task.generate_conversation_summary_task
+ . tasks.mail_invite_member_task.send_invite_member_mail_task
+ . tasks.remove_document_from_index_task.remove_document_from_index_task
+ . tasks.remove_segment_from_index_task.remove_segment_from_index_task
+ . tasks.update_segment_index_task.update_segment_index_task
+ . tasks.update_segment_keyword_index_task.update_segment_keyword_index_task
+
+ [2023-07-31 12:58:08,831: INFO/MainProcess] Connected to redis://:**@localhost:6379/1
+ [2023-07-31 12:58:08,840: INFO/MainProcess] mingle: searching for neighbors
+ [2023-07-31 12:58:09,873: INFO/MainProcess] mingle: all alone
+ [2023-07-31 12:58:09,886: INFO/MainProcess] pidbox: Connected to redis://:**@localhost:6379/1.
+ [2023-07-31 12:58:09,890: INFO/MainProcess] celery@TAKATOST.lan ready.
+ ```
+
+***
+
+## Deploy the frontend page
+
+Start the web frontend page service.
+
+#### Installation of the basic environment:
+
+To start the web frontend service, you will need [Node.js v18.x (LTS)](http://nodejs.org/) and [NPM version 8.x.x](https://www.npmjs.com/) or [Yarn](https://yarnpkg.com/).
+
+* Install NodeJS + NPM
+
+Please visit [https://nodejs.org/en/download](https://nodejs.org/en/download) and download the v18.x (or higher) installation package for your operating system. The stable (LTS) version is recommended, and it includes NPM by default.
+
+#### Follow these steps:
+
+1. Enter the web directory
+
+ ```
+ cd web
+ ```
+2. Install the dependencies.
+
+ ```
+ npm install
+ ```
+3. Configure the environment variables. Create a file named `.env.local` in the current directory and copy the contents of `.env.example` into it. Modify the values of these environment variables according to your requirements:
+
+ ```
+ # For production release, change this to PRODUCTION
+ NEXT_PUBLIC_DEPLOY_ENV=DEVELOPMENT
+ # The deployment edition, SELF_HOSTED or CLOUD
+ NEXT_PUBLIC_EDITION=SELF_HOSTED
+ # The base URL of console application, refers to the Console base URL of WEB service if console domain is
+ # different from api or web app domain.
+ # example: http://cloud.dify.ai/console/api
+ NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api
+ # The URL for Web APP, refers to the Web App base URL of WEB service if web app domain is different from
+ # console or api domain.
+ # example: http://udify.app/api
+ NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api
+
+ # SENTRY
+ NEXT_PUBLIC_SENTRY_DSN=
+ NEXT_PUBLIC_SENTRY_ORG=
+ NEXT_PUBLIC_SENTRY_PROJECT=
+ ```
+4. Build the code
+
+ ```
+ npm run build
+ ```
+5. Start the web service:
+
+ ```
+ npm run start
+ # or
+ yarn start
+ # or
+ pnpm start
+ ```
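+
+For reference, the steps above can be collected into one shell session (a sketch, not an official script; run it from the repository root, and edit `.env.local` before building):
+
+```bash
+cd web
+cp .env.example .env.local   # create the env file, then edit the values as needed
+npm install
+npm run build
+npm run start
+```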
+
+After successful startup, the terminal will output the following information:
+
+```
+ready - started server on 0.0.0.0:3000, url: http://localhost:3000
+warn - You have enabled experimental feature (appDir) in next.config.js.
+warn - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
+info - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback
+```
+
+### Access Dify
+
+Finally, access [http://127.0.0.1:3000](http://127.0.0.1:3000/) to use the locally deployed Dify.
diff --git a/en/getting-started/install-self-hosted/start-the-frontend-docker-container.md b/en/getting-started/install-self-hosted/start-the-frontend-docker-container.md
new file mode 100644
index 0000000..9806f98
--- /dev/null
+++ b/en/getting-started/install-self-hosted/start-the-frontend-docker-container.md
@@ -0,0 +1,24 @@
+# Start the frontend Docker container separately
+
+When developing the backend separately, you may only need to start the backend service from source code without building and launching the frontend locally. In this case, you can directly start the frontend service by pulling the Docker image and running the container. Here are the specific steps:
+
+#### Pull and run the frontend image from Docker Hub:
+
+```bash
+docker run -it -p 3000:3000 -e EDITION=SELF_HOSTED -e CONSOLE_URL=http://127.0.0.1:3000 -e APP_URL=http://127.0.0.1:3000 langgenius/dify-web:latest
+```
+
+#### Build Docker Image from Source Code
+
+1. Build the frontend image:
+
+ ```
+ cd web && docker build . -t dify-web
+ ```
+2. Start the frontend container:
+
+ ```
+ docker run -it -p 3000:3000 -e EDITION=SELF_HOSTED -e CONSOLE_URL=http://127.0.0.1:3000 -e APP_URL=http://127.0.0.1:3000 dify-web
+ ```
+3. When the console domain and web app domain are different, you can set the `CONSOLE_URL` and `APP_URL` separately.
+4. To access it locally, you can visit [http://127.0.0.1:3000](http://127.0.0.1:3000/).
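+
+For step 3, a sketch of what that could look like (the domains below are hypothetical placeholders, not real endpoints):
+
+```bash
+docker run -it -p 3000:3000 \
+  -e EDITION=SELF_HOSTED \
+  -e CONSOLE_URL=https://console.example.com \
+  -e APP_URL=https://app.example.com \
+  langgenius/dify-web:latest
+```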
diff --git a/en/getting-started/what-is-llmops.md b/en/getting-started/what-is-llmops.md
new file mode 100644
index 0000000..d4666d1
--- /dev/null
+++ b/en/getting-started/what-is-llmops.md
@@ -0,0 +1,41 @@
+# What is LLMOps?
+
+**LLMOps (Large Language Model Operations) is a comprehensive set of practices and processes that cover the development, deployment, maintenance, and optimization of large language models (such as the GPT series). The goal of LLMOps is to ensure the efficient, scalable, and secure use of these powerful AI models to build and run real-world applications. It involves aspects such as model training, deployment, monitoring, updating, security, and compliance.**
+
+The table below illustrates the differences in various stages of AI application development before and after using Dify:
+
+| Steps | Before | After | Save time |
+| ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | --------- |
+| Developing Frontend & Backend for Applications | Integrating and encapsulating LLM capabilities requires a lot of time to develop front-end applications. | Directly use Dify's backend services to develop based on a WebApp scaffold. | -80% |
+| Prompt Engineering | Can only be done by calling APIs or Playground. | Debug based on the user's input data. | -25% |
+| Data Preparation and Embedding | Writing code to implement long text data processing and embedding. | Upload text or bind data sources to the platform. | -80% |
+| Application Logging and Analysis | Writing code to record logs and accessing databases to view them. | The platform provides real-time logging and analysis. | -70% |
+| Data Analysis and Fine-Tuning | Technical personnel manage data and create fine-tuning queues. | Non-technical personnel can collaborate and adjust the model visually. | -60% |
+| AI Plugin Development and Integration | Writing code to create and integrate AI plugins. | The platform provides visual tools for creating and integrating plugins. | -50% |
+
+
+
+Before using an LLMOps platform like Dify, the process of developing applications based on LLMs can be cumbersome and time-consuming. Developers need to handle tasks at each stage on their own, which can lead to inefficiencies, difficulties in scaling, and security issues. Here is the development process before using an LLMOps platform:
+
+1. Data Preparation: Manually collect and preprocess data, which may involve complex data cleaning and annotation work, requiring a significant amount of code.
+2. Prompt Engineering: Developers can only write and debug Prompts through API calls or Playgrounds, lacking real-time feedback and visual debugging.
+3. Embedding and Context Management: Manually handling the embedding and storage of long contexts, which can be difficult to optimize and scale, requiring a fair amount of programming work and familiarity with model embedding and vector databases.
+4. Application Monitoring and Maintenance: Manually collect and analyze performance data, possibly unable to detect and address issues in real-time, and may even lack log records.
+5. Model Fine-tuning: Independently manage the fine-tuning data preparation and training process, which can lead to inefficiencies and require more code.
+6. System and Operations: Technical personnel involvement or cost required for developing a management backend, increasing development and maintenance costs, and lacking support for collaboration and non-technical users.
+
+With the introduction of an LLMOps platform like Dify, the process of developing applications based on LLMs becomes more efficient, scalable, and secure. Here are the advantages of developing LLM applications using Dify:
+
+1. Data Preparation: The platform provides data collection and preprocessing tools, simplifying data cleaning and annotation tasks, and minimizing or even eliminating coding work.
+2. Prompt Engineering: WYSIWYG Prompt editing and debugging, allowing real-time optimization and adjustments based on user input data.
+3. Embedding and Context Management: Automatically handling the embedding, storage, and management of long contexts, improving efficiency and scalability without the need for extensive coding.
+4. Application Monitoring and Maintenance: Real-time monitoring of performance data, quickly identifying and addressing issues, ensuring the stable operation of applications, and providing complete log records.
+5. Model Fine-tuning: The platform offers one-click fine-tuning functionality based on previously annotated real-use data, improving model performance and reducing coding work.
+6. System and Operations: User-friendly interface accessible to non-technical users, supporting collaboration among multiple team members, and reducing development and maintenance costs. Compared to traditional development methods, Dify offers more transparent and easy-to-monitor application management, allowing team members to better understand the application's operation.
+
+
+
+Additionally, Dify will provide AI plugin development and integration features, enabling developers to easily create and deploy LLM-based plugins for various applications, further enhancing development efficiency and application value.
+
+**Dify** is an easy-to-use LLMOps platform designed to empower more people to create sustainable, AI-native applications. With visual orchestration for various application types, Dify offers out-of-the-box, ready-to-use applications that can also serve as Backend-as-a-Service APIs. Unify your development process with one API for plugins and datasets integration, and streamline your operations using a single interface for prompt engineering, visual analytics, and continuous improvement.
+
diff --git a/en/use-cases/build-an-notion-ai-assistant.md b/en/use-cases/build-an-notion-ai-assistant.md
new file mode 100644
index 0000000..4ed2bc1
--- /dev/null
+++ b/en/use-cases/build-an-notion-ai-assistant.md
@@ -0,0 +1,166 @@
+# How to Build a Notion AI Assistant Based on Your Own Notes?
+
+### Intro
+
+Notion is a powerful tool for managing knowledge. Its flexibility and extensibility make it an excellent personal knowledge library and shared workspace. Many people use it to store their knowledge and work in collaboration with others, facilitating the exchange of ideas and the creation of new knowledge.
+
+However, this knowledge remains static, as users must search for the information they need and read through it to find the answers they're seeking. This process is neither particularly efficient nor intelligent.
+
+Have you ever dreamed of having an AI assistant based on your Notion library? This assistant would not only help you review your knowledge base, but also engage in conversation like a seasoned butler, even answering other people's questions as if you were the master of your personal Notion library.
+
+### How to Make Your Notion AI Assistant Come True?
+
+Now, you can make this dream come true through [Dify](https://dify.ai/). Dify is an open-source LLMOps (Large Language Models Ops) platform.
+
+Large language models like ChatGPT and Claude have been reshaping the world with their impressive abilities. Their powerful learning aptitude is primarily attributable to robust training data. Luckily, they've evolved to be sufficiently intelligent to learn from the content you provide, making it possible to ideate from your personal Notion library.
+
+Without Dify, you might need to acquaint yourself with LangChain, an abstraction that streamlines the process of assembling these pieces.
+
+### How to Use Dify to Build Your Personal Notion AI Assistant?
+
+The process to train a Notion AI assistant is relatively straightforward. Just follow these steps:
+
+1. Log in to Dify.
+2. Create a new dataset.
+3. Connect Notion to your dataset.
+4. Start training.
+5. Create your own AI application.
+
+#### 1. Log in to Dify
+
+Click [here](https://dify.ai/) to log in to Dify. You can conveniently log in using your GitHub or Google account.
+
+> If you log in with your GitHub account, how about giving this [project](https://github.com/langgenius/dify) a star? It really helps us a lot!
+
+
+
+#### 2. Create a new dataset
+
+Click the `Datasets` button on the top side bar, followed by the `Create Dataset` button.
+
+
+
+#### 3. Connect Notion to Your Dataset
+
+Select "Sync from Notion" and then click the "Connect" button.
+
+
+
+Afterward, you'll be redirected to the Notion login page. Log in with your Notion account.
+
+
+
+Check the permissions needed by Dify, and then click the "Select pages" button.
+
+
+
+Select the pages you want to synchronize with Dify, and press the "Allow access" button.
+
+
+
+#### 4. Start training
+
+Specify the pages you want the AI to study, enabling it to comprehend the content within this section of Notion. Then click the "Next" button.
+
+
+
+We suggest selecting the "Automatic" and "High Quality" options to train your AI assistant. Then click the "Save & Process" button.
+
+
+
+Enjoy your coffee while waiting for the training process to complete.
+
+
+
+#### 5. Create Your AI Application
+
+Next, create an AI application and link it to the dataset you've just created.
+
+Return to the dashboard, and click the "Create new APP" button. It's recommended to use the Chat App directly.
+
+
+
+Select "Prompt Eng." and link your Notion dataset in the "Context" section.
+
+
+
+I recommend adding a 'Pre Prompt' to your AI application. Just as spells are essential to Harry Potter, certain tools or features can greatly enhance the abilities of an AI application.
+
+For example, if your Notion notes focus on problem-solving in software development, you could write in the prompt:
+
+_I want you to act as an IT Expert in my Notion workspace, using your knowledge of computer science, network infrastructure, Notion notes, and IT security to solve the problems_.
+
+
+
+It's recommended to have the AI greet users with a starter sentence, providing a clue as to what they can ask. Furthermore, activating the 'Speech to Text' feature allows users to interact with your AI assistant using their voice.
+
+
+
+Finally, click the "Publish" button at the top right of the page. Now you can click the public URL in the "Overview" section to converse with your personalized AI assistant!
+
+
+
+### Utilizing API to Integrate With Your Project
+
+Each AI application built with Dify can be accessed via its API. This approach allows developers to tap directly into the robust capabilities of large language models (LLMs) from frontend applications, delivering a true "Backend-as-a-Service" (BaaS) experience.
+
+With effortless API integration, you can conveniently invoke your Notion AI application without the need for intricate configurations.
+
+Click the "API Reference" button on the Overview page. You can use it as the API documentation for your app.
+
+
+
+#### 1. Generate API Secret Key
+
+For security reasons, it's recommended to create a new API secret key to access your AI application.
+
+
+
+#### 2. Retrieve Conversation ID
+
+After chatting with your AI application, you can retrieve the conversation ID from the "Logs & Ann." page.
+
+
+
+#### 3. Invoke API
+
+You can run the example request from the API documentation to invoke your AI application in the terminal.
+
+Remember to replace `YOUR SECRET KEY` and `conversation_id` in your code.
+
+> You can leave `conversation_id` empty the first time, and fill it in after you receive a response containing a `conversation_id`.
+
+```
+curl --location --request POST 'https://api.dify.ai/v1/chat-messages' \
+--header 'Authorization: Bearer ENTER-YOUR-SECRET-KEY' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "inputs": {},
+ "query": "eh",
+ "response_mode": "streaming",
+ "conversation_id": "",
+ "user": "abc-123"
+}'
+```
+
+Send the request in your terminal and you will get a successful response.
+
+
+
+If you want to continue the chat, replace the `conversation_id` in the request with the `conversation_id` you received in the response.
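+
+As a small sketch, you can pull the `conversation_id` out of a streaming response chunk in the shell and reuse it in the next request (the sample chunk below is illustrative, not a real API response):
+
+```bash
+# A sample streaming chunk (illustrative only).
+chunk='data: {"event": "message", "conversation_id": "45701982-8118-4bc5-8e9b-64562b4555f2", "answer": "Hi"}'
+
+# Extract the conversation_id so the next request can reuse it.
+conversation_id=$(echo "$chunk" | sed -n 's/.*"conversation_id": "\([^"]*\)".*/\1/p')
+echo "$conversation_id"
+```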
+
+You can check the full conversation history on the "Logs & Ann." page.
+
+
+
+### Sync with Notion periodically
+
+If your Notion pages have been updated, you can sync them with Dify periodically to keep your AI assistant up to date. Your AI assistant will learn from the new content.
+
+
+
+### Summary
+
+In this tutorial, we have learned not only how to import your Notion data into Dify, but also how to use the API to integrate it with your project.
+
+[Dify](https://dify.ai/) is a user-friendly LLMOps platform targeted to empower more individuals to create sustainable, AI-native applications. With visual orchestration designed for various application types, Dify offers ready-to-use applications that can assist you in utilizing data to craft your distinctive AI assistant. Do not hesitate to contact us if you have any inquiries.
diff --git a/en/use-cases/create-a-midjoureny-prompt-bot-with-dify.md b/en/use-cases/create-a-midjoureny-prompt-bot-with-dify.md
new file mode 100644
index 0000000..243191e
--- /dev/null
+++ b/en/use-cases/create-a-midjoureny-prompt-bot-with-dify.md
@@ -0,0 +1,59 @@
+# Create a Midjourney Prompt Bot Without Code in Just a Few Minutes
+
+via [@op7418](https://twitter.com/op7418) on Twitter
+
+I recently tried out a natural language programming tool called Dify, developed by [@goocarlos](https://twitter.com/goocarlos). It allows someone without coding knowledge to create a web application just by writing prompts. It even generates the API for you, making it easy to deploy your application on your preferred platform.
+
+
+
+The application I created using Dify took me only 20 minutes, and the results were impressive. Without Dify, it might have taken me much longer to achieve the same outcome. The specific functionality of the application is to generate Midjourney prompts based on short input topics, assisting users in quickly filling in common Midjourney commands. In this tutorial, I will walk you through the process of creating this application to familiarize you with the platform.
+
+Dify offers two types of applications: conversational applications similar to ChatGPT, which involve multi-turn dialogue, and text generation applications that directly generate text content with the click of a button. Since we want to create a Midjourney prompt bot, we'll choose the text generator.
+
+You can access Dify here: https://dify.ai/
+
+
+
+Once you've created your application, the dashboard page will display some data monitoring and application settings. Click on "Prompt Engineering" on the left, which is the main working page.
+
+
+
+On this page, the left side is for prompt settings and other functions, while the right side provides real-time previews and usage of your created content. The prefix prompts are the triggers that the user inputs after each content, and they instruct the GPT model how to process the user's input information.
+
+
+
+Take a look at my prefix prompt structure: the first part instructs GPT to output a description of a photo in the following structure. The second structure serves as the template for generating the prompt, mainly consisting of elements like 'Color photo of the theme,' 'Intricate patterns,' 'Stark contrasts,' 'Environmental description,' 'Camera model,' 'Lens focal length description related to the input content,' 'Composition description relative to the input content,' and 'The names of four master photographers.' This constitutes the main content of the prompt. In theory, you can now save this to the preview area on the right, input the theme you want to generate, and the corresponding prompt will be generated.
+
+
+
+You may have noticed the "\{{proportion\}}" and "\{{version\}}" at the end. These are variables used to pass user-selected information. On the right side, users are required to choose image proportions and model versions, and these two variables help carry that information to the end of the prompt. Let's see how to set them up.
+
+
+
+Our goal is to fill in the user's selected information at the end of the prompt, making it easy for users to copy without having to rewrite or memorize these commands. For this, we use the variable function.
+
+Variables allow us to dynamically incorporate the user's form-filled or selected content into the prompt. For example, I've created two variables: one represents the image proportion, and the other represents the model version. Click the "Add" button to create the variables.
+
+
+
+After creation, you'll need to fill in the variable key and field name. The variable key should be in English. Marking a variable as optional means the field will not be mandatory for the user. Next, click "Settings" in the action bar to set the variable content.
+
+
+
+Variables can be of two types: text variables, where users manually input content, and select variables, where users choose from given options. Since we want to avoid manual commands, we'll choose the dropdown option and add the required choices.
+
+
+
+Now, let's use the variables. We need to enclose the variable key within double curly brackets {} and add it to the prefix prompt. Since we want the GPT to output the user-selected content as is, we'll include the phrase "Producing the following English photo description based on user input" in the prompt.
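+
+For illustration, the tail of the prefix prompt might look something like this (a hypothetical sketch; here the version option values are assumed to contain the full Midjourney flag, e.g. `--v 5`):
+
+```
+Producing the following English photo description based on user input.
+[... the prompt structure described above ...]
+--ar {{proportion}} {{version}}
+```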
+
+
+
+However, there's still a chance that GPT might modify our variable content. To address this, we can lower the diversity in the model selection on the right, reducing the temperature and making it less likely to alter our variable content. You can check the tooltips for other parameters' meanings.
+
+
+
+With these steps, your application is now complete. After testing and ensuring there are no issues with the output, click the "Publish" button in the upper right corner to release your application. You and users can access your application through the publicly available URL. You can also customize the application name, introduction, icon, and other details in the settings.
+
+
+
+That's how you create a simple AI application using Dify. You can also deploy your application on other platforms or modify its UI using the generated API. Additionally, Dify supports uploading your own data, such as building a customer service bot to assist with product-related queries. This concludes the tutorial, and a special thanks to @goocarlos for creating such a fantastic product.
diff --git a/en/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md b/en/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
new file mode 100644
index 0000000..a88aa88
--- /dev/null
+++ b/en/use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md
@@ -0,0 +1,68 @@
+# Create an AI ChatBot with Business Data in Minutes
+
+AI-powered customer service may become a standard feature for every business website, and it is becoming easier to implement with higher levels of customization. The following content will guide you through creating AI-powered customer service for your website in just a few minutes using Dify.
+
+### Prerequisite
+
+**Register or Deploy Dify.AI**
+
+Dify is an open-source product that you can find on [GitHub](https://github.com/langgenius/dify) and deploy on your own machine or company intranet. It also provides a cloud SaaS version; visit [Dify.AI](https://dify.ai/) to register and use it.
+
+**Apply for API key from OpenAI and other model providers.**
+
+Dify provides free message quotas for the OpenAI GPT series (200 calls) and Anthropic Claude (1,000 calls) models, which consume tokens. Before you run out, you need to apply for your own API key through the official channel of the model provider. You can then enter the key in Dify's "Settings" - "Model Provider".
+
+### Upload your product documentation or knowledge base.
+
+If you want to build an AI Chatbot based on the company's existing knowledge base and product documents, then you need to upload as many product-related documents as possible to Dify's dataset. Dify helps you **complete segmentation and cleaning of the data.** The Dify dataset supports two indexing modes: high quality and economical. We recommend using the high quality mode, which consumes tokens but provides higher accuracy.
+
+1. Create a new dataset
+2. Upload your business data (batch uploading of multiple texts is supported)
+3. Select the cleaning method
+4. Click \[Save and Process], and it will take only a few seconds to complete the processing.
+
+
+
+### Create an AI application and give it instructions
+
+Create a conversational app on the \[Build App] page. Then start setting up the prompt and its front-end user experience interactions.
+
+1. Give the AI instruction: Click on the "Pre Prompt" on the left to edit your Prompt, so that it can play the role of customer service and communicate with users. You can specify its tone, style, and limit it to answer or not answer certain questions.
+2. Let AI possess your business knowledge: add the target dataset you just uploaded in the \[context].
+3. Set up the opening remarks: click "Add Feature" to turn on the feature. The purpose is to add an opening line for AI applications, so that when the user opens the customer service window, it will greet the user first and increase affinity.
+4. Set up the "Next Question Suggestion": turn on this feature in "Add Feature". The purpose is to give users a direction for their next question after they have asked one.
+5. Choose a suitable model and adjust the parameters: different models can be selected in the upper right corner of the page. The performance and token price consumed by different models are different. In this example, we use the GPT3.5 model.
+
+In this case, we assign a role to the AI:
+
+> Pre prompt: You are Bob, the AI customer service for Dify, specializing in answering questions about Dify's products, team, or LLMOps for users. Please note: refuse to answer when users ask "inappropriate questions", i.e., content beyond the scope of this document.
+
+> Opening remarks: Hey \{{username\}}, I'm Bob☀️, the first AI member of Dify. You can discuss with me any questions related to Dify products, team, and even LLMOps.
+
+
+
+### Debug the performance of AI Chatbot and publish.
+
+After completing the setup, you can send messages on the right side of the current page to check whether its performance meets expectations, then click "Publish". You now have an AI chatbot.
+
+
+
+### Embed AI Chatbot application into your front-end page.
+
+This step embeds the prepared AI chatbot into your official website. Click \[Overview] -> \[Embedded], select the script tag method, and copy the script code into the `<head>` or `<body>` tag of your website. If you are not a technical person, you can ask the developer responsible for the official website to paste it in and update the page.
+
+
+
+1. Paste the copied code into the target location on your website.
+
+
+
+2. Update your official website and you will get AI-powered customer service backed by your business data. Try it out to see the effect.
+
+
+
+The above is an example of how Dify's own AI chatbot, Bob, is embedded into the Dify official website. Of course, you can also use more of the features Dify provides to enhance the chatbot's performance, such as adding variable settings so that users fill in necessary information before the interaction, such as their name or the specific product they use.
+
+Welcome to explore in Dify together!
+
+
diff --git a/en/user-agreement/privacy-policy.md b/en/user-agreement/privacy-policy.md
new file mode 100644
index 0000000..0cd1e18
--- /dev/null
+++ b/en/user-agreement/privacy-policy.md
@@ -0,0 +1,128 @@
+# Privacy Policy
+
+Welcome to LangGenius. LangGenius, Inc. (hereinafter referred to as "LangGenius", "Dify.AI", "Dify", "we", "our" or "us") respects your privacy and is committed to protecting your personal information. This privacy policy is intended to explain how we collect, use, protect, and share the information you provide when using LangGenius services.
+
+Please carefully read this Policy before you use or submit any information through or in connection with the Services. If you do not agree with this Policy, please do not access or use our Services or interact with any other aspect of our business. Unless otherwise required by laws in your residence, by using our Services, you accept our privacy practices described in this Policy.
+
+### What information we collect about you?
+
+We collect and store personal information that you provide directly through our Site, when using our Products, and in other ways.
+
+#### Information you provide to us
+
+We will collect and store personal information that you provide to us directly through our website when using our products, as well as through other means (such as user support requests, interacting through social media, participating in surveys or promotions, applying for a job, and interacting on our website and at events). The information we collect includes:
+
+* Account and profile information. When you register for an account, create or modify your profile, set preferences, or sign up for or make purchases through the Services, we collect information about you, including without limitation your name, business telephone number, email address, passwords, and similar security information used for authentication and account access. You may also choose to provide a display name, profile photo, job title, and other details in your profile information to be displayed in our Services.
+* Content you provide through our products: As part of the service, we collect and store the content you post, send, receive, and share through our product. This includes any data you enter in any "free text" box on our product, as well as files and links you upload to the service. Examples of the content we collect and store include: applications you create in Dify.AI, descriptions of application-related commands, links to access applications, links to privacy policies for applications, or any other information you provide.
+* Content provided by you through community platforms, instant messaging tools, or our website: We also collect other content that you submit to us for operation of website channels (such as social media or social networking sites). For example, when you provide feedback or participate in any interactive features, surveys, contests, promotions, sweepstakes, events, or activities, you provide content to us through phone, community interaction, IM services, etc. (such as GitHub, Twitter, Discord, WeChat, Slack, etc.)
+* Information provided through our support channels: Through our user support, you can choose to submit information about any issues you encounter while using our services. You may contact us through email or third-party IM tools to communicate directly with our support team. You will be asked to provide contact information, a summary of the issue you are facing, and any additional documents, screenshots, or information that may help to resolve the problem.
+* Payment and billing information: When you use certain paid services on Dify.AI, we collect your payment and billing information. You may also be required to provide credit card information to third-party secure payment processing service providers (such as Stripe). We do not store your credit card information.
+
+#### Information we collect automatically when you use the Services
+
+When you use our services (including browsing our website and taking certain actions within the service), we may collect information about you.
+
+* Your Use of the Services: When you access any of our services and interact with them, we may track certain information about you, including but not limited to the features you use; the links you click on; the type, size, and filenames of attachments you upload to the services; and how you otherwise interact with our product services.
+* Device and Connection Information: We collect information about the devices you use to access our services, such as your computer, phone, tablet, or other devices. This type of device information also includes your connection type and settings when installing, accessing, updating, or using our services. We also collect information about your operating system, browser type, URL of referring/exit pages, IP address, device identifiers, and crash data through your device.
+* Geolocation Data: Based on your device settings, we may collect geolocation data when you access our website and use our products. For example, we may use your IP address to infer your approximate location.
+* Cookies and Other Tracking Technologies: We and our third-party partners, such as our advertising and analytics partners, use various common technologies to provide functionality and identify you across different services and devices. Such technologies typically include tracking pixels, JavaScript, and various "local storage data" technologies, such as cookies and local storage. Such data may include text, personal information (such as your IP address), and information about how you use our services, depending on the technologies we use. For the purposes of this policy, we collectively refer to the cookies and other technologies identified here as "Cookies." Most web browsers have a feature to block cookies. You can also choose to clear all cookies stored on your computer.
+
+#### Information we receive from other sources
+
+We receive information about you from other service users, our partners, and third-party service providers, social media platforms, and public databases. We may combine this information with the information we collect through other means. This helps us update and improve our records, identify new customers, create more personalized advertising, and recommend services that you may be interested in. When asked to provide personal information, you may refuse. However, if you choose not to provide the information required for certain products, these products or some of their features may not be available or may not function properly.
+
+We are not responsible for the data policies and procedures or content of any third party. We recommend that you review the privacy policies of each website you visit.
+
+### How we use information we collect?
+
+We collect and process personal information about you as necessary to provide the Products you use, operate our Sites and business, meet our contractual and legal obligations, protect the security of our systems and our customers, or fulfil other legitimate interests as described in this Privacy Policy and in our notices to you.
+
+For example, we may use any of the categories of personal information we describe above to:
+
+* Operate, maintain and improve our internal operations, systems, Sites, and Products.
+* Understand you and your preferences to enhance your experience and enjoyment using our Sites and Products, to provide recommendations, to solicit feedback, and to better market and advertise to you.
+* Monitor and analyze user interactions with our Sites and Products to identify trends, usage, and activity patterns.
+* Respond to your comments and questions and provide technical support or customer service.
+* Provide and deliver the Products you request.
+* Comply with applicable laws, rules, or regulations and cooperate and defend legal claims and audits.
+* Communicate with you about promotions, upcoming events, and other news about products and services offered by Dify.AI and our partners.
+* Plan and host corporate events.
+* Protect the Site and Products, and investigate and deter against fraudulent, unauthorized, or illegal activity.
+
+We may also use such information in any other way we may describe when you provide the information or for any other purpose with your consent.
+
+### How we share information we collect?
+
+We may share your personal information with your consent. We may also share any category of personal information described above:
+
+* Sharing with our business partners and other third-party service providers. We share information with third parties who help us operate, provide, improve, integrate, customize, support, and market our services. For example, to provide services to you, we may share information with third-party service partners who provide consulting support. We work with third-party service providers who provide website and application development, hosting, maintenance, backup, storage, virtual infrastructure, payment processing, analytics, and other services to us. Such services may require the service provider to access or use information about you. If a service provider needs to access information about you to act on our behalf in performing services, they will do so under our close instruction and adopt appropriate security and confidentiality procedures to protect your information.
+* Sharing with potential buyers and advisors. If there is a company sale, merger, reorganization, dissolution, similar event, or measures taken in anticipation of such an event (such as due diligence in a transaction), your personal information may (in accordance with applicable law) be shared with our advisors and any potential buyer's advisors and be transferred to the new owner of the business.
+* Sharing information to maintain compliance with laws and regulations. We may share information as required by law or subpoena, or if we reasonably believe that such action is necessary to comply with applicable laws or the reasonable requests of law enforcement, enforce our terms of service, or protect the security or integrity of our website and products, or to exercise or protect the rights, property, or personal safety of our customers, users, or others.
+
+### How we store and secure information we collect?
+
+Storage and Processing. The information collected through our website and our products may be stored and processed in any country/region where LangGenius or its affiliated companies or service providers maintain facilities, including your region, the United States, Australia, Canada, China, and the European Economic Area (including the United Kingdom). Our choice of processing location is to ensure efficient operations, improve performance, and create redundancy to protect data in the event of disruptions or other issues. We take measures to ensure that the data we collect in accordance with this Privacy Policy is processed in compliance with this Privacy Policy and applicable laws, regardless of where the data is located.
+
+International Data Transfers. When we transfer personal information from the European Economic Area (including the UK) and Switzerland to the United States or other countries/regions where the European Commission has not determined their laws provide adequate data protection, we use legal mechanisms designed to help ensure your rights and protections, including contracts. Specifically, our website servers are located in the United States, and our affiliates, partners, third parties, and service providers operate in the United States, European Economic Area, and China. This means that when we collect your personal information, we may process it in any of these countries. However, we have taken appropriate safeguards to require that your personal information is protected in accordance with this privacy policy. The main safeguard relied upon by LangGenius is the Standard Contractual Clauses for Data Protection approved by the European Commission. For more information about these mechanisms, please contact us using the detailed contact information provided in the "How to Contact Us" section below.
+
+Keeping your information safe. LangGenius cares about the security of your information and takes reasonable and appropriate technical and organizational measures designed to prevent loss, misuse, and unauthorized access, disclosure, alteration, and destruction of personal information. However, no security system is impenetrable, and we cannot guarantee the security of our systems or your information.
+
+Lawful basis for processing personal information (EEA only). LangGenius is the data controller of your information.
+
+The section below applies specifically to you if you are located in the European Economic Area (EEA), the United Kingdom, or Switzerland.
+
+Our legal basis for collecting and using the personal information above will depend on the personal information concerned and the specific context in which we collect it. However, we will normally collect personal information only where we have your consent to do so, where we need the personal information to perform a contract with you, or where the processing is in our legitimate interests and not overridden by your data protection interests or fundamental rights and freedoms. In some cases, we may also have a legal obligation to collect personal information from you.
+
+If we ask you to provide personal information to comply with a legal requirement or to perform a contract with you, we will indicate this at the relevant time and advise you whether the provision of your personal information is mandatory or not (as well as the possible consequences, if any, if you do not provide your personal information). Similarly, if we collect and use your personal information in reliance on our legitimate interests (or those of a third party), we will indicate to you at the relevant time what those legitimate interests are.
+
+If you have questions about the legal basis for processing or want to find out more, please contact us using the details at the end of this Privacy Policy.
+
+Retention. We retain personal information for as long as necessary for the purposes for which the personal information is processed and for longer periods as necessary for us to comply with applicable laws. For example, we retain your account information for as long as your account is active or as needed to provide you with Products you have requested or authorized, including maintaining and improving the performance of the Products and protecting system security. We also retain personal data as needed to maintain appropriate business and financial records, protect our legal interests, resolve disputes, or comply with legal or regulatory requirements. Thereafter, we will either delete or anonymize it or, if this is not possible (for example, because your personal information has been stored in backup archives), then we will store your personal information using appropriate security measures and take appropriate steps designed to isolate it from any further processing until deletion is possible.
+
+### How to access and control your information?
+
+You have certain rights regarding your personal information, subject to the applicable laws. These include the following rights to:
+
+* Access your personal information: You have the right to ask us to confirm whether we are processing your personal information and, where that is the case, to access the personal information, receive information on how your data is processed, and ask us to provide a copy of your personal information.
+* Rectify your personal information: You have the right to have any incorrect, incomplete or inaccurate data we hold about you corrected.
+* Erase your personal information: You have the right to ask us to delete your personal information when, for example, the data we hold on you is no longer needed or when your data has been processed unlawfully.
+* Object to processing: You have the right to object to the processing of your personal information and request us to cease processing of it if, for example, this data is being processed for the purpose of direct marketing or where we are relying on a legitimate interest (or those of a third party). Under certain circumstances, we may demonstrate that we have compelling legitimate grounds to process your information which override your rights and freedoms.
+* Restrict the processing: You have the right to ask us to suspend the processing of your personal information in the following scenarios: (a) if you want us to establish the accuracy of the personal information; (b) if our use of the data is illegal but you do not want it erased; (c) if you require us to hold the data even if we no longer need it as you require it to establish, exercise or defend legal claims; or (d) if you have objected to our use of your data but we need to verify whether we have overriding legitimate grounds to use it.
+* Receive your personal information in a usable electronic format and transmit it to a third party (right to data portability): If we are processing your personal information based on your consent or a contract, you can ask to receive your personal information in a structured, commonly used and machine-readable format. You can also ask us to transmit that data to another controller without hindrance from us.
+* Withdraw consent: Where we are relying on your consent to process your personal information, you have the right to withdraw your consent at any time. However, this will not affect the lawfulness of any processing carried out before you withdraw your consent. If you withdraw your consent, we will no longer process that personal information, but we may be unable to continue providing certain products or services to you for which the personal information was sought. At the time you withdraw your consent, we will advise you if this is the case.
+* Opt-out of communications: You may opt out of receiving promotional communications from us by using the unsubscribe link within each email, updating your email preferences within your Service account settings menu, or contacting us as provided below to have your contact information removed from our promotional email list or registration database. Even after you opt out of promotional messages, you will continue to receive transactional messages from us regarding our Services. You can opt out of some notification messages in your account settings. Please note that you will continue to receive generic ads.
+* Send "Do Not Track" Signals: Some browsers have incorporated "Do Not Track" (DNT) features that can send a signal to the websites you visit indicating you do not wish to be tracked. Our Services do not currently respond to browser DNT signals since there is not yet a common understanding of how to interpret the DNT signal. You can use the range of other tools we provide to control data collection and use, including the ability to opt-out of receiving marketing from us as described above.
+
+These rights may be limited in some situations – for example, where we can demonstrate that we have a legal requirement to process your data (such as where tax authorities require us to retain it) or where it is needed for the proper performance of a contract. Under certain circumstances, this may mean that we are able to retain data even if you withdraw your consent.
+
+If an administrator manages the service for you (see "Notice to End Users" below), you may need to contact your administrator first to assist with your requests. For all other requests, please contact us through the details in the "How to Contact Us" section below. If you have unresolved concerns, you may have the right to complain to a data protection authority in the country where you live, where you work or where you feel your rights were infringed.
+
+If you are a California resident, please kindly refer to the "California Requirements" below for your rights and other important information.
+
+### Other important privacy information
+
+#### California Requirements
+
+There are some additional rights that may be available to you under the California Consumer Privacy Act ("CCPA") if you are a California resident. In addition to those listed under "How to access and control your information" above, the following paragraphs explain how you may exercise your rights under the CCPA:
+
+* Right against discrimination: You have the right not to be discriminated against for exercising any of the rights described in this section. We will not discriminate against you for exercising your right to know, delete or opt-out of sales.
+* Right to opt-out of selling: You have the right to opt out of having your personal information sold. We do not sell your personal information. The terms "personal information" and "selling" are broadly defined under the CCPA, such that sharing identifiers linked to you for a benefit may be considered a sale. You have the right to understand the types of personal information that we sold about you and the types of third parties with whom we shared such information.
+* Processing your information: This Policy outlines the types of personal information we may collect, the sources of that information, as well as our rules on deletion and retention. We’ve also included information about how we may process your information, which includes for "business purposes" as defined by the CCPA - such as to protect against illegal activities, and for the development of new products, features, and technologies. If you have questions regarding the categories of information we may collect about you, please refer to the section of this Policy called "What information we collect about you". You may also refer to the section called "How we use information we collect" for more details about our processing activities.
+
+#### Our policy toward children
+
+Our Services are NOT directed to children under the age of 18 and we do not knowingly collect personal information from children under 18. If we become aware that a child under 18 has provided us with personal information, we will promptly delete such personal data from our systems. If you become aware or have reason to believe that a child has provided us with personal information through our Services, please contact us at the details in Section "How to Contact Us" below and we will delete that information from our databases.
+
+#### Changes to our policy
+
+We may modify this Policy at any time, without prior notice, and changes may apply to any personal information we already hold about you, as well as any new personal information collected after the Policy is modified. If we make changes, we will notify you by revising the date at the top of this Policy. We will provide you with advance and more prominent notice if we make any material changes to how we collect, use or disclose your personal information that impacts your rights under this Policy. Unless otherwise required by laws in your residence, your continued access or use of our Services after receiving the notice of changes constitutes your acknowledgement that you accept the updated Policy.
+
+In addition, we may provide you with real-time disclosures or additional information about the personal information handling practices of specific parts of our Services. Such notices may supplement this Policy or provide you with additional choices about how we process your personal information. If you disagree with any changes to this Policy, you will need to stop using the Services and deactivate your account(s), as outlined above.
+
+### How to contact us?
+
+Your information is controlled by LangGenius. If you have questions or concerns about how your information is handled, please direct your inquiry to LangGenius, which is responsible for facilitating such inquiries.
+
+LangGenius, Inc., a Delaware registered company (File No. 7358523), USA.
+
+Email: Hello@dify.ai.
diff --git a/en/user-agreement/terms-of-service.md b/en/user-agreement/terms-of-service.md
new file mode 100644
index 0000000..50e38c9
--- /dev/null
+++ b/en/user-agreement/terms-of-service.md
@@ -0,0 +1,26 @@
+# Terms of Service
+
+The following document describes the conditions of use of our Websites and Services.
+
+LangGenius, Inc. ("LangGenius") is a US company whose registered office is located at 651 N BROAD ST SUITE 201, Delaware, USA, under the file number 7358523 and represented by Mr. Luyu Zhang, its CEO.
+
+LangGenius is pleased to give you access to LangGenius's website ([dify.ai](https://dify.ai/)) ("Website", "Dify.AI", or "Dify"), and related applications and resources (collectively, the "Services"). Your use of the Services is subject to the binding legal agreement set forth below ("Terms").
+
+### Beta Software
+
+We may provide you with beta and experimental products, features, and documentation ("Beta Software") on an early access basis. Beta Software is not generally available and may contain errors, defects, and inaccuracies. We provide Beta Software "as is" without any warranties and may terminate Beta Software at any time without ensuring the preservation of Beta Software data. Our service level agreements do not apply to Beta Software. If Beta Software becomes generally available, you may choose to pay for the software or stop using it. We may use your feedback on Beta Software.
+
+### Usage of LangGenius Services
+
+You may use the Dify.AI development platform to create software applications for your target user group to achieve your commercial objectives. You understand and acknowledge that the language and information generated by software applications created based on Dify.AI are derived from user-written prompts and their Large Language Model (LLM) providers (such as OpenAI). LLMs are an immature technology, and any negative consequences arising from the language and information generated are not related to Dify.AI, especially if you commercialize the application but fail to meet your operational expectations (including but not limited to not meeting your advertising description expectations, target effect description expectations, etc.) or experience service interruptions.
+
+The development of prompts for LLM models is the core product capability and service provided by Dify. During your use of Dify for prompt engineering, embedding, fine-tuning, and testing of LLM models, additional resource consumption may occur. By registering or starting to use Dify, you agree to the potential consumption of resources and costs associated with the LLM algorithms of this product. You should closely monitor billing changes from third-party LLM providers. You shall bear all consequences of any abnormal consumption and billing issues that prevent you from using Dify services normally.
+
+### Intellectual Property
+
+The Services, logo, interfaces, photos, Websites are owned by LangGenius, Inc. or its licensees and protected by applicable law, with all rights reserved. LangGenius takes the protection of intellectual property rights, including copyrights, very seriously. LangGenius will terminate your access to, or use of, all or any part of the Services, without notice to you, if you infringe or misappropriate the intellectual property rights, including copyrights, of others.
+
+### Applicable Law
+
+The Terms are governed by the laws of the State of California, United States.
+
diff --git a/en/web-application/conversation-application.md b/en/web-application/conversation-application.md
new file mode 100644
index 0000000..4f41bff
--- /dev/null
+++ b/en/web-application/conversation-application.md
@@ -0,0 +1,48 @@
+# Conversation Application
+
+Conversational applications use a question-and-answer model to maintain a dialogue with the user. Conversational applications support the following capabilities (please confirm that the following functions are enabled when orchestrating the application):
+
+* Variables filled in before the conversation.
+* Conversation creation, pinning, and deletion.
+* Conversation remarks.
+* Follow-up.
+* Speech to text.
+
+### Variables filled in before the dialog
+
+If variables are required by the application's orchestration, you need to fill in the information according to the prompts before entering the chat window:
+
+
+
+Fill in the necessary content and click the "Start Chat" button to start chatting.
+
+
+
+Hover over the AI's answer to copy the content of the conversation, or to give the answer a "like" or "dislike".
+
+
+
+### Conversation creation, pinning and deletion
+
+Click the "New Conversation" button to start a new conversation. Move to a session, and the session can be "pinned" and "deleted".
+
+
+
+### Conversation remarks
+
+If the "Conversation remarks" function is enabled when the application is programmed, the AI application will automatically initiate the first sentence of the dialogue when creating a new dialogue:
+
+
+
+### Follow-up
+
+If the "Follow-up" function is enabled during the application arrangement, the system will automatically generate 3 related question suggestions after the dialogue:
+
+
+
+### Speech to text
+
+If the "Speech to Text" function is enabled during application programming, you will see the voice input icon in the input box on the web application side, click the icon to convert the voice input into text:
+
+_Please make sure that the device environment you are using is authorized to use the microphone._
+
+
diff --git a/en/web-application/overview.md b/en/web-application/overview.md
new file mode 100644
index 0000000..75698f4
--- /dev/null
+++ b/en/web-application/overview.md
@@ -0,0 +1,28 @@
+# Overview
+
+Web applications are for application consumers. When an application developer creates an application in Dify, they get a corresponding web application. Users of the web application can use it without logging in. The web application adapts to different device sizes: PC, tablet, and mobile.
+
+The content of the web application matches the application's published configuration. When the application's configuration is modified and the "Publish" button on its prompt orchestration page is clicked, the web application is updated to match the current configuration.
+
+You can enable and disable access to the web application on the application overview page, and modify the web application's site information:
+
+* Icon
+* Name
+* Application Description
+* Interface Language
+* Copyright Information
+* Privacy Policy Link
+
+Which features a web application offers depends on whether the developer enabled them when orchestrating the application, for example:
+
+* Conversation remarks
+* Variables filled in before the conversation
+* Follow-up
+* Speech to text
+* More answers like this (Text Generation apps)
+* ...
+
+In the following chapters, we will introduce the two types of web applications separately:
+
+* Text Generator
+* Conversation Application
diff --git a/en/web-application/text-generator.md b/en/web-application/text-generator.md
new file mode 100644
index 0000000..27baeb0
--- /dev/null
+++ b/en/web-application/text-generator.md
@@ -0,0 +1,60 @@
+# Text Generator
+
+A text generation application automatically generates high-quality text from prompts provided by the user. It can generate various types of text, such as article summaries and translations.
+
+
+
+Text generation applications support the following features:
+
+1. Run it once.
+2. Run in batches.
+3. Save the run results.
+4. Generate more similar results.
+
+Let's look at each one.
+
+
+
+### Run it once
+
+Enter the query content, click the run button, and the result will be generated on the right, as shown in the following figure:
+
+
+
+In the generated results section, click the "Copy" button to copy the content to the clipboard. Click the "Save" button to save the content. You can see the saved content in the "Saved" tab. You can also "like" and "dislike" the generated content.
+
+### Run in batches
+
+Sometimes we need to run an application many times. For example, suppose a web application generates articles from topics, and we want to generate 100 articles on different topics. Running the task 100 times by hand is tedious, and each run must finish before the next can start.
+
+The batch run feature is built for this scenario: it is convenient to operate (enter the topics in a `csv` file and run just once) and saves generation time (multiple tasks run at the same time). Usage is as follows:
+
+#### Step 1 Enter the batch run page
+
+Click the "Run Batch" tab to enter the batch run page.
+
+
+
+#### Step 2 Download the template and fill in the content
+
+Click the Download Template button to download the template. Edit the template, fill in the content, and save as a `.csv` file.
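+As an illustration, a filled-in template is a plain `csv` file with one column per input variable and one row per run. The `topic` column name below is hypothetical; the actual column headers match the input variables defined when orchestrating the application:
+
+```csv
+topic
+How to brew pour-over coffee
+A beginner's guide to indoor plants
+Why remote teams benefit from async communication
+```
+
+Each row becomes one generation task, so the file above would produce three results.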
+
+
+
+#### Step 3 Upload the file and run
+
+
+
+If you need to export the generated content, click the "Download" button in the upper right corner to export it as a `csv` file.
+
+### Save run results
+
+Click the "Save" button below the generated results to save the running results. In the "Saved" tab, you can see all saved content.
+
+
+
+### Generate more similar results
+
+If the "more similar" function is turned on when applying the arrangement. Clicking the "more similar" button in the web application generates content similar to the current result. As shown below:
+
+
diff --git a/zh_CN/.gitbook/assets/640-1.jpg b/zh_CN/.gitbook/assets/640-1.jpg
new file mode 100644
index 0000000..1042d3c
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-1.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-10.jpg b/zh_CN/.gitbook/assets/640-10.jpg
new file mode 100644
index 0000000..befb777
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-10.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-10.png b/zh_CN/.gitbook/assets/640-10.png
new file mode 100644
index 0000000..e490cda
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-10.png differ
diff --git a/zh_CN/.gitbook/assets/640-11.jpg b/zh_CN/.gitbook/assets/640-11.jpg
new file mode 100644
index 0000000..e350543
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-11.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-11.png b/zh_CN/.gitbook/assets/640-11.png
new file mode 100644
index 0000000..76b380d
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-11.png differ
diff --git a/zh_CN/.gitbook/assets/640-12.jpg b/zh_CN/.gitbook/assets/640-12.jpg
new file mode 100644
index 0000000..0f37af7
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-12.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-12.png b/zh_CN/.gitbook/assets/640-12.png
new file mode 100644
index 0000000..ca14ace
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-12.png differ
diff --git a/zh_CN/.gitbook/assets/640-13.png b/zh_CN/.gitbook/assets/640-13.png
new file mode 100644
index 0000000..a6f074c
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-13.png differ
diff --git a/zh_CN/.gitbook/assets/640-14.png b/zh_CN/.gitbook/assets/640-14.png
new file mode 100644
index 0000000..9d519ff
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-14.png differ
diff --git a/zh_CN/.gitbook/assets/640-2.jpg b/zh_CN/.gitbook/assets/640-2.jpg
new file mode 100644
index 0000000..a131126
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-2.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-2.png b/zh_CN/.gitbook/assets/640-2.png
new file mode 100644
index 0000000..712c2da
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-2.png differ
diff --git a/zh_CN/.gitbook/assets/640-3.jpg b/zh_CN/.gitbook/assets/640-3.jpg
new file mode 100644
index 0000000..741a60c
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-3.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-3.png b/zh_CN/.gitbook/assets/640-3.png
new file mode 100644
index 0000000..00e0295
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-3.png differ
diff --git a/zh_CN/.gitbook/assets/640-4.jpg b/zh_CN/.gitbook/assets/640-4.jpg
new file mode 100644
index 0000000..f186ae4
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-4.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-4.png b/zh_CN/.gitbook/assets/640-4.png
new file mode 100644
index 0000000..5be378b
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-4.png differ
diff --git a/zh_CN/.gitbook/assets/640-5.jpg b/zh_CN/.gitbook/assets/640-5.jpg
new file mode 100644
index 0000000..f349a08
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-5.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-5.png b/zh_CN/.gitbook/assets/640-5.png
new file mode 100644
index 0000000..0853125
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-5.png differ
diff --git a/zh_CN/.gitbook/assets/640-6.jpg b/zh_CN/.gitbook/assets/640-6.jpg
new file mode 100644
index 0000000..afb849a
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-6.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-6.png b/zh_CN/.gitbook/assets/640-6.png
new file mode 100644
index 0000000..2dbb7d6
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-6.png differ
diff --git a/zh_CN/.gitbook/assets/640-7.jpg b/zh_CN/.gitbook/assets/640-7.jpg
new file mode 100644
index 0000000..9074985
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-7.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-7.png b/zh_CN/.gitbook/assets/640-7.png
new file mode 100644
index 0000000..0459222
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-7.png differ
diff --git a/zh_CN/.gitbook/assets/640-8.jpg b/zh_CN/.gitbook/assets/640-8.jpg
new file mode 100644
index 0000000..dcf1f93
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-8.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-8.png b/zh_CN/.gitbook/assets/640-8.png
new file mode 100644
index 0000000..4ab0a5b
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-8.png differ
diff --git a/zh_CN/.gitbook/assets/640-9.jpg b/zh_CN/.gitbook/assets/640-9.jpg
new file mode 100644
index 0000000..911923b
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-9.jpg differ
diff --git a/zh_CN/.gitbook/assets/640-9.png b/zh_CN/.gitbook/assets/640-9.png
new file mode 100644
index 0000000..304a64f
Binary files /dev/null and b/zh_CN/.gitbook/assets/640-9.png differ
diff --git a/zh_CN/.gitbook/assets/640.jpg b/zh_CN/.gitbook/assets/640.jpg
new file mode 100644
index 0000000..010f651
Binary files /dev/null and b/zh_CN/.gitbook/assets/640.jpg differ
diff --git a/zh_CN/.gitbook/assets/640.png b/zh_CN/.gitbook/assets/640.png
new file mode 100644
index 0000000..50c35db
Binary files /dev/null and b/zh_CN/.gitbook/assets/640.png differ
diff --git a/zh_CN/.gitbook/assets/add-new-segment.png b/zh_CN/.gitbook/assets/add-new-segment.png
new file mode 100644
index 0000000..5bf6e00
Binary files /dev/null and b/zh_CN/.gitbook/assets/add-new-segment.png differ
diff --git a/zh_CN/.gitbook/assets/app-log.png b/zh_CN/.gitbook/assets/app-log.png
new file mode 100644
index 0000000..c066d21
Binary files /dev/null and b/zh_CN/.gitbook/assets/app-log.png differ
diff --git a/zh_CN/.gitbook/assets/app-share.png b/zh_CN/.gitbook/assets/app-share.png
new file mode 100644
index 0000000..95bfb8c
Binary files /dev/null and b/zh_CN/.gitbook/assets/app-share.png differ
diff --git a/zh_CN/.gitbook/assets/create-app.png b/zh_CN/.gitbook/assets/create-app.png
new file mode 100644
index 0000000..8f5c28c
Binary files /dev/null and b/zh_CN/.gitbook/assets/create-app.png differ
diff --git a/zh_CN/.gitbook/assets/how-to-configure-azure-openai.png b/zh_CN/.gitbook/assets/how-to-configure-azure-openai.png
new file mode 100644
index 0000000..a0d3dd1
Binary files /dev/null and b/zh_CN/.gitbook/assets/how-to-configure-azure-openai.png differ
diff --git a/zh_CN/.gitbook/assets/image (1).png b/zh_CN/.gitbook/assets/image (1).png
new file mode 100644
index 0000000..c1d4744
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (1).png differ
diff --git a/zh_CN/.gitbook/assets/image (10).png b/zh_CN/.gitbook/assets/image (10).png
new file mode 100644
index 0000000..d30d2ab
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (10).png differ
diff --git a/zh_CN/.gitbook/assets/image (100).png b/zh_CN/.gitbook/assets/image (100).png
new file mode 100644
index 0000000..15b5f3c
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (100).png differ
diff --git a/zh_CN/.gitbook/assets/image (101).png b/zh_CN/.gitbook/assets/image (101).png
new file mode 100644
index 0000000..dcd40ac
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (101).png differ
diff --git a/zh_CN/.gitbook/assets/image (102).png b/zh_CN/.gitbook/assets/image (102).png
new file mode 100644
index 0000000..d5b1b11
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (102).png differ
diff --git a/zh_CN/.gitbook/assets/image (103).png b/zh_CN/.gitbook/assets/image (103).png
new file mode 100644
index 0000000..54b9924
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (103).png differ
diff --git a/zh_CN/.gitbook/assets/image (104).png b/zh_CN/.gitbook/assets/image (104).png
new file mode 100644
index 0000000..49fd5c1
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (104).png differ
diff --git a/zh_CN/.gitbook/assets/image (105).png b/zh_CN/.gitbook/assets/image (105).png
new file mode 100644
index 0000000..6a5bdf4
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (105).png differ
diff --git a/zh_CN/.gitbook/assets/image (106).png b/zh_CN/.gitbook/assets/image (106).png
new file mode 100644
index 0000000..d451097
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (106).png differ
diff --git a/zh_CN/.gitbook/assets/image (107).png b/zh_CN/.gitbook/assets/image (107).png
new file mode 100644
index 0000000..1d045ec
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (107).png differ
diff --git a/zh_CN/.gitbook/assets/image (108).png b/zh_CN/.gitbook/assets/image (108).png
new file mode 100644
index 0000000..27452ba
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (108).png differ
diff --git a/zh_CN/.gitbook/assets/image (109).png b/zh_CN/.gitbook/assets/image (109).png
new file mode 100644
index 0000000..c981540
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (109).png differ
diff --git a/zh_CN/.gitbook/assets/image (11).png b/zh_CN/.gitbook/assets/image (11).png
new file mode 100644
index 0000000..043610d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (11).png differ
diff --git a/zh_CN/.gitbook/assets/image (110).png b/zh_CN/.gitbook/assets/image (110).png
new file mode 100644
index 0000000..2004039
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (110).png differ
diff --git a/zh_CN/.gitbook/assets/image (111).png b/zh_CN/.gitbook/assets/image (111).png
new file mode 100644
index 0000000..8621e9f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (111).png differ
diff --git a/zh_CN/.gitbook/assets/image (12).png b/zh_CN/.gitbook/assets/image (12).png
new file mode 100644
index 0000000..b48dea6
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (12).png differ
diff --git a/zh_CN/.gitbook/assets/image (13).png b/zh_CN/.gitbook/assets/image (13).png
new file mode 100644
index 0000000..12f51f4
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (13).png differ
diff --git a/zh_CN/.gitbook/assets/image (14).png b/zh_CN/.gitbook/assets/image (14).png
new file mode 100644
index 0000000..ca4cc0b
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (14).png differ
diff --git a/zh_CN/.gitbook/assets/image (15).png b/zh_CN/.gitbook/assets/image (15).png
new file mode 100644
index 0000000..db1690f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (15).png differ
diff --git a/zh_CN/.gitbook/assets/image (16).png b/zh_CN/.gitbook/assets/image (16).png
new file mode 100644
index 0000000..97bdd29
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (16).png differ
diff --git a/zh_CN/.gitbook/assets/image (17).png b/zh_CN/.gitbook/assets/image (17).png
new file mode 100644
index 0000000..4508c5a
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (17).png differ
diff --git a/zh_CN/.gitbook/assets/image (18).png b/zh_CN/.gitbook/assets/image (18).png
new file mode 100644
index 0000000..0fba94f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (18).png differ
diff --git a/zh_CN/.gitbook/assets/image (19).png b/zh_CN/.gitbook/assets/image (19).png
new file mode 100644
index 0000000..2ea7d6f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (19).png differ
diff --git a/zh_CN/.gitbook/assets/image (2).png b/zh_CN/.gitbook/assets/image (2).png
new file mode 100644
index 0000000..db1690f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (2).png differ
diff --git a/zh_CN/.gitbook/assets/image (20).png b/zh_CN/.gitbook/assets/image (20).png
new file mode 100644
index 0000000..0686b1f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (20).png differ
diff --git a/zh_CN/.gitbook/assets/image (21).png b/zh_CN/.gitbook/assets/image (21).png
new file mode 100644
index 0000000..db58b18
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (21).png differ
diff --git a/zh_CN/.gitbook/assets/image (22).png b/zh_CN/.gitbook/assets/image (22).png
new file mode 100644
index 0000000..30795cc
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (22).png differ
diff --git a/zh_CN/.gitbook/assets/image (23).png b/zh_CN/.gitbook/assets/image (23).png
new file mode 100644
index 0000000..a2a9fe4
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (23).png differ
diff --git a/zh_CN/.gitbook/assets/image (24).png b/zh_CN/.gitbook/assets/image (24).png
new file mode 100644
index 0000000..c24e33d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (24).png differ
diff --git a/zh_CN/.gitbook/assets/image (25).png b/zh_CN/.gitbook/assets/image (25).png
new file mode 100644
index 0000000..a8c856e
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (25).png differ
diff --git a/zh_CN/.gitbook/assets/image (26).png b/zh_CN/.gitbook/assets/image (26).png
new file mode 100644
index 0000000..b06128f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (26).png differ
diff --git a/zh_CN/.gitbook/assets/image (27).png b/zh_CN/.gitbook/assets/image (27).png
new file mode 100644
index 0000000..bfde71f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (27).png differ
diff --git a/zh_CN/.gitbook/assets/image (28).png b/zh_CN/.gitbook/assets/image (28).png
new file mode 100644
index 0000000..165d029
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (28).png differ
diff --git a/zh_CN/.gitbook/assets/image (29).png b/zh_CN/.gitbook/assets/image (29).png
new file mode 100644
index 0000000..cd6f027
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (29).png differ
diff --git a/zh_CN/.gitbook/assets/image (3).png b/zh_CN/.gitbook/assets/image (3).png
new file mode 100644
index 0000000..1e24423
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (3).png differ
diff --git a/zh_CN/.gitbook/assets/image (30).png b/zh_CN/.gitbook/assets/image (30).png
new file mode 100644
index 0000000..13fa877
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (30).png differ
diff --git a/zh_CN/.gitbook/assets/image (31).png b/zh_CN/.gitbook/assets/image (31).png
new file mode 100644
index 0000000..f85301d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (31).png differ
diff --git a/zh_CN/.gitbook/assets/image (32).png b/zh_CN/.gitbook/assets/image (32).png
new file mode 100644
index 0000000..edb1468
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (32).png differ
diff --git a/zh_CN/.gitbook/assets/image (33).png b/zh_CN/.gitbook/assets/image (33).png
new file mode 100644
index 0000000..1bf35a9
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (33).png differ
diff --git a/zh_CN/.gitbook/assets/image (34).png b/zh_CN/.gitbook/assets/image (34).png
new file mode 100644
index 0000000..63dc5b2
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (34).png differ
diff --git a/zh_CN/.gitbook/assets/image (35).png b/zh_CN/.gitbook/assets/image (35).png
new file mode 100644
index 0000000..d23c1e3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (35).png differ
diff --git a/zh_CN/.gitbook/assets/image (36).png b/zh_CN/.gitbook/assets/image (36).png
new file mode 100644
index 0000000..c24e33d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (36).png differ
diff --git a/zh_CN/.gitbook/assets/image (37).png b/zh_CN/.gitbook/assets/image (37).png
new file mode 100644
index 0000000..3d09805
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (37).png differ
diff --git a/zh_CN/.gitbook/assets/image (38).png b/zh_CN/.gitbook/assets/image (38).png
new file mode 100644
index 0000000..9883560
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (38).png differ
diff --git a/zh_CN/.gitbook/assets/image (39).png b/zh_CN/.gitbook/assets/image (39).png
new file mode 100644
index 0000000..b4e0be6
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (39).png differ
diff --git a/zh_CN/.gitbook/assets/image (4).png b/zh_CN/.gitbook/assets/image (4).png
new file mode 100644
index 0000000..0ef5f75
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (4).png differ
diff --git a/zh_CN/.gitbook/assets/image (40).png b/zh_CN/.gitbook/assets/image (40).png
new file mode 100644
index 0000000..472d6e8
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (40).png differ
diff --git a/zh_CN/.gitbook/assets/image (41).png b/zh_CN/.gitbook/assets/image (41).png
new file mode 100644
index 0000000..1b285bc
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (41).png differ
diff --git a/zh_CN/.gitbook/assets/image (42).png b/zh_CN/.gitbook/assets/image (42).png
new file mode 100644
index 0000000..46e504d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (42).png differ
diff --git a/zh_CN/.gitbook/assets/image (43).png b/zh_CN/.gitbook/assets/image (43).png
new file mode 100644
index 0000000..ece9b01
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (43).png differ
diff --git a/zh_CN/.gitbook/assets/image (44).png b/zh_CN/.gitbook/assets/image (44).png
new file mode 100644
index 0000000..c6c01ed
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (44).png differ
diff --git a/zh_CN/.gitbook/assets/image (45).png b/zh_CN/.gitbook/assets/image (45).png
new file mode 100644
index 0000000..db58b18
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (45).png differ
diff --git a/zh_CN/.gitbook/assets/image (46).png b/zh_CN/.gitbook/assets/image (46).png
new file mode 100644
index 0000000..63cd1e7
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (46).png differ
diff --git a/zh_CN/.gitbook/assets/image (47).png b/zh_CN/.gitbook/assets/image (47).png
new file mode 100644
index 0000000..c0c7213
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (47).png differ
diff --git a/zh_CN/.gitbook/assets/image (48).png b/zh_CN/.gitbook/assets/image (48).png
new file mode 100644
index 0000000..db58b18
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (48).png differ
diff --git a/zh_CN/.gitbook/assets/image (49).png b/zh_CN/.gitbook/assets/image (49).png
new file mode 100644
index 0000000..eca6a43
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (49).png differ
diff --git a/zh_CN/.gitbook/assets/image (5).png b/zh_CN/.gitbook/assets/image (5).png
new file mode 100644
index 0000000..47df9c3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (5).png differ
diff --git a/zh_CN/.gitbook/assets/image (50).png b/zh_CN/.gitbook/assets/image (50).png
new file mode 100644
index 0000000..e46d582
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (50).png differ
diff --git a/zh_CN/.gitbook/assets/image (51).png b/zh_CN/.gitbook/assets/image (51).png
new file mode 100644
index 0000000..c16201b
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (51).png differ
diff --git a/zh_CN/.gitbook/assets/image (52).png b/zh_CN/.gitbook/assets/image (52).png
new file mode 100644
index 0000000..4d91b1a
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (52).png differ
diff --git a/zh_CN/.gitbook/assets/image (53).png b/zh_CN/.gitbook/assets/image (53).png
new file mode 100644
index 0000000..dbcb27d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (53).png differ
diff --git a/zh_CN/.gitbook/assets/image (54).png b/zh_CN/.gitbook/assets/image (54).png
new file mode 100644
index 0000000..a945f99
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (54).png differ
diff --git a/zh_CN/.gitbook/assets/image (55).png b/zh_CN/.gitbook/assets/image (55).png
new file mode 100644
index 0000000..8796ba5
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (55).png differ
diff --git a/zh_CN/.gitbook/assets/image (56).png b/zh_CN/.gitbook/assets/image (56).png
new file mode 100644
index 0000000..75ef270
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (56).png differ
diff --git a/zh_CN/.gitbook/assets/image (57).png b/zh_CN/.gitbook/assets/image (57).png
new file mode 100644
index 0000000..43831ce
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (57).png differ
diff --git a/zh_CN/.gitbook/assets/image (58).png b/zh_CN/.gitbook/assets/image (58).png
new file mode 100644
index 0000000..57664b7
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (58).png differ
diff --git a/zh_CN/.gitbook/assets/image (59).png b/zh_CN/.gitbook/assets/image (59).png
new file mode 100644
index 0000000..f3e176f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (59).png differ
diff --git a/zh_CN/.gitbook/assets/image (6).png b/zh_CN/.gitbook/assets/image (6).png
new file mode 100644
index 0000000..db1690f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (6).png differ
diff --git a/zh_CN/.gitbook/assets/image (60).png b/zh_CN/.gitbook/assets/image (60).png
new file mode 100644
index 0000000..55087c7
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (60).png differ
diff --git a/zh_CN/.gitbook/assets/image (61).png b/zh_CN/.gitbook/assets/image (61).png
new file mode 100644
index 0000000..9355e1d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (61).png differ
diff --git a/zh_CN/.gitbook/assets/image (62).png b/zh_CN/.gitbook/assets/image (62).png
new file mode 100644
index 0000000..cd55d00
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (62).png differ
diff --git a/zh_CN/.gitbook/assets/image (63).png b/zh_CN/.gitbook/assets/image (63).png
new file mode 100644
index 0000000..640e5aa
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (63).png differ
diff --git a/zh_CN/.gitbook/assets/image (64).png b/zh_CN/.gitbook/assets/image (64).png
new file mode 100644
index 0000000..74f46ef
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (64).png differ
diff --git a/zh_CN/.gitbook/assets/image (65).png b/zh_CN/.gitbook/assets/image (65).png
new file mode 100644
index 0000000..fa3b653
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (65).png differ
diff --git a/zh_CN/.gitbook/assets/image (66).png b/zh_CN/.gitbook/assets/image (66).png
new file mode 100644
index 0000000..6fa4306
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (66).png differ
diff --git a/zh_CN/.gitbook/assets/image (67).png b/zh_CN/.gitbook/assets/image (67).png
new file mode 100644
index 0000000..7630887
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (67).png differ
diff --git a/zh_CN/.gitbook/assets/image (68).png b/zh_CN/.gitbook/assets/image (68).png
new file mode 100644
index 0000000..bb00817
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (68).png differ
diff --git a/zh_CN/.gitbook/assets/image (69).png b/zh_CN/.gitbook/assets/image (69).png
new file mode 100644
index 0000000..bf08a9a
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (69).png differ
diff --git a/zh_CN/.gitbook/assets/image (7).png b/zh_CN/.gitbook/assets/image (7).png
new file mode 100644
index 0000000..2f6c180
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (7).png differ
diff --git a/zh_CN/.gitbook/assets/image (70).png b/zh_CN/.gitbook/assets/image (70).png
new file mode 100644
index 0000000..0e07879
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (70).png differ
diff --git a/zh_CN/.gitbook/assets/image (71).png b/zh_CN/.gitbook/assets/image (71).png
new file mode 100644
index 0000000..0ca666d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (71).png differ
diff --git a/zh_CN/.gitbook/assets/image (72).png b/zh_CN/.gitbook/assets/image (72).png
new file mode 100644
index 0000000..ece9b01
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (72).png differ
diff --git a/zh_CN/.gitbook/assets/image (73).png b/zh_CN/.gitbook/assets/image (73).png
new file mode 100644
index 0000000..0e46b7b
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (73).png differ
diff --git a/zh_CN/.gitbook/assets/image (74).png b/zh_CN/.gitbook/assets/image (74).png
new file mode 100644
index 0000000..db58b18
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (74).png differ
diff --git a/zh_CN/.gitbook/assets/image (75).png b/zh_CN/.gitbook/assets/image (75).png
new file mode 100644
index 0000000..357513b
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (75).png differ
diff --git a/zh_CN/.gitbook/assets/image (76).png b/zh_CN/.gitbook/assets/image (76).png
new file mode 100644
index 0000000..87d1a9f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (76).png differ
diff --git a/zh_CN/.gitbook/assets/image (77).png b/zh_CN/.gitbook/assets/image (77).png
new file mode 100644
index 0000000..0ca666d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (77).png differ
diff --git a/zh_CN/.gitbook/assets/image (78).png b/zh_CN/.gitbook/assets/image (78).png
new file mode 100644
index 0000000..95702f3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (78).png differ
diff --git a/zh_CN/.gitbook/assets/image (79).png b/zh_CN/.gitbook/assets/image (79).png
new file mode 100644
index 0000000..a06e6bc
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (79).png differ
diff --git a/zh_CN/.gitbook/assets/image (8).png b/zh_CN/.gitbook/assets/image (8).png
new file mode 100644
index 0000000..da33fb3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (8).png differ
diff --git a/zh_CN/.gitbook/assets/image (80).png b/zh_CN/.gitbook/assets/image (80).png
new file mode 100644
index 0000000..69f815c
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (80).png differ
diff --git a/zh_CN/.gitbook/assets/image (81).png b/zh_CN/.gitbook/assets/image (81).png
new file mode 100644
index 0000000..e972ddb
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (81).png differ
diff --git a/zh_CN/.gitbook/assets/image (82).png b/zh_CN/.gitbook/assets/image (82).png
new file mode 100644
index 0000000..dd1b133
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (82).png differ
diff --git a/zh_CN/.gitbook/assets/image (83).png b/zh_CN/.gitbook/assets/image (83).png
new file mode 100644
index 0000000..4bfe23d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (83).png differ
diff --git a/zh_CN/.gitbook/assets/image (84).png b/zh_CN/.gitbook/assets/image (84).png
new file mode 100644
index 0000000..99d8e0f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (84).png differ
diff --git a/zh_CN/.gitbook/assets/image (85).png b/zh_CN/.gitbook/assets/image (85).png
new file mode 100644
index 0000000..71c5637
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (85).png differ
diff --git a/zh_CN/.gitbook/assets/image (86).png b/zh_CN/.gitbook/assets/image (86).png
new file mode 100644
index 0000000..52b6ceb
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (86).png differ
diff --git a/zh_CN/.gitbook/assets/image (87).png b/zh_CN/.gitbook/assets/image (87).png
new file mode 100644
index 0000000..f9380eb
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (87).png differ
diff --git a/zh_CN/.gitbook/assets/image (88).png b/zh_CN/.gitbook/assets/image (88).png
new file mode 100644
index 0000000..d9a8400
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (88).png differ
diff --git a/zh_CN/.gitbook/assets/image (89).png b/zh_CN/.gitbook/assets/image (89).png
new file mode 100644
index 0000000..654a8b6
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (89).png differ
diff --git a/zh_CN/.gitbook/assets/image (9).png b/zh_CN/.gitbook/assets/image (9).png
new file mode 100644
index 0000000..16115b0
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (9).png differ
diff --git a/zh_CN/.gitbook/assets/image (90).png b/zh_CN/.gitbook/assets/image (90).png
new file mode 100644
index 0000000..b99288f
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (90).png differ
diff --git a/zh_CN/.gitbook/assets/image (91).png b/zh_CN/.gitbook/assets/image (91).png
new file mode 100644
index 0000000..f9380eb
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (91).png differ
diff --git a/zh_CN/.gitbook/assets/image (92).png b/zh_CN/.gitbook/assets/image (92).png
new file mode 100644
index 0000000..626b4e6
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (92).png differ
diff --git a/zh_CN/.gitbook/assets/image (93).png b/zh_CN/.gitbook/assets/image (93).png
new file mode 100644
index 0000000..4bcf779
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (93).png differ
diff --git a/zh_CN/.gitbook/assets/image (94).png b/zh_CN/.gitbook/assets/image (94).png
new file mode 100644
index 0000000..c78be69
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (94).png differ
diff --git a/zh_CN/.gitbook/assets/image (95).png b/zh_CN/.gitbook/assets/image (95).png
new file mode 100644
index 0000000..93526bc
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (95).png differ
diff --git a/zh_CN/.gitbook/assets/image (96).png b/zh_CN/.gitbook/assets/image (96).png
new file mode 100644
index 0000000..d369963
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (96).png differ
diff --git a/zh_CN/.gitbook/assets/image (97).png b/zh_CN/.gitbook/assets/image (97).png
new file mode 100644
index 0000000..d8bf8a4
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (97).png differ
diff --git a/zh_CN/.gitbook/assets/image (98).png b/zh_CN/.gitbook/assets/image (98).png
new file mode 100644
index 0000000..21af97c
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (98).png differ
diff --git a/zh_CN/.gitbook/assets/image (99).png b/zh_CN/.gitbook/assets/image (99).png
new file mode 100644
index 0000000..9a25a20
Binary files /dev/null and b/zh_CN/.gitbook/assets/image (99).png differ
diff --git a/zh_CN/.gitbook/assets/image-10.png b/zh_CN/.gitbook/assets/image-10.png
new file mode 100644
index 0000000..9043d52
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-10.png differ
diff --git a/zh_CN/.gitbook/assets/image-11.png b/zh_CN/.gitbook/assets/image-11.png
new file mode 100644
index 0000000..ba22b08
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-11.png differ
diff --git a/zh_CN/.gitbook/assets/image-13.png b/zh_CN/.gitbook/assets/image-13.png
new file mode 100644
index 0000000..90761dd
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-13.png differ
diff --git a/zh_CN/.gitbook/assets/image-14.png b/zh_CN/.gitbook/assets/image-14.png
new file mode 100644
index 0000000..21f2fcf
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-14.png differ
diff --git a/zh_CN/.gitbook/assets/image-15.png b/zh_CN/.gitbook/assets/image-15.png
new file mode 100644
index 0000000..74ef518
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-15.png differ
diff --git a/zh_CN/.gitbook/assets/image-16.png b/zh_CN/.gitbook/assets/image-16.png
new file mode 100644
index 0000000..7c5d7b3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-16.png differ
diff --git a/zh_CN/.gitbook/assets/image-17.png b/zh_CN/.gitbook/assets/image-17.png
new file mode 100644
index 0000000..e72af9e
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-17.png differ
diff --git a/zh_CN/.gitbook/assets/image-18.png b/zh_CN/.gitbook/assets/image-18.png
new file mode 100644
index 0000000..8210586
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-18.png differ
diff --git a/zh_CN/.gitbook/assets/image-19.png b/zh_CN/.gitbook/assets/image-19.png
new file mode 100644
index 0000000..a4af82d
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-19.png differ
diff --git a/zh_CN/.gitbook/assets/image-2.png b/zh_CN/.gitbook/assets/image-2.png
new file mode 100644
index 0000000..824cac9
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-2.png differ
diff --git a/zh_CN/.gitbook/assets/image-20.png b/zh_CN/.gitbook/assets/image-20.png
new file mode 100644
index 0000000..ad1e0f9
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-20.png differ
diff --git a/zh_CN/.gitbook/assets/image-3.png b/zh_CN/.gitbook/assets/image-3.png
new file mode 100644
index 0000000..1e57166
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-3.png differ
diff --git a/zh_CN/.gitbook/assets/image-4.png b/zh_CN/.gitbook/assets/image-4.png
new file mode 100644
index 0000000..ea6193a
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-4.png differ
diff --git a/zh_CN/.gitbook/assets/image-5.png b/zh_CN/.gitbook/assets/image-5.png
new file mode 100644
index 0000000..08850d3
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-5.png differ
diff --git a/zh_CN/.gitbook/assets/image-6.png b/zh_CN/.gitbook/assets/image-6.png
new file mode 100644
index 0000000..2dfdddd
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-6.png differ
diff --git a/zh_CN/.gitbook/assets/image-7.png b/zh_CN/.gitbook/assets/image-7.png
new file mode 100644
index 0000000..11fefe0
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-7.png differ
diff --git a/zh_CN/.gitbook/assets/image-8.png b/zh_CN/.gitbook/assets/image-8.png
new file mode 100644
index 0000000..86e2f98
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-8.png differ
diff --git a/zh_CN/.gitbook/assets/image-9.png b/zh_CN/.gitbook/assets/image-9.png
new file mode 100644
index 0000000..9043d52
Binary files /dev/null and b/zh_CN/.gitbook/assets/image-9.png differ
diff --git a/zh_CN/.gitbook/assets/image.png b/zh_CN/.gitbook/assets/image.png
new file mode 100644
index 0000000..c1d4744
Binary files /dev/null and b/zh_CN/.gitbook/assets/image.png differ
diff --git a/zh_CN/.gitbook/assets/notion.png b/zh_CN/.gitbook/assets/notion.png
new file mode 100644
index 0000000..8948c67
Binary files /dev/null and b/zh_CN/.gitbook/assets/notion.png differ
diff --git a/zh_CN/.gitbook/assets/openaiKey.png b/zh_CN/.gitbook/assets/openaiKey.png
new file mode 100644
index 0000000..f0e0605
Binary files /dev/null and b/zh_CN/.gitbook/assets/openaiKey.png differ
diff --git a/zh_CN/.gitbook/assets/service-api-page.png b/zh_CN/.gitbook/assets/service-api-page.png
new file mode 100644
index 0000000..01d53bd
Binary files /dev/null and b/zh_CN/.gitbook/assets/service-api-page.png differ
diff --git a/zh_CN/.gitbook/assets/sync-notion.png b/zh_CN/.gitbook/assets/sync-notion.png
new file mode 100644
index 0000000..fdb788e
Binary files /dev/null and b/zh_CN/.gitbook/assets/sync-notion.png differ
diff --git a/zh_CN/README.md b/zh_CN/README.md
new file mode 100644
index 0000000..280e22c
--- /dev/null
+++ b/zh_CN/README.md
@@ -0,0 +1,65 @@
+---
+description: Dify 一词源自 Define + Modify,意指定义并且持续的改进你的 AI 应用,它是为你而做的(Do it for you)。
+---
+
+# 欢迎使用 Dify
+
+如果你对诸如 GPT-4 之类的 LLM 技术的高速发展感到惊奇和兴奋,迫不及待地想用它做点什么有用的东西,可你的头脑里又有一些令人困惑的问题:
+
+* 我该如何“训练”一个基于我的内容的模型?
+* 怎么让 AI 知道 2021 年以后的事情?
+* 如何避免让 AI 跟用户胡说八道?
+* 微调(Fine-tuning)和嵌入(Embedding)是什么意思?
+
+那么,Dify 正好能满足你的需要。
+
+Dify 的目标是让开发者(甚至非开发者)可以快速基于大型语言模型搭建出有用的东西,并确保它是**可视化、可运营、可改进**的。
+
+> 我们塑造了工具,然后工具塑造了我们。——马歇尔·麦克卢汉(Marshall McLuhan)
+
+你可以使用 Dify 快速搭建一个 Web App,其生成的前端代码可以托管在 Dify 上。如果你想基于这个 Web App 进一步开发,你可以从 GitHub 中得到这些 [Template](advanced/based-on-frontend-templates.md),部署到任何地方(例如 Vercel 或你的服务器)。或者,你也可以基于 WebAPI 开发你自己的 Web 前端、移动 App…总之,为你省下了后端开发的工作。
+
+不止于此,Dify 的核心理念是在一个可视化的界面中创建、配置、改进你的应用。基于 LLM 的应用开发有一个持续改进的生命周期,你可能需要基于自己的内容让 AI 给予正确的回答,或是想提升 AI 的准确性和叙述风格,甚至让它去 YouTube 上下载一个字幕作为上下文。
+
+这当中将会有些逻辑设计、上下文增强、数据准备等需要花些功夫的事情,如果没有工具你可能会寸步难行…我们称这个过程为 **LLMOps**。
+
+### 下一步行动
+
+* 看看这些由 Dify 创建的应用
+* 在云端版中快速创建应用
+* 安装 Dify 到你的服务器
+
+> 可能只有少数公司有预算来构建和管理像 GPT-3 这样的大型语言模型 (LLM),但在下一个十年,将会有许多价值超过 10 亿美元的“第二层”公司建立起来。
+> ——Sam Altman
+
+正如 LLM 技术的快速发展一样,Dify 还是一个正在不断进步的产品,这份文档的内容和产品可能会有些出入。你可以在 GitHub 或 Discord 上与我们分享想法。
+
+### Q\&A
+
+**Q: 我能用 Dify 做什么?**
+
+A: Dify 是一个简单且能力丰富的自然语言编程工具。你可以用它搭建商用级应用和个人助理。如果你想自己开发应用,Dify 也能为你省下接入 OpenAI 的后端工作,并通过我们逐步提供的可视化运营能力,让你持续地改进和训练你的 GPT 模型。
+
+**Q: 如何使用 Dify 训练自己的模型?**
+
+A: 一个有价值的应用由 Prompt Engineering、上下文增强和 Fine-tune 三个环节组成。我们创造了一种 Prompt 结合编程语言的 Hybrid 编程方式(类似一个模版引擎),你可以轻松地完成长文本嵌入,或抓取用户输入的一个 YouTube 视频的字幕——这些都将作为上下文提交给 LLMs 进行计算。我们十分注重应用的可运营性,你的用户在使用 App 期间产生的数据,可用于分析、标记和持续训练。以上环节如果没有好的工具支持,可能会消耗你大量的时间。
+
+**Q: 如果要创建一个自己的应用,我需要准备什么?**
+
+A: 选择一个诸如 OpenAI 的模型供应商,我们的云端版内置了 GPT-4 的试用模型,你也可以填入自己的 API Key。随后,你就可以基于 Prompt 或自己的上下文创建一个应用。
+
+**Q: Dify 搭建的应用能够保持会话吗?**
+
+A: 可以,如果你创建了对话型应用,它内置了会话保存的能力,在生成的 WebApp 和 API 中都支持。
+
+**Q: LLMOps 和 MLOps 有什么区别?**
+
+A: 过去的 MLOps 是让开发者从零开始训练模型,而 LLMOps 则是基于诸如 GPT-4 这样的强大模型开发 AI 原生应用,你可以查阅这篇[文章](https://blog.dify.ai/unleashing-the-power-of-llm-embeddings-with-datasets-revolutionizing-mlops/)。
+
+**Q: 提供哪些界面语言?**
+
+A: 现已支持英文与中文,你可以为我们贡献语言包。
+
+**Q: LangGenius 是什么?**
+
+A: LangGenius 是 Dify 正式发布前的产品名称,我们还在更新所有的文档。Dify 一词源自 **D**efine + Mo**dify**,意指定义并且持续的改进你的 AI 应用,它是为你而做的(Do it for you)。
diff --git a/zh_CN/SUMMARY.md b/zh_CN/SUMMARY.md
new file mode 100644
index 0000000..26c046a
--- /dev/null
+++ b/zh_CN/SUMMARY.md
@@ -0,0 +1,66 @@
+# Table of contents
+
+## 入门
+
+* [欢迎使用 Dify](README.md)
+* [使用云服务](getting-started/cloud.md)
+* [部署社区版](getting-started/install-self-hosted/README.md)
+ * [Docker Compose 部署](getting-started/install-self-hosted/docker-compose.md)
+ * [本地源码启动](getting-started/install-self-hosted/local-source-code.md)
+ * [单独启动前端 Docker 容器](getting-started/install-self-hosted/start-the-frontend-docker-container.md)
+ * [环境变量说明](getting-started/install-self-hosted/environments.md)
+* [什么是 LLMOps?](getting-started/what-is-llmops.md)
+* [常见问题](getting-started/faq/README.md)
+ * [本地部署相关](getting-started/faq/install-faq.md)
+ * [LLM 配置与使用](getting-started/faq/llms-use-faq.md)
+ * [API 使用](getting-started/faq/api-use-faq.md)
+
+## 应用
+
+* [创建应用](application/creating-an-application.md)
+* [快速发布 AI 站点](application/launch-your-webapp-quickly.md)
+* [设计提示词&编排应用](application/prompt-engineering/README.md)
+ * [文本生成型应用](application/prompt-engineering/text-generation-application.md)
+ * [对话型应用](application/prompt-engineering/conversation-application.md)
+* [基于 APIs 开发](application/developing-with-apis.md)
+* [日志与标注](application/logs.md)
+
+## Web 应用
+
+* [概览](web-application/overview.md)
+* [文本生成型应用](web-application/text-generator.md)
+* [对话型应用](web-application/conversation-application.md)
+
+## 探索
+
+* [智聊](explore/chat.md)
+
+## 进阶使用
+
+* [数据集管理](advanced/datasets/README.md)
+ * [从 Notion 导入数据](advanced/datasets/sync-from-notion.md)
+* [插件](advanced/ai-plugins.md)
+* [基于前端模版再开发](advanced/based-on-frontend-templates.md)
+* [模型配置](advanced/model-configuration/README.md)
+ * [接入 Hugging Face 上的开源模型](advanced/model-configuration/hugging-face.md)
+ * [接入 Replicate 上的开源模型](advanced/model-configuration/replicate.md)
+* [更多集成](advanced/more-integration.md)
+
+## 使用案例
+
+* [如何训练出专属于“你”的问答机器人?](use-cases/train-a-qa-chatbot-that-belongs-to-you.md)
+* [教你十几分钟不用代码创建 Midjourney 提示词机器人](use-cases/create-a-midjoureny-prompt-word-robot-with-zero-code.md)
+* [零代码,使用 Dify 两分钟接入企业微信](use-cases/integrate-with-wecom-using-dify.md)
+* [构建一个 Notion AI 助手](use-cases/build-an-notion-ai-assistant.md)
+* [如何在几分钟内创建一个带有业务数据的官网 AI 智能客服](use-cases/create-an-ai-chatbot-with-business-data-in-minutes.md)
+
+## 社区
+
+* [寻求支持](community/support.md)
+* [开源协议](community/open-source.md)
+* [数据安全](community/data-security.md)
+
+## 用户协议
+
+* [服务协议](user-agreement/terms-of-service.md)
+* [隐私协议](user-agreement/privacy-policy.md)
diff --git a/zh_CN/advanced/ai-plugins.md b/zh_CN/advanced/ai-plugins.md
new file mode 100644
index 0000000..e8e0367
--- /dev/null
+++ b/zh_CN/advanced/ai-plugins.md
@@ -0,0 +1,5 @@
+# 插件
+
+{% hint style="info" %}
+插件是 Dify 即将推出的功能,你可以将插件纳入 App 编排之中,通过一个 API 或 WebApp 访问具备插件能力的 AI 应用。Dify 兼容了 ChatGPT Plugins 标准的插件,并提供了一些原生插件。
+{% endhint %}
diff --git a/zh_CN/advanced/based-on-frontend-templates.md b/zh_CN/advanced/based-on-frontend-templates.md
new file mode 100644
index 0000000..53d556e
--- /dev/null
+++ b/zh_CN/advanced/based-on-frontend-templates.md
@@ -0,0 +1,38 @@
+# 基于前端模版再开发
+
+如果你是从头开发新产品,或者正处于产品原型设计阶段,你可以使用 Dify 快速发布 AI 站点。与此同时,Dify 希望开发者能够充分自由地创造不同形式的前端应用,为此我们提供了:
+
+* **SDK**,用于在各种语言中快速接入 Dify API
+* **WebApp Template**,每种类型应用的 WebApp 开发脚手架
+
+WebApp Template 是基于 MIT 协议开源的,你可以充分自由地修改并部署它们,以实现 Dify 的所有能力;也可以将其作为实现你自己 App 的参考代码。
+
+你可以在 GitHub 中找到这些 Template:
+
+* [对话型应用](https://github.com/langgenius/webapp-conversation)
+* [文本生成型应用](https://github.com/langgenius/webapp-text-generator)
+
+使用 WebApp 模版最快的方法就是在 GitHub 中点击「使用这个模版」,它相当于 Fork 了一个新的仓库。随后你需要配置 Dify 的 App ID 和 API Key,类似这样:
+
+```javascript
+export const APP_ID = ''
+export const API_KEY = ''
+```
+
+更多配置见 `config/index.ts`:
+```typescript
+export const APP_INFO: AppInfo = {
+  "title": 'Chat APP',
+  "description": '',
+  "copyright": '',
+  "privacy_policy": '',
+  "default_language": 'zh-Hans'
+}
+
+export const isShowPrompt = true
+export const promptTemplate = ''
+```
+
+每一种 WebApp 模版都提供了 README 文件,内含部署方式的说明。通常,WebApp 模版都包含了一个轻后端服务,这是为了确保开发者的 API Key 不会直接暴露给用户。
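「轻后端」的思路可以用一小段示意代码说明:前端只把用户输入发给轻后端,由轻后端在服务端注入 API Key 后再转发请求。下面是一个极简的示意(其中 `DIFY_API_BASE`、`DIFY_API_KEY` 与 `/chat-messages` 路径均为假设的示例命名,并非模版的实际实现):

```javascript
// 轻后端的核心思路示意:API Key 只保存在服务端,
// 前端请求到达轻后端后,由服务端注入鉴权头再转发。
// 注意:DIFY_API_BASE、DIFY_API_KEY 与 /chat-messages 路径均为假设的示例命名。
function buildProxyRequest(userPayload, env) {
  return {
    url: `${env.DIFY_API_BASE}/chat-messages`,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // 密钥在服务端注入,浏览器端自始至终拿不到它
      Authorization: `Bearer ${env.DIFY_API_KEY}`,
    },
    body: JSON.stringify(userPayload),
  }
}
```

这样,密钥只存在于服务端的环境变量中,浏览器端的代码与网络请求里都不会出现它。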
+
+这些 WebApp 模版能够帮助你快速搭建起 AI 应用原型,并使用 Dify 的所有能力。如果你基于它们开发了自己的应用或新的模版,欢迎你与我们分享。
diff --git a/zh_CN/advanced/datasets/README.md b/zh_CN/advanced/datasets/README.md
new file mode 100644
index 0000000..7798762
--- /dev/null
+++ b/zh_CN/advanced/datasets/README.md
@@ -0,0 +1,114 @@
+# 数据集管理
+
+大多数语言模型采用较为陈旧的训练数据,并且对每次请求的上下文有长度限制。例如 GPT-3.5 是基于 2021 年的语料进行训练的,且有每次约 4K Token 的限制。这意味着开发者如果想让 AI 应用基于最新的、私有的上下文对话,必须使用类似嵌入(Embedding)之类的技术。
+
+Dify 的数据集功能可以使开发者(甚至非技术人员)以简单的方式管理数据集,并自动集成至 AI 应用中。你只需准备文本内容,例如:
+
+* 长文本内容(TXT、Markdown、JSONL 甚至是 PDF 文件)
+* 结构化数据(CSV、Excel 等)
+
+另外,我们正在逐步支持从诸多数据源同步数据至数据集,包括:
+
+* 网页
+* Notion
+* GitHub
+* 数据库
+* ……
+
+{% hint style="info" %}
+**情景**:如果你的公司想基于现有知识库和产品文档建立一个 AI 客服助手,你可以在 Dify 中将文档上传至数据集,并建立一个对话型应用。这在过去可能需要花费你数周的时间,且难以持续维护。
+{% endhint %}
+
+### 数据集与文档
+
+在 Dify 中,**数据集(Datasets)**是一些**文档(Documents)**的集合。一个数据集可以被整体集成至一个应用中作为上下文使用。文档可以由开发者或运营人员上传,或由其它数据源同步(通常对应数据源中的一个文件单位)。
+
+**上传文档的步骤:**
+
+1. 上传你的文件,通常是长文本文件或表格文件;
+2. 分段、清洗并预览;
+3. 由 Dify 提交至 LLM 供应商嵌入为向量数据并存储;
+4. 为文档设置元数据;
+5. 可以在应用中使用了🎉!
+
+### 创建数据集
+
+在 Dify 主导航栏中点击**数据集**,在该页面你可以看到已有的数据集。你可以点击**创建数据集**进入创建向导:
+
+* 如果你已经准备好了文件,可以从上传文件开始
+* 如果你还没有准备好文档,可以先创建一个空数据集
+
+如果你在创建数据集时选择了使用外部数据源,该数据集的类型不可更改。这是为了防止单一数据集存在多数据源而造成的管理困难。如果你需要使用多个数据源,建议创建多个数据集。
+
+#### 编辑良好的数据集描述
+
+当一个应用中引用多个数据集时,AI 会根据用户的提问和数据集的描述来决定使用哪个数据集来回答用户的问题。因此,良好的数据集描述能提升 AI 选择数据集的准确率。
+
+编写良好数据集描述的要点,是写清楚数据集包含的内容和特点。**数据集的描述建议以这样的句式开头:`仅当你想要回答的问题是关于以下内容时有用:具体描述`**。例如,一个房地产数据集的描述:
+
+> 仅当你想要回答的问题是关于以下内容时有用: 2010 年到 2020 年的全球房地产市场数据。这些数据包括每个城市的平均房价、房产销售量、房屋类型等信息。此外,该数据集还包括了一些经济指标,如 GDP、失业率等,以及一些社会指标,如人口数量、教育水平等,这些指标可以帮助分析房地产市场的趋势和影响因素。
+>
+> 通过这些数据,我们可以了解全球房地产市场的发展趋势,分析各个城市的房价变化,以及了解经济和社会因素对房地产市场的影响。
+
+### 上传文档
+
+* 选择你要上传的文件,支持批量上传;
+* 预览全文;
+* 进行分段和清洗;
+* 等待 Dify 为你处理这些数据,通常该步骤在 LLM 供应商中需要消耗 Token。
+
+### 文本分段与清洗
+
+文本数据的分段与清洗是指 Dify 自动将你的数据进行段落分段 & 向量化处理,使得用户的提问(输入)能匹配到相关的文本段落(Q to P),最后输出结果。
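「分段」这一步可以用一小段示意代码来理解:把长文本按段落切开,再拼装成不超过某个长度预算的片段。这只是帮助理解的粗略示意(以字符数近似 Token 预算),实际的分段与清洗由 Dify 自动完成:

```javascript
// 将长文本按最大长度粗略分段的示意。
// maxLen 以字符数近似 Token 预算,属于示意性假设,并非 Dify 的实际实现。
function splitIntoSegments(text, maxLen) {
  // 先按空行/换行切出段落,并去掉空白段
  const paragraphs = text.split(/\n+/).map(p => p.trim()).filter(Boolean)
  const segments = []
  let current = ''
  for (const p of paragraphs) {
    // 超出预算就收束当前分段,开启新分段
    if (current && (current.length + p.length) > maxLen) {
      segments.push(current)
      current = p
    } else {
      current = current ? current + '\n' + p : p
    }
  }
  if (current) segments.push(current)
  return segments
}
```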
+
+上传一个数据集的文档,你需要选择文本的**索引方式**来指定数据的匹配方式。这会影响到 AI 在回复问题时的准确度。
+
+**高质量模式**下,将调用 OpenAI 的嵌入接口进行处理,以在用户查询时提供更高的准确度。
+
+**经济模式**下,会使用离线的向量引擎、关键词索引等方式,降低了准确度但无需花费 Token。
+
+**Q\&A 分段模式**下,与上述普通的「Q to P」(问题匹配文本段落)匹配模式不同,它采用「Q to Q」(问题匹配问题)的匹配方式:在文档分段后,通过总结为每一个分段生成 Q\&A 匹配对;当用户提问时,系统会找出与之最相似的问题,然后返回对应的分段作为答案。这种方式更加精确,因为它直接针对用户问题进行匹配,可以更准确地获取用户真正需要的信息。
+
+> 问题文本是具有完整语法结构的自然语言,而不是文档检索任务中的一些关键字,所以 Q to Q (问题匹配问题)的模式会令语意和匹配更加清晰,并同时满足一些高频和高相似度问题的提问场景。
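「匹配」的思路可以用一段简化代码来理解:将文本向量化后,计算用户提问与候选文本(Q to P 时是段落,Q to Q 时是为分段生成的问题)的相似度,取最相似的一条。下面用最简单的词频向量和余弦相似度做示意;实际场景中,向量由嵌入模型(如 OpenAI Embedding)生成,这里的实现仅为帮助理解:

```javascript
// 把文本转成简化的词频向量(仅为示意,实际使用嵌入模型)
function toVector(text) {
  const counts = {}
  for (const w of text.toLowerCase().split(/\s+/).filter(Boolean)) {
    counts[w] = (counts[w] || 0) + 1
  }
  return counts
}

// 余弦相似度:两个向量方向越接近,得分越接近 1
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0
  for (const k in a) { na += a[k] * a[k]; if (b[k]) dot += a[k] * b[k] }
  for (const k in b) nb += b[k] * b[k]
  return dot === 0 ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// 在候选文本(Q to P 的段落,或 Q to Q 的问题)中找出与 query 最相似的一条
function bestMatch(query, candidates) {
  const qv = toVector(query)
  let best = null, bestScore = -1
  for (const c of candidates) {
    const score = cosine(qv, toVector(c))
    if (score > bestScore) { bestScore = score; best = c }
  }
  return best
}
```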
+
+