Compare commits

...

137 Commits

Author SHA1 Message Date
Henry Heng 54ff43e8f1
Bugfix/HF custom endpoint (#2811)
include fix for hf custom endpoint
2024-07-16 21:42:24 +01:00
Ong Chung Yau 95b2cf7b7f
Feature/extract import all (#2796)
* use existing route to get all chatflows

* add export all chatflows functionality

* add read exported all chatflows json file functionality

* add save chatflows functionality in server

* chore rename saveChatflows to importChatflows and others

* chore rewrite snackbar message

* fix import chatflows when no data in chatflows db

* add handle when import file array length is 0

* chore update and add meaningful comments in importChatflows

* update method of storing flowdata for importChatflows function

* Refresh/redirect to chatflows when import is successful

* fix lint

---------

Co-authored-by: Ilango <rajagopalilango@gmail.com>
2024-07-16 09:47:41 +08:00
Henry Heng 074bb738a3
Release/1.8.4 (#2805)
* 🥳 flowise release 1.8.4

* 🥳 flowise-components release 1.8.6
2024-07-15 15:34:33 +01:00
Henry Heng 78e60e22d2
Bugfix/Undefined substring error (#2804)
fix undefined substring error
2024-07-15 15:16:56 +01:00
Neal Beeken 9e88c45051
feat: add driverInfo to mongodb component (#2779)
* feat: add driverInfo to mongodb component

NODE-6240

* chore: add a getVersion utility function
2024-07-15 12:24:00 +01:00
Henry Heng 363d1bfc44
Chore/update deprecating nodes (#2540)
* update deprecating nodes

* add filters use cases to marketplace

* update log level
2024-07-12 18:37:57 +01:00
Asharib Ali 9ea439d135
Add Perplexity AI Search Tool to Marketplaces/Tools (#2771)
add the perplexity_ai_search tool
2024-07-12 18:01:36 +01:00
Pavlo Paliychuk 1015e1193f
chore: Bump zep cloud sdk version and clean up zep cloud vector store node (#2767) 2024-07-12 18:01:00 +01:00
William Espegren 7166317482
Fix docker command (#2743)
* fix docker compose

* correct docker compose command
2024-07-12 17:57:58 +01:00
Henry Heng 3cbbd59242
Bugfix/Enum type tools for gemini (#2766)
fix enum type tools for gemini
2024-07-09 00:25:18 +01:00
Ahmed Osman 90558ca688
FIX #2617 Cheerio Web Crawler doesn't work with large sites (#2678)
* FIX #2617 Big sites scan error

* FIX #2617 Big sites scan error - review fix

---------

Co-authored-by: Ahmed Osman <ahmed.osman@evolpe.pl>
2024-07-05 11:34:47 +01:00
Arun Lodhi b1e38783e4
Bugfix/observation-includes-not-function (#2744)
* Bugfix: observation?.includes is not a function

* Check type of observation before checking source document prefix

* lint-fix

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-07-05 11:26:05 +01:00
Mubashir Shariq dfdeb02b3a
Feat/added chatBaiduWenxin chat model (#2752)
* added chatBaiduWenxin model

* fix linting

* fixed linting

* added baidu secret key
2024-07-05 11:25:37 +01:00
William Espegren cacbfa8162
feat: Add limit parameter to Spider tool (#2762)
* feat: Add limit parameter to Spider tool

* fix pnpm lint
2024-07-05 11:23:34 +01:00
William Espegren 656f6cad81
Feature/Spider (open-source web scraper & crawler) (#2738)
* Add Spider Scraper & Crawler

* fix pnpm lint

* chore: Update metadata to be correct format

* fix pnpm lint
2024-07-02 00:00:52 +01:00
Henry Heng efc6e02828
Bugfix/Add showagent message when agentflow (#2749)
add showagent message when agentflow
2024-07-01 18:46:10 +01:00
Mubashir Shariq 512df4197c
Bugfix/broken icon display on market place tab (#2737)
* fixed broken display on marketplace tab

* updated

* updated backgroundImg to background when no imgSrc
2024-07-01 16:28:19 +01:00
Rogério Chaves 4d174495dc
Fix langwatch link (#2748) 2024-07-01 16:20:42 +01:00
Henry Heng 15a416a58f
Bugfix/Verify apikey params typo (#2742)
fix verify apikey params typo
2024-06-28 22:09:12 +01:00
Aman Soni e69fee1375
Embed chat configuration updated (#2723) 2024-06-27 12:08:29 +01:00
Henry Heng cc24f94358
Release/1.8.3 (#2730)
* release flowise 1.8.3

* update flowise-components to 1.8.5

* Update pnpm-lock.yaml
2024-06-26 14:55:30 +01:00
Henry Heng b55f87cc40
Feature/FireCrawl (#2728)
* add firecrawl

* Update FireCrawl.ts (#2692)

---------

Co-authored-by: Eric Ciarla <43451761+ericciarla@users.noreply.github.com>
2024-06-26 14:40:43 +01:00
Henry Heng 7067f90153
Release/1.8.3 (#2727)
release flowise 1.8.3
2024-06-26 14:01:26 +01:00
Henry Heng d0354bb25c
Chore/update flowise embed version (#2722)
* update flowise-embed version on lock file

* add agent messages to share chatbot
2024-06-26 02:18:08 +01:00
Henry Heng 96dfedde6e
Bugfix/Add check for setError (#2721)
add check for setError
2024-06-25 17:54:38 +01:00
Henry Heng 1367f095d4
Bugfix/Fix export lead email (#2717)
fix export lead email
2024-06-25 00:23:16 +01:00
Henry Heng 109b0367cc
Bugfix/Pinecone Text Key (#2708)
* add source tag to pinecone

* update pinecone

* add text key to pinecone

* update pinecone version, and singleton
2024-06-24 02:08:30 +01:00
Saurav Maheshkar e2ae524edd
chore: move configurations to `package.json` (#2613)
feat: refactor to have minimal config files
2024-06-21 22:13:13 +01:00
rennokki eff1336b82
update: VoyageAI Models (#2688)
* Updated VoyageAI Models

* Update VoyageAI Rerankers

* Update VoyageAIRerankRetriever.ts

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-06-21 16:52:54 +01:00
YISH 18b83333d3
Fix `observation?.includes is not a function` (#2696) 2024-06-21 16:45:20 +01:00
Rogério Chaves 0fc5e3d0c5
Add LangWatch integration (#2677)
Add langwatch integration
2024-06-21 16:26:55 +01:00
Harsha aec9e7a3b7
Bug fix - ChatBot Share - CURL with i/p config (#2672) 2024-06-21 16:11:34 +01:00
YISH 83ecc88b35
feat (server): add support for setting listening host (#2604) 2024-06-21 14:51:03 +01:00
Henry Heng f811fc4e5d
Chore/Sonnet 3.5 (#2698)
* add gemini flash

* add gemini flash to vertex

* add gemini-1.5-flash-preview to vertex

* add azure gpt 4o

* add claude 3.5 sonnet
2024-06-21 14:23:06 +01:00
Daniel D'Abate d4f80394d3
Feature - Add option to start a new session with each interaction with the Chatflow tool (#2633)
* Feature - Add option to start a new session with each interaction with the Chatflow tool

* ChatflowTool - Create random chatId when startNewSession is set
2024-06-20 00:03:45 +01:00
Henry Heng 842bfc66fe
Feature/Add ability to upload image url when calling API (#2683)
add ability to upload image url when calling API
2024-06-19 23:26:56 +01:00
Henry Heng 72e5287343
Feature/Add tool choices to openai assistant (#2682)
add tool choices to openai
2024-06-19 22:41:53 +01:00
Aman Soni 8bb841641e
Embed Chat configuration updated (#2648) 2024-06-19 21:12:41 +01:00
Henry Heng b662dd79c6
Feature/Add Text Key to Pinecone (#2681)
* add source tag to pinecone

* update pinecone

* add text key to pinecone

* update pinecone version, and singleton
2024-06-19 21:10:09 +01:00
Henry Heng 1849637af8
Bugfix/Update retriever tool wordings (#2679)
update retriever tool wordings
2024-06-19 18:49:01 +01:00
Henry Heng 21743656a8
Feature/Add multi query retriever (#2676)
add multi query retriever
2024-06-19 14:52:58 +01:00
Henry Heng b5ead0745b
Feature/Add filter to Postgres VectorStore (#2674)
* add filter to pgvs

* fix types
2024-06-19 13:44:20 +01:00
YISH 371e632a2c
Fix apikey not URL safe (#2602) 2024-06-14 11:35:25 +01:00
Mohamed Akram c34eb8ee15
Unstructured Upsert bug (#2628)
* Unstructured Upsert bug
When upserting via the API, the uploaded files are of type pdfFile, txtFile, etc.,
but the code reads only fileObject, which is the file uploaded via the button

* Update UnstructuredFile.ts

fixed linting error

---------

Co-authored-by: Mohamed Akram <makram@ntgclarity.com>
2024-06-14 02:39:46 +01:00
Henry Heng f2c6a1988f
Chore/models update (#2634)
* add gemini flash

* add gemini flash to vertex

* add gemini-1.5-flash-preview to vertex

* add azure gpt 4o
2024-06-13 14:35:35 +01:00
Daniel D'Abate 76c5e6a893
[Feature] Add sort capabilities to ChatFlow and AgentFlow tables (#2609)
Add sort capabilities to ChatFlow and AgentFlow tables
2024-06-12 21:26:44 +01:00
Henry Heng 3ab0d99711
Bugfix/Correctly throw 401 error when unauthorized (#2626)
correctly throw 401 error when unauthorized
2024-06-12 19:10:42 +01:00
Henry Heng 5ba468b4cc
🥳 flowise@1.8.2 release (#2625) 2024-06-11 22:39:59 +01:00
Jared McQueen 5e4d640ed7
fix totalChars accumulator for undefined pageContent (#2619) 2024-06-11 22:05:04 +01:00
jiabaow 6fb775fe95
add Fireworks LLM (#2597)
* add Fireworks LLM

* support multiple models

* Update Fireworks.ts

* fix linting

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-06-11 21:56:34 +01:00
Mingxin Hou 88ee9b09a7
Fireworks ai chat model (#2596)
* fireworks chat model

* add chatFireworks to streamAvaliableLLMs

* add model parameter input

* Update ChatFireworks.ts

* fix linting

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-06-11 21:56:22 +01:00
Torsten Raudssus 9f9aff34f8
Adding pnpm-store to gitignore (#2600) 2024-06-11 16:29:28 +01:00
Torsten Raudssus 66e1296a06
Searxng Tool implementation (#2599)
* Searxng Tool implementation, first commit with most functionality

* Fixed complains of pnpm lint

* Picky linter
2024-06-11 16:29:07 +01:00
Saurav Maheshkar f1e78d870e
gh: refactor community files (#2572)
* gh: refactor community files

* fix: links to other files

* docs: move to i18n
2024-06-07 15:49:24 +01:00
Daniel D'Abate be3a887e68
Improvement - Reduce size of final docker image with multistage build (#2598)
* Reduce size of final docker image with multistage build

* Dockerfile - Move PUPPETEER_SKIP_DOWNLOAD flag to build stage
2024-06-07 15:35:57 +01:00
不知火 Shiranui c8939dc2a6
Add data source adapter of MariaDB (#2595)
fix(datasource.ts): add mariadb
2024-06-07 12:14:06 +01:00
Daniel D'Abate 34251fa336
Bugfix - Add x-forwarded-proto as source for http protocol for base url and add base url field to ChatflowTool (#2592) 2024-06-07 12:04:25 +01:00
Daniel D'Abate cb93d9d557
Bugfix - JS function "execute" from node crashing the frontend and response is hidden (#2589)
* Bugfix - Fix crash when executing JS function from node and fix hidden response

* add toString() to code executed result

* Change height of CodeEditor to make it bigger, but still make the result visible below.

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-06-06 14:17:35 +01:00
toi500 5899e50c54
Adding Agentflow Template (#2586)
adding agentflow template

Co-authored-by: toi500 <toi500@gmail.com>
2024-06-05 21:55:45 +01:00
Aman Soni e0a03ad46d
HTML Embed Chat popup configuration updated (#2583)
* HTML Embed Chat popup configuration updated

* HTML Embed Chat popup configuration

* Update EmbedChat.jsx

update spacing to be consistent

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-06-05 21:55:27 +01:00
Henry Heng 582dcc8508
Feature/add status check ping (#2579)
add status check ping
2024-06-04 18:45:45 +01:00
Erasmo Pinheiro 5a73eaa588
fix: Change the CMD flowise command in Dockerfile to be the same as in the document (#2563)
* fix: Change the CMD flowise command to be the same as in the documentation.

* Remove unnecessary npx command from dockerfile

* Command was replaced by the entrypoint in Docker Compose

---------

Co-authored-by: Erasmo De Souza Pinheiro <erasmodesouzapinheiro@Erasmos-MacBook-Air.local>
2024-06-04 16:23:30 +01:00
Henry Heng 4ec8376efa
Bugfix/get rid of double quotes when replacing variable value (#2577)
get rid of double quotes when replacing variable value
2024-06-04 15:44:27 +01:00
Daniel D'Abate 5ba9493b30
Feature - Env variable to disable ChatFlow reuse (#2559) 2024-06-04 10:11:57 +01:00
Jared McQueen bdbb6f850a
adding a document loader returns the correct id (#2538) 2024-06-04 02:57:30 +01:00
Saurav Maheshkar 55f52c4d50
feat(ci): enable `pnpm` caching in CI (#2530) 2024-06-04 02:57:07 +01:00
Jared McQueen 7eb9341fdc
adding url to source for all web scrapers (#2537) 2024-06-04 02:50:43 +01:00
Daniel D'Abate a799ac8087
Improve OpenSearchURL credential storing user and password in separate fields from the URL (#2549)
OpenSearch - Improve OpenSearchURL credential storing user and password in separate fields from the URL
2024-06-04 02:31:36 +01:00
Henry 76abd20e85 🥳 flowise-components@1.8.2 minor bugfix release 2024-06-03 13:34:21 +01:00
Henry Heng 272fd914bd
Bugfix/supabase upsert ids (#2561)
* fix upserting same id with supabase

* remove dedicated addvectors logic for ids

* Update pnpm-lock.yaml

* add fix for null id column
2024-06-03 13:09:59 +01:00
Henry Heng f2a0ffe542
Bugfix/Check for proper thread id and avoid throwing error (#2551)
check for proper thread id and avoid throwing error
2024-06-02 02:41:48 +01:00
Henry Heng 8c66d2c735
Bugfix/Avoid passing runnable assign when worker agent has no input variables (#2550)
avoid passing runnable assign when worker agent has no input variables
2024-06-02 02:33:40 +01:00
Henry Heng e15e6fafdc
Bugfix/Disable output prediction from llmchain streaming (#2543)
disable output prediction from llmchain streaming
2024-06-01 12:37:00 +01:00
Daniel D'Abate e5f0ca2dd3
Fix OpenSearch vector store upsert (#2545) 2024-06-01 12:36:39 +01:00
Henry 1d9927027d 🥳 flowise@1.8.1 minor bugfix release 2024-06-01 00:09:38 +01:00
Henry c42ef95a15 🥳 flowise-ui@1.8.1 minor bugfix release 2024-06-01 00:09:10 +01:00
Henry f64931bfcc 🥳 flowise-components@1.8.1 minor bugfix release 2024-06-01 00:08:49 +01:00
Henry Heng 6a58ae4e80
Bugfix/In-mem VS not able to load document (#2542)
fix where in-mem VS not able to load document
2024-05-31 23:41:21 +01:00
Henry Heng 04e0ce1783
Chore/update flowise-embed version on lock file (#2535)
update flowise-embed version on lock file
2024-05-31 13:20:34 +01:00
Henry Heng b5b929e192
Feat/Exa Search Tool (#2524)
* add exa search tool

* add exa svg
2024-05-30 22:27:17 +01:00
Fahreddin Özcan 7706b3484a
Update Upstash Logo for White Background (#2511)
* update: upstash logo for white background

* update: upstash chat memory logo
2024-05-30 22:20:02 +01:00
YISH d50563765e
Fix JSON escaping (better) (#2498)
Fix JSON escaping
2024-05-30 12:58:20 +01:00
Rafael Reis eb738a1552
Fix for Whisper Error: 'File is not defined' when using Speech to Text (#2526)
* tested ok

* update localai stt file

* update toFile method for OpenAI Assistant uploads

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
Co-authored-by: Henry <hzj94@hotmail.com>
2024-05-30 12:54:55 +01:00
Henry Heng 059eae4268
Bugfix/model list unreachable (#2522)
* fallback to fetch local models json file

* enable user to specify local path to load models.json

* Update pnpm-lock.yaml
2024-05-30 11:59:19 +01:00
Henry Heng d734747ec0
Bugfix/Upserting same id with supabase (#2521)
fix upserting same id with supabase
2024-05-30 00:18:51 +01:00
Daniel D'Abate 912c8f3d5b
Feature: Support role-based authentication for AWS (#2470)
* Storage, DynamoDBChatMemory - Make AWS credentials optional to support role-based authentication

* Lint fix
2024-05-29 23:40:01 +01:00
Henry Heng 48ac815f8e
Bugfix/Restore Requests Tool (#2513)
restore requests tool
2024-05-29 23:39:11 +01:00
patrickreinan 2878af69e4
Added LOG_JSON_SPACES to control json beautify (#2483)
* Added: environment var LOG_JSON_SPACES with default value 2, used to control JSON beautification in handler.ts.
Fix: logger.verbose was not working because the default log level was info

* Update handler.ts

---------

Co-authored-by: patrick <patrick.alves@br.experian.com>
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-05-29 18:25:32 +01:00
Henry 5d649b27cf update server ts config to exclude test.ts 2024-05-29 18:22:41 +01:00
jiabaow f5b08864b8
test validateKey (#2485)
add validateKey.test.ts
2024-05-29 18:01:35 +01:00
Henry Heng 97386bc3b2
Bugfix/Files not removed when doc store loader is deleted (#2502)
fix files not removed when doc store loader is deleted
2024-05-28 22:36:12 +01:00
Henry Heng 22f39692e5
Bugfix/Use vm2 for chatflow tool (#2482)
use vm2 for chatflow tool
2024-05-24 18:57:49 +01:00
YISH 82899d9d5d
Fix JSON escaping (#2461) 2024-05-24 02:26:34 +01:00
jiabaow 50c53de296
Together AI llms (#2454)
* add TogetherAI.ts skeleton

* add TogetherAI api links

* fix constructing final obj

* fix INodeData import

* fix category

* fix version & streaming

* add stop

* update constructor

* update constructor with default values

* add togetherAI logo

* add credential

* change logo

* update streaming

* disable streaming

* remove model

* clean up comment

* delete unused icon

* add togetherAI for streaming

* Update index.ts

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-05-24 01:22:41 +01:00
Henry Heng e32b643445
Bugfix/Regex check for auth middleware (#2469)
add regex check for auth middleware
2024-05-23 15:46:11 +01:00
patrickreinan 265de4e97e
Added Opensearch credential (#2458)
* Added OpenSearch credential

* fix issues detected by linter

* rename to camelcase openSearchUrl

* Update OpenSearch.ts

---------

Co-authored-by: patrick <patrick.alves@br.experian.com>
Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-05-23 11:49:15 +01:00
Vinod Kiran ff2381741e
Chore/upgrade llamaindex version (#2440)
* updates to loader to support file upload

* adding a todo

* upgrade llamaindex

* update groq icon

* update azure models

* update llamaindex version

---------

Co-authored-by: Henry <hzj94@hotmail.com>
2024-05-22 13:35:08 +01:00
Henry e83dcb01b8 🥳 flowise-ui@1.8.0 release 2024-05-21 16:50:53 +01:00
Henry 9d10dc4856 🥳 flowise@1.8.0 release 2024-05-21 16:50:29 +01:00
Henry 68625c0589 🥳 flowise-components@1.8.0 release 2024-05-21 16:49:59 +01:00
Henry Heng 8ebc4dcfd5
Feature/lang graph (#2319)
* add langgraph

* datasource: initial commit

* datasource: datasource details and chunks

* datasource: Document Store Node

* more changes

* Document Store - Base functionality

* Document Store Loader Component

* Document Store Loader Component

* before merging the modularity PR

* after merging the modularity PR

* preview mode

* initial draft PR

* fixes

* minor updates and  fixes

* preview with loader and splitter

* preview with credential

* show stored chunks

* preview update...

* edit config

* save, preview and other changes

* save, preview and other changes

* save, process and other changes

* save, process and other changes

* alpha1 - for internal testing

* rerouting urls

* bug fix on new loader create

* pagination support for chunks

* delete document store

* Update pnpm-lock.yaml

* doc store card view

* Update store files to use updated storage functions, Document Store Table View and other changes

* ui changes

* add expanded chunk dialog, improve ui

* change throw Error to InternalError

* Bug Fixes and removal of subFolder, adding of view chunks for store

* lint fixes

* merge changes

* DocumentStoreStatus component

* ui changes for doc store

* add remove metadata key field, add custom document loader

* add chatflows used doc store chips

* add types/interfaces to DocumentStore Services

* document loader list dialog title bar color change

* update interfaces

* Where-used chatflow name, and added chunkNo to retain the order of created chunks.

* use typeorm order chunkNo, ui changes

* update tabler icons react

* cleanup agents

* add pysandbox tool

* add abort functionality, loading next agent

* add empty view svg

* update chatflow tool with chatId

* rename to agentflows

* update worker for prompt input values

* update dashboard to agentflows, agentcanvas

* fix marketplace use template

* add agentflow templates

* resolve merge conflict

* update baseURL

---------

Co-authored-by: vinodkiran <vinodkiran@usa.net>
Co-authored-by: Vinod Paidimarry <vinodkiran@outlook.in>
2024-05-21 16:36:42 +01:00
Vinod Kiran 95f1090bed
BugFix #2386: Double quotes are not escaped, flow crashes (#2448)
Fix for #2386
2024-05-21 12:10:30 +01:00
YISH 5733a8089e
Avoid .env packaging into docker images. (#2451) 2024-05-20 17:09:27 +01:00
Henry Heng 8caca472ba
Feature/Add prepend messages to memory (#2410)
add prepend messages to memory
2024-05-20 17:08:34 +01:00
Henry Heng 816436f8fa
Chore/upgrade lc version (#2422)
* update langchain version, openai, mistral, vertex, anthropic, introduced toolagent

* upgrade @google/generative-ai 0.7.0, replicate and faiss-node

* update cohere ver

* adding chatCohere to streaming

* update gemini to have image upload

* update google genai, remove aiplugin

* upgrade langchain version

* fix lint
2024-05-17 11:41:29 +01:00
Quinn 0521e6b3f9
Updated bedrock model list (#2426)
Added missing relevant chat and embed models. Added descriptions from aws docs.
2024-05-17 11:41:09 +01:00
Henry Heng 0365afbeeb
Feature/Add rpc filter to supabase (#2425)
add rpc filter to supabase
2024-05-17 11:40:17 +01:00
Henry Heng 0de7fb8509
feature/Add ChatOllama Function (#2403)
* add chat ollama function

* update description

* update tool system prompt description
2024-05-15 19:41:56 +01:00
Henry Heng b5e502f3b6
Feature/Multer to s3 (#2408)
* add ability to store files from multer to s3

* add check to bypass doc loader
2024-05-15 19:41:37 +01:00
Henry Heng c022972cf8
Bugfix/Local Models Json (#2416)
fallback to fetch local models json file
2024-05-15 19:39:05 +01:00
Octavian FlowiseAI b65487564a
Update CONTRIBUTING.md (#2414) 2024-05-15 13:33:12 +02:00
Octavian FlowiseAI 49c07552ce
Ignore keys and other secrets (#2409) 2024-05-15 01:14:02 +02:00
Henry Heng b4829275aa
Feat/add gemini flash (#2407)
* add gemini flash

* add gemini flash to vertex

* add gemini-1.5-flash-preview to vertex
2024-05-14 22:10:39 +01:00
Asharib Ali b3069932e1
Add new openai model (GPT-4o ) in the assistant and models.json (#2402)
* add openai new model (gpt-4o) in the assistant

* add openai new model (gpt-4o) in the models.json
2024-05-13 21:26:07 +01:00
Henry Heng b50103021c
Feature/Ability to omit all metadata keys using asterisk (#2401)
add ability to omit all metadata keys using asterisk
2024-05-13 16:30:57 +01:00
YISH 4fbc3f6cfe
Fix streaming not work (#2396) 2024-05-13 12:29:27 +01:00
clates d3f03e380e
[FEATURE] Added support for LocalAI Speech To Text configuration (#2376)
* Added support for LocalAI to the Speech To Text configuration. Added a few debug statements around speech to text conversion. Finally, refactored the speechToTextProviders a bit to try and remove some magic strings that have undocumented rules around naming.

* LocalAI STT - PR Feedback - Updated LocalAI Image, changed casing, and updated the default model to whisper-1.
2024-05-13 12:21:27 +01:00
Saurabh Gupta 823cefb5c5
fix initDatabase function by proper use of await in try catch (#2360)
* fix initDatabase function by proper use of await in try catch

* lint-fix

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
2024-05-13 12:15:22 +01:00
Octavian FlowiseAI ee9d3a33fa
Update autoSyncMergedPullRequest.yml 2024-05-10 22:01:05 +02:00
Octavian FlowiseAI cb0eb67df0
Use tabler icons react instead of tabler icons package (#2389)
* Use tabler icons react instead of tabler icons package

* Update package.json

---------

Co-authored-by: Octavian Cioaca <devtools@domselardi.com>
2024-05-10 21:17:12 +02:00
Octavian FlowiseAI 32ad3b1366
Update autoSyncMergedPullRequest.yml 2024-05-10 20:50:36 +02:00
Hariharan1828 b952350a7b
fix CORS issue in .env.example (#2370)
* fix CORS issue in .env.example

* Fixed the IFRAME_ORIGINS
2024-05-10 18:05:03 +01:00
Henry Heng 38ce851200
remove cmake from dockerfile 2024-05-09 16:52:19 +01:00
Henry 1ee6f1f88a 🥳 flowise@1.7.2 minor bugfix release 2024-05-09 14:30:23 +01:00
Henry dce84106ef 🥳 flowise-ui@1.7.2 minor bugfix release 2024-05-09 14:29:12 +01:00
Henry e851af90b1 🥳 flowise-components@1.7.2 minor bugfix release 2024-05-09 14:28:40 +01:00
Henry Heng 96d4ab66f2
Chore/Temporarily disable couchbase (#2373)
temporarily disable couchbase
2024-05-09 14:14:14 +01:00
Henry Heng 2048976545
Update vertex gemini 1.5 model 2024-05-08 18:04:50 +01:00
Henry Heng a9f9c8874c
Bugfix/Save chunk's metadata (#2366)
save metadata chunk
2024-05-08 17:24:03 +01:00
Henry Heng 26e7a1ac35
Bugfix/Credential mandatory field when preview chunks (#2356)
check if credential is mandatory field when preview chunks
2024-05-08 02:13:08 +01:00
automaton82 43b22476e3
Fixes 2343 CSV error with no text splitter (#2344) 2024-05-07 01:43:42 +01:00
Henry Heng d4a5474f48
Bugfix/Escape column name on postgres migration indexing (#2342)
escape column name on postgres migration indexing
2024-05-07 01:10:46 +01:00
Henry ef532866fd add docker image CI workflow 2024-05-07 00:13:27 +01:00
Henry a84eabbef2 update pnpm-lock.yaml file 2024-05-06 18:14:59 +01:00
Henry d34cef2dc7 🥳 flowise@1.7.1 release 2024-05-06 18:05:36 +01:00
Henry 40718bd77a 🥳 flowise-ui@1.7.1 release 2024-05-06 18:05:20 +01:00
Henry a6bcaba592 🥳 flowise-components@1.7.1 release 2024-05-06 18:04:45 +01:00
441 changed files with 55395 additions and 45571 deletions

View File

@ -5,3 +5,6 @@ build
**/node_modules
**/build
**/dist
packages/server/.env
packages/ui/.env

View File

@ -1,2 +0,0 @@
node_modules
dist

View File

@ -11,14 +11,14 @@ jobs:
permissions:
contents: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Show PR info
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo The PR #${{ github.event.pull_request.number }} was merged on main branch!
- name: Repository Dispatch
uses: peter-evans/repository-dispatch@v2
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.AUTOSYNC_TOKEN }}
repository: ${{ secrets.AUTOSYNC_CH_URL }}
@ -28,6 +28,6 @@ jobs:
"ref": "${{ github.ref }}",
"prNumber": "${{ github.event.pull_request.number }}",
"prTitle": "${{ github.event.pull_request.title }}",
"prDescription": "${{ toJSON(github.event.pull_request.description) }}",
"prDescription": "",
"sha": "${{ github.sha }}"
}

.github/workflows/docker-image.yml (new file)
View File

@ -0,0 +1,43 @@
name: Docker Image CI
on:
workflow_dispatch:
inputs:
node_version:
description: 'Node.js version to build this image with.'
type: choice
required: true
default: '20'
options:
- '20'
tag_version:
description: 'Tag version of the image to be pushed.'
type: string
required: true
default: 'latest'
jobs:
docker:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4.1.1
- name: Set up QEMU
uses: docker/setup-qemu-action@v3.0.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.0.0
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v5.3.0
with:
context: .
file: ./docker/Dockerfile
build-args: |
NODE_VERSION=${{github.event.inputs.node_version}}
platforms: linux/amd64,linux/arm64
push: true
tags: flowiseai/flowise:${{github.event.inputs.tag_version}}
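
Because the workflow above is `workflow_dispatch`-only, it never fires automatically; someone has to trigger it. A minimal sketch of dispatching it with the GitHub CLI, assuming `gh` is authenticated against this repository (the input values are illustrative):

```bash
# Manually dispatch the Docker Image CI workflow with its two inputs
# (node_version and tag_version values here are illustrative)
gh workflow run docker-image.yml -f node_version=20 -f tag_version=latest
```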

View File

@ -27,6 +27,7 @@ jobs:
with:
node-version: ${{ matrix.node-version }}
check-latest: false
cache: 'pnpm'
- run: npm i -g pnpm
- run: pnpm install
- run: ./node_modules/.bin/cypress install

.gitignore
View File

@ -11,6 +11,9 @@
**/logs
**/*.log
## pnpm
.pnpm-store/
## build
**/dist
**/build
@ -44,3 +47,60 @@
## compressed
**/*.tgz
## vscode
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
# Local History for Visual Studio Code
.history/
## other keys
*.key
*.keys
*.priv
*.rsa
*.key.json
## ssh keys
*.ssh
*.ssh-key
.key-mrc
## Certificate Authority
*.ca
## Certificate
*.crt
## Certificate Sign Request
*.csr
## Certificate
*.der
## Key database file
*.kdb
## OSCP request data
*.org
## PKCS #12
*.p12
## PEM-encoded certificate data
*.pem
## Random number seed
*.rnd
## SSLeay data
*.ssleay
## S/MIME message
*.smime
*.vsix

View File

@ -1,3 +0,0 @@
**/node_modules
**/dist
**/build

View File

@ -1,9 +0,0 @@
module.exports = {
printWidth: 140,
singleQuote: true,
jsxSingleQuote: true,
trailingComma: 'none',
tabWidth: 4,
semi: false,
endOfLine: 'auto'
}

View File

@ -1,6 +1,6 @@
# Contributor Covenant Code of Conduct
English | [中文](<./CODE_OF_CONDUCT-ZH.md>)
English | [中文](./i18n/CODE_OF_CONDUCT-ZH.md)
## Our Pledge

View File

@ -2,7 +2,7 @@
# Contributing to Flowise
English | [中文](./CONTRIBUTING-ZH.md)
English | [中文](./i18n/CONTRIBUTING-ZH.md)
We appreciate any form of contributions.
@ -44,7 +44,7 @@ Flowise has 3 different modules in a single mono repository.
#### Prerequisite
- Install [PNPM](https://pnpm.io/installation)
- Install [PNPM](https://pnpm.io/installation). The project is configured to use pnpm v9.
```bash
npm i -g pnpm
```
@ -120,39 +120,41 @@ Flowise has 3 different modules in a single mono repository.
Flowise supports different environment variables to configure your instance. You can specify the following variables in the `.env` file inside the `packages/server` folder. Read [more](https://docs.flowiseai.com/environment-variables)
| Variable | Description | Type | Default |
| ---------------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_PATH | Location where api keys are saved | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL | Database connection overssl (When DATABASE_TYPE is postgre) | Boolean | false |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | Boolean |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. default is `local` | Enum String: `s3`, `local` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
| Variable | Description | Type | Default |
| ---------------------------- | ----------------------------------------------------------------------------------------------- | ------------------------------------------------ | ----------------------------------- |
| PORT | The HTTP port Flowise runs on | Number | 3000 |
| CORS_ORIGINS | The allowed origins for all cross-origin HTTP calls | String | |
| IFRAME_ORIGINS | The allowed origins for iframe src embedding | String | |
| FLOWISE_USERNAME | Username to login | String | |
| FLOWISE_PASSWORD | Password to login | String | |
| FLOWISE_FILE_SIZE_LIMIT | Upload File Size Limit | String | 50mb |
| DISABLE_CHATFLOW_REUSE | Forces the creation of a new ChatFlow for each call instead of reusing existing ones from cache | Boolean | |
| DEBUG | Print logs from components | Boolean | |
| LOG_PATH | Location where log files are stored | String | `your-path/Flowise/logs` |
| LOG_LEVEL | Different levels of logs | Enum String: `error`, `info`, `verbose`, `debug` | `info` |
| LOG_JSON_SPACES | Spaces to beautify JSON logs | | 2 |
| APIKEY_PATH | Location where api keys are saved | String | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | NodeJS built-in modules to be used for Tool Function | String | |
| TOOL_FUNCTION_EXTERNAL_DEP | External modules to be used for Tool Function | String | |
| DATABASE_TYPE | Type of database to store the flowise data | Enum String: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | Location where database is saved (When DATABASE_TYPE is sqlite) | String | `your-home-dir/.flowise` |
| DATABASE_HOST | Host URL or IP address (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PORT | Database port (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_USER | Database username (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_PASSWORD | Database password (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_NAME | Database name (When DATABASE_TYPE is not sqlite) | String | |
| DATABASE_SSL_KEY_BASE64 | Database SSL client cert in base64 (takes priority over DATABASE_SSL) | Boolean | false |
| DATABASE_SSL                 | Database connection over SSL (When DATABASE_TYPE is postgres)                                    | Boolean                                          | false                               |
| SECRETKEY_PATH | Location where encryption key (used to encrypt/decrypt credentials) is saved | String | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | Encryption key to be used instead of the key stored in SECRETKEY_PATH | String |
| DISABLE_FLOWISE_TELEMETRY | Turn off telemetry | Boolean |
| MODEL_LIST_CONFIG_JSON | File path to load list of models from your local config file | String | `/your_model_list_config_file_path` |
| STORAGE_TYPE | Type of storage for uploaded files. default is `local` | Enum String: `s3`, `local` | `local` |
| BLOB_STORAGE_PATH | Local folder path where uploaded files are stored when `STORAGE_TYPE` is `local` | String | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | Bucket name to hold the uploaded files when `STORAGE_TYPE` is `s3` | String | |
| S3_STORAGE_ACCESS_KEY_ID | AWS Access Key | String | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS Secret Key | String | |
| S3_STORAGE_REGION | Region for S3 bucket | String | |
You can also specify the env variables when using `npx`. For example:
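
The file's own example falls outside this hunk; a minimal sketch of the pattern, with illustrative flag values:

```bash
# Pass env variables as flags to the npx invocation (values are illustrative)
npx flowise start --PORT=3000 --DEBUG=true
```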

View File

@ -10,7 +10,7 @@
[![GitHub star chart](https://img.shields.io/github/stars/FlowiseAI/Flowise?style=social)](https://star-history.com/#FlowiseAI/Flowise)
[![GitHub fork](https://img.shields.io/github/forks/FlowiseAI/Flowise?style=social)](https://github.com/FlowiseAI/Flowise/fork)
English | [中文](./README-ZH.md) | [日本語](./README-JA.md) | [한국어](./README-KR.md)
English | [中文](./i18n/README-ZH.md) | [日本語](./i18n/README-JA.md) | [한국어](./i18n/README-KR.md)
<h3>Drag & drop UI to build your customized LLM flow</h3>
<a href="https://github.com/FlowiseAI/Flowise">
@ -44,9 +44,9 @@ Download and Install [NodeJS](https://nodejs.org/en/download) >= 18.15.0
1. Go to `docker` folder at the root of the project
2. Copy `.env.example` file, paste it into the same location, and rename to `.env`
3. `docker-compose up -d`
3. `docker compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker-compose stop`
5. You can bring the containers down by `docker compose stop`
### Docker Image

View File

@ -1,13 +0,0 @@
module.exports = {
presets: [
'@babel/preset-typescript',
[
'@babel/preset-env',
{
targets: {
node: 'current'
}
}
]
]
}

View File

@ -6,8 +6,8 @@ LOG_PATH=/root/.flowise/logs
BLOB_STORAGE_PATH=/root/.flowise/storage
# NUMBER_OF_PROXIES= 1
# CORS_ORIGINS="*"
# IFRAME_ORIGINS="*"
# CORS_ORIGINS=*
# IFRAME_ORIGINS=*
# DATABASE_TYPE=postgres
# DATABASE_PORT=5432
@ -23,8 +23,10 @@ BLOB_STORAGE_PATH=/root/.flowise/storage
# FLOWISE_SECRETKEY_OVERWRITE=myencryptionkey
# FLOWISE_FILE_SIZE_LIMIT=50mb
# DISABLE_CHATFLOW_REUSE=true
# DEBUG=true
# LOG_LEVEL=debug (error | warn | info | verbose | debug)
# LOG_LEVEL=info (error | warn | info | verbose | debug)
# TOOL_FUNCTION_BUILTIN_DEP=crypto,fs
# TOOL_FUNCTION_EXTERNAL_DEP=moment,lodash

View File

@ -1,21 +1,25 @@
FROM node:20-alpine
# Stage 1: Build stage
FROM node:20-alpine as build
USER root
RUN apk add --no-cache git
RUN apk add --no-cache python3 py3-pip make g++
# needed for pdfjs-dist
RUN apk add --no-cache build-base cairo-dev pango-dev
# Install Chromium
RUN apk add --no-cache chromium
# Skip downloading Chrome for Puppeteer (saves build time)
ENV PUPPETEER_SKIP_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# You can install a specific version like: flowise@1.0.0
# Install latest Flowise globally (specific version can be set: flowise@1.0.0)
RUN npm install -g flowise
WORKDIR /data
# Stage 2: Runtime stage
FROM node:20-alpine
CMD "flowise"
# Install runtime dependencies
RUN apk add --no-cache chromium git python3 py3-pip make g++ build-base cairo-dev pango-dev
# Set the environment variable for Puppeteer to find Chromium
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# Copy Flowise from the build stage
COPY --from=build /usr/local/lib/node_modules /usr/local/lib/node_modules
COPY --from=build /usr/local/bin /usr/local/bin
ENTRYPOINT ["flowise", "start"]
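
A quick way to verify the multistage image locally, using the `./docker/Dockerfile` path referenced by the CI workflow above; the image tag and host port here are illustrative:

```bash
# Build from the repo root and run, mapping Flowise's default port 3000
docker build -f docker/Dockerfile -t flowise:multistage .
docker run -d -p 3000:3000 flowise:multistage
```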

View File

@ -5,9 +5,9 @@ Starts Flowise from [DockerHub Image](https://hub.docker.com/r/flowiseai/flowise
## Usage
1. Create `.env` file and specify the `PORT` (refer to `.env.example`)
2. `docker-compose up -d`
2. `docker compose up -d`
3. Open [http://localhost:3000](http://localhost:3000)
4. You can bring the containers down by `docker-compose stop`
4. You can bring the containers down by `docker compose stop`
## 🔒 Authentication
@ -19,9 +19,9 @@ Starts Flowise from [DockerHub Image](https://hub.docker.com/r/flowiseai/flowise
- FLOWISE_USERNAME=${FLOWISE_USERNAME}
- FLOWISE_PASSWORD=${FLOWISE_PASSWORD}
```
3. `docker-compose up -d`
3. `docker compose up -d`
4. Open [http://localhost:3000](http://localhost:3000)
5. You can bring the containers down by `docker-compose stop`
5. You can bring the containers down by `docker compose stop`
## 🌱 Env Variables

View File

@ -33,4 +33,4 @@ services:
- '${PORT}:${PORT}'
volumes:
- ~/.flowise:/root/.flowise
command: /bin/sh -c "sleep 3; flowise start"
entrypoint: /bin/sh -c "sleep 3; flowise start"
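
For reference, compose interpolates `${PORT}` from `docker/.env`, so that file must exist before the stack comes up; a minimal sketch, with an illustrative port value:

```bash
# Create .env from the example, set PORT, then start the stack
cd docker
cp .env.example .env   # edit it so that PORT=3000, for example
docker compose up -d
```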

View File

@ -2,7 +2,7 @@
# 贡献者公约行为准则
[English](<./CODE_OF_CONDUCT.md>) | 中文
[English](../CODE_OF_CONDUCT.md) | 中文
## 我们的承诺
@ -44,6 +44,6 @@
## 归属
该行为准则的内容来自于[贡献者公约](http://contributor-covenant.org/)1.4版,可在[http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4)上获取。
该行为准则的内容来自于[贡献者公约](http://contributor-covenant.org/)1.4 版,可在[http://contributor-covenant.org/version/1/4](http://contributor-covenant.org/version/1/4)上获取。
[主页]: http://contributor-covenant.org

View File

@ -2,7 +2,7 @@
# 贡献给 Flowise
[English](./CONTRIBUTING.md) | 中文
[English](../CONTRIBUTING.md) | 中文
我们欢迎任何形式的贡献。
@ -118,35 +118,36 @@ Flowise 在一个单一的单体存储库中有 3 个不同的模块。
Flowise 支持不同的环境变量来配置您的实例。您可以在 `packages/server` 文件夹中的 `.env` 文件中指定以下变量。阅读[更多信息](https://docs.flowiseai.com/environment-variables)
| 变量名 | 描述 | 类型 | 默认值 |
| ---------------------------- | ------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- |
| PORT | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
| FLOWISE_USERNAME | 登录用户名 | 字符串 | |
| FLOWISE_PASSWORD | 登录密码 | 字符串 | |
| FLOWISE_FILE_SIZE_LIMIT | 上传文件大小限制 | 字符串 | 50mb |
| DEBUG | 打印组件的日志 | 布尔值 | |
| LOG_PATH | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
| LOG_LEVEL | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_PATH | 存储 API 密钥的位置 | 字符串 | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
| TOOL_FUNCTION_EXTERNAL_DEP | 用于工具函数的外部模块 | 字符串 | |
| DATABASE_TYPE | 存储 flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | 数据库保存的位置(当 DATABASE_TYPE 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
| DATABASE_HOST | 主机 URL 或 IP 地址(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PORT | 数据库端口(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_USERNAME | 数据库用户名(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PASSWORD | 数据库密码(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_NAME | 数据库名称(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| SECRETKEY_PATH | 保存加密密钥(用于加密/解密凭据)的位置 | 字符串 | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | 加密密钥用于替代存储在 SECRETKEY_PATH 中的密钥 | 字符串 |
| DISABLE_FLOWISE_TELEMETRY | 关闭遥测 | 字符串 |
| MODEL_LIST_CONFIG_JSON | 加载模型的位置 | 字符 | `/your_model_list_config_file_path` |
| STORAGE_TYPE | 上传文件的存储类型 | 枚举字符串: `local`, `s3` | `local` |
| BLOB_STORAGE_PATH | 上传文件存储的本地文件夹路径, 当`STORAGE_TYPE`是`local` | 字符串 | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | S3 存储文件夹路径, 当`STORAGE_TYPE`是`s3` | 字符串 | |
| S3_STORAGE_ACCESS_KEY_ID | AWS 访问密钥 (Access Key) | 字符串 | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS 密钥 (Secret Key) | 字符串 | |
| S3_STORAGE_REGION | S3 存储地区 | 字符串 | |
| 变量名 | 描述 | 类型 | 默认值 |
| ---------------------------- | -------------------------------------------------------------------- | ----------------------------------------------- | ----------------------------------- |
| PORT | Flowise 运行的 HTTP 端口 | 数字 | 3000 |
| FLOWISE_USERNAME | 登录用户名 | 字符串 | |
| FLOWISE_PASSWORD | 登录密码 | 字符串 | |
| FLOWISE_FILE_SIZE_LIMIT | 上传文件大小限制 | 字符串 | 50mb |
| DISABLE_CHATFLOW_REUSE       | 强制为每次调用创建一个新的 ChatFlow,而不是重用缓存中的现有 ChatFlow  | 布尔值                                          | |
| DEBUG | 打印组件的日志 | 布尔值 | |
| LOG_PATH | 存储日志文件的位置 | 字符串 | `your-path/Flowise/logs` |
| LOG_LEVEL | 日志的不同级别 | 枚举字符串: `error`, `info`, `verbose`, `debug` | `info` |
| APIKEY_PATH | 存储 API 密钥的位置 | 字符串 | `your-path/Flowise/packages/server` |
| TOOL_FUNCTION_BUILTIN_DEP | 用于工具函数的 NodeJS 内置模块 | 字符串 | |
| TOOL_FUNCTION_EXTERNAL_DEP | 用于工具函数的外部模块 | 字符串 | |
| DATABASE_TYPE | 存储 flowise 数据的数据库类型 | 枚举字符串: `sqlite`, `mysql`, `postgres` | `sqlite` |
| DATABASE_PATH | 数据库保存的位置(当 DATABASE_TYPE 是 sqlite 时) | 字符串 | `your-home-dir/.flowise` |
| DATABASE_HOST | 主机 URL 或 IP 地址(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PORT | 数据库端口(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_USERNAME | 数据库用户名(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_PASSWORD | 数据库密码(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| DATABASE_NAME | 数据库名称(当 DATABASE_TYPE 不是 sqlite 时) | 字符串 | |
| SECRETKEY_PATH | 保存加密密钥(用于加密/解密凭据)的位置 | 字符串 | `your-path/Flowise/packages/server` |
| FLOWISE_SECRETKEY_OVERWRITE | 加密密钥用于替代存储在 SECRETKEY_PATH 中的密钥 | 字符串 |
| DISABLE_FLOWISE_TELEMETRY | 关闭遥测 | 字符串 |
| MODEL_LIST_CONFIG_JSON | 加载模型的位置 | 字符 | `/your_model_list_config_file_path` |
| STORAGE_TYPE | 上传文件的存储类型 | 枚举字符串: `local`, `s3` | `local` |
| BLOB_STORAGE_PATH | 上传文件存储的本地文件夹路径, 当`STORAGE_TYPE`是`local` | 字符串 | `your-home-dir/.flowise/storage` |
| S3_STORAGE_BUCKET_NAME | S3 存储文件夹路径, 当`STORAGE_TYPE`是`s3` | 字符串 | |
| S3_STORAGE_ACCESS_KEY_ID | AWS 访问密钥 (Access Key) | 字符串 | |
| S3_STORAGE_SECRET_ACCESS_KEY | AWS 密钥 (Secret Key) | 字符串 | |
| S3_STORAGE_REGION | S3 存储地区 | 字符串 | |
您也可以在使用 `npx` 时指定环境变量。例如:

View File

@ -10,7 +10,7 @@
[![GitHub star chart](https://img.shields.io/github/stars/FlowiseAI/Flowise?style=social)](https://star-history.com/#FlowiseAI/Flowise)
[![GitHub fork](https://img.shields.io/github/forks/FlowiseAI/Flowise?style=social)](https://github.com/FlowiseAI/Flowise/fork)
[English](./README.md) | [中文](./README-ZH.md) | 日本語 | [한국어](./README-KR.md)
[English](../README.md) | [中文](./README-ZH.md) | 日本語 | [한국어](./README-KR.md)
<h3>ドラッグ&ドロップでカスタマイズした LLM フローを構築できる UI</h3>
<a href="https://github.com/FlowiseAI/Flowise">
@ -44,9 +44,9 @@
1. プロジェクトのルートにある `docker` フォルダに移動する
2. `.env.example` ファイルをコピーして同じ場所に貼り付け、名前を `.env` に変更する
3. `docker-compose up -d`
3. `docker compose up -d`
4. [http://localhost:3000](http://localhost:3000) を開く
5. コンテナを停止するには、`docker-compose stop` を使用します
5. コンテナを停止するには、`docker compose stop` を使用します
### Docker Image

View File

@ -10,7 +10,7 @@
[![GitHub star chart](https://img.shields.io/github/stars/FlowiseAI/Flowise?style=social)](https://star-history.com/#FlowiseAI/Flowise)
[![GitHub fork](https://img.shields.io/github/forks/FlowiseAI/Flowise?style=social)](https://github.com/FlowiseAI/Flowise/fork)
English | [中文](./README-ZH.md) | [日本語](./README-JA.md) | 한국어
[English](../README.md) | [中文](./README-ZH.md) | [日本語](./README-JA.md) | 한국어
<h3>드래그 앤 드롭 UI로 맞춤형 LLM 플로우 구축하기</h3>
<a href="https://github.com/FlowiseAI/Flowise">
@ -44,9 +44,9 @@ English | [中文](./README-ZH.md) | [日本語](./README-JA.md) | 한국어
1. 프로젝트의 최상위(root) 디렉토리에 있는 `docker` 폴더로 이동하세요.
2. `.env.example` 파일을 복사한 후, 같은 경로에 붙여넣기 한 다음, `.env`로 이름을 변경합니다.
3. `docker-compose up -d` 실행
3. `docker compose up -d` 실행
4. [http://localhost:3000](http://localhost:3000) URL 열기
5. `docker-compose stop` 명령어를 통해 컨테이너를 종료시킬 수 있습니다.
5. `docker compose stop` 명령어를 통해 컨테이너를 종료시킬 수 있습니다.
### 도커 이미지 활용

View File

@ -10,7 +10,7 @@
[![GitHub星图](https://img.shields.io/github/stars/FlowiseAI/Flowise?style=social)](https://star-history.com/#FlowiseAI/Flowise)
[![GitHub分支](https://img.shields.io/github/forks/FlowiseAI/Flowise?style=social)](https://github.com/FlowiseAI/Flowise/fork)
[English](./README.md) | 中文 | [日本語](./README-JA.md) | [한국어](./README-KR.md)
[English](../README.md) | 中文 | [日本語](./README-JA.md) | [한국어](./README-KR.md)
<h3>拖放界面构建定制化的LLM流程</h3>
<a href="https://github.com/FlowiseAI/Flowise">
@ -44,9 +44,9 @@
1. 进入项目根目录下的 `docker` 文件夹
2. 创建 `.env` 文件并指定 `PORT`(参考 `.env.example`)
3. 运行 `docker-compose up -d`
3. 运行 `docker compose up -d`
4. 打开 [http://localhost:3000](http://localhost:3000)
5. 可以通过 `docker-compose stop` 停止容器
5. 可以通过 `docker compose stop` 停止容器
### Docker 镜像

View File

@ -1,6 +1,6 @@
{
"name": "flowise",
"version": "1.6.6",
"version": "1.8.4",
"private": true,
"homepage": "https://flowiseai.com",
"workspaces": [
@ -65,6 +65,34 @@
"resolutions": {
"@qdrant/openapi-typescript-fetch": "1.2.1",
"@google/generative-ai": "^0.7.0",
"openai": "4.38.3"
"openai": "4.51.0"
},
"eslintIgnore": [
"**/dist",
"**/node_modules",
"**/build",
"**/package-lock.json"
],
"prettier": {
"printWidth": 140,
"singleQuote": true,
"jsxSingleQuote": true,
"trailingComma": "none",
"tabWidth": 4,
"semi": false,
"endOfLine": "auto"
},
"babel": {
"presets": [
"@babel/preset-typescript",
[
"@babel/preset-env",
{
"targets": {
"node": "current"
}
}
]
]
}
}
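
With the `prettier` and `babel` blocks inlined above, the standalone Prettier and Babel config files deleted earlier in this diff become redundant, since both tools can read their configuration from `package.json`. A quick way to confirm Prettier still resolves its settings (the glob is illustrative):

```bash
# Prettier picks up the "prettier" key from package.json automatically
npx prettier --check "packages/**/*.ts"
```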

View File

@ -0,0 +1,28 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class BaiduApi implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'Baidu API'
this.name = 'baiduApi'
this.version = 1.0
this.inputs = [
{
label: 'Baidu Api Key',
name: 'baiduApiKey',
type: 'password'
},
{
label: 'Baidu Secret Key',
name: 'baiduSecretKey',
type: 'password'
}
]
}
}
module.exports = { credClass: BaiduApi }

View File

@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class ChatflowApi implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'Chatflow API'
this.name = 'chatflowApi'
this.version = 1.0
this.inputs = [
{
label: 'Chatflow Api Key',
name: 'chatflowApiKey',
type: 'password'
}
]
}
}
module.exports = { credClass: ChatflowApi }

View File

@ -1,3 +1,7 @@
/*
* Temporarily disabled due to incompatibility with the docker node-alpine image:
* https://github.com/FlowiseAI/Flowise/pull/2303
import { INodeParams, INodeCredential } from '../src/Interface'
class CouchbaseApi implements INodeCredential {
@ -32,3 +36,4 @@ class CouchbaseApi implements INodeCredential {
}
module.exports = { credClass: CouchbaseApi }
*/

View File

@ -0,0 +1,26 @@
/*
* TODO: Implement codeInterpreter column to chat_message table
import { INodeParams, INodeCredential } from '../src/Interface'
class E2BApi implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'E2B API'
this.name = 'E2BApi'
this.version = 1.0
this.inputs = [
{
label: 'E2B Api Key',
name: 'e2bApiKey',
type: 'password'
}
]
}
}
module.exports = { credClass: E2BApi }
*/

View File

@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class ExaSearchApi implements INodeCredential {
label: string
name: string
version: number
description: string
inputs: INodeParams[]
constructor() {
this.label = 'Exa Search API'
this.name = 'exaSearchApi'
this.version = 1.0
this.description =
'Refer to <a target="_blank" href="https://docs.exa.ai/reference/getting-started#getting-access">official guide</a> on how to get an API Key from Exa'
this.inputs = [
{
label: 'ExaSearch Api Key',
name: 'exaSearchApiKey',
type: 'password'
}
]
}
}
module.exports = { credClass: ExaSearchApi }

View File

@ -0,0 +1,26 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class FireCrawlApiCredential implements INodeCredential {
label: string
name: string
version: number
description: string
inputs: INodeParams[]
constructor() {
this.label = 'FireCrawl API'
this.name = 'fireCrawlApi'
this.version = 1.0
this.description =
'You can find the FireCrawl API token on your <a target="_blank" href="https://www.firecrawl.dev/">FireCrawl account</a> page.'
this.inputs = [
{
label: 'FireCrawl API',
name: 'firecrawlApiToken',
type: 'password'
}
]
}
}
module.exports = { credClass: FireCrawlApiCredential }

View File

@ -0,0 +1,23 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class FireworksApi implements INodeCredential {
label: string
name: string
version: number
inputs: INodeParams[]
constructor() {
this.label = 'Fireworks API'
this.name = 'fireworksApi'
this.version = 1.0
this.inputs = [
{
label: 'Fireworks Api Key',
name: 'fireworksApiKey',
type: 'password'
}
]
}
}
module.exports = { credClass: FireworksApi }

View File

@ -0,0 +1,33 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class LangWatchApi implements INodeCredential {
label: string
name: string
version: number
description: string
inputs: INodeParams[]
constructor() {
this.label = 'LangWatch API'
this.name = 'langwatchApi'
this.version = 1.0
this.description =
'Refer to <a target="_blank" href="https://docs.langwatch.ai/integration/python/guide">integration guide</a> on how to get API keys on LangWatch'
this.inputs = [
{
label: 'API Key',
name: 'langWatchApiKey',
type: 'password',
placeholder: '<LANGWATCH_API_KEY>'
},
{
label: 'Endpoint',
name: 'langWatchEndpoint',
type: 'string',
default: 'https://app.langwatch.ai'
}
]
}
}
module.exports = { credClass: LangWatchApi }

View File

@ -0,0 +1,38 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class OpenSearchUrl implements INodeCredential {
label: string
name: string
version: number
description: string
inputs: INodeParams[]
constructor() {
this.label = 'OpenSearch'
this.name = 'openSearchUrl'
this.version = 2.0
this.inputs = [
{
label: 'OpenSearch Url',
name: 'openSearchUrl',
type: 'string'
},
{
label: 'User',
name: 'user',
type: 'string',
placeholder: '<OPENSEARCH_USERNAME>',
optional: true
},
{
label: 'Password',
name: 'password',
type: 'password',
placeholder: '<OPENSEARCH_PASSWORD>',
optional: true
}
]
}
}
module.exports = { credClass: OpenSearchUrl }

View File

@ -0,0 +1,25 @@
import { INodeParams, INodeCredential } from '../src/Interface'
class SpiderApiCredential implements INodeCredential {
label: string
name: string
version: number
description: string
inputs: INodeParams[]
constructor() {
this.label = 'Spider API'
this.name = 'spiderApi'
this.version = 1.0
this.description = 'Get your API key from the <a target="_blank" href="https://spider.cloud">Spider</a> dashboard.'
this.inputs = [
{
label: 'Spider API Key',
name: 'spiderApiKey',
type: 'password'
}
]
}
}
module.exports = { credClass: SpiderApiCredential }

View File

@ -5,47 +5,73 @@
"models": [
{
"label": "anthropic.claude-3-haiku",
"name": "anthropic.claude-3-haiku-20240307-v1:0"
"name": "anthropic.claude-3-haiku-20240307-v1:0",
"description": "Image to text, conversation, chat optimized"
},
{
"label": "anthropic.claude-3.5-sonnet",
"name": "anthropic.claude-3-5-sonnet-20240620-v1:0",
"description": "3.5 version of Claude Sonnet model"
},
{
"label": "anthropic.claude-3-sonnet",
"name": "anthropic.claude-3-sonnet-20240229-v1:0"
"name": "anthropic.claude-3-sonnet-20240229-v1:0",
"description": "Image to text and code, multilingual conversation, complex reasoning and analysis"
},
{
"label": "anthropic.claude-3-opus",
"name": "anthropic.claude-3-opus-20240229-v1:0",
"description": "Image to text and code, multilingual conversation, complex reasoning and analysis"
},
{
"label": "anthropic.claude-instant-v1",
"name": "anthropic.claude-instant-v1"
"name": "anthropic.claude-instant-v1",
"description": "Text generation, conversation"
},
{
"label": "anthropic.claude-v2:1",
"name": "anthropic.claude-v2:1"
"name": "anthropic.claude-v2:1",
"description": "Text generation, conversation, complex reasoning and analysis"
},
{
"label": "anthropic.claude-v2",
"name": "anthropic.claude-v2"
"name": "anthropic.claude-v2",
"description": "Text generation, conversation, complex reasoning and analysis"
},
{
"label": "meta.llama2-13b-chat-v1",
"name": "meta.llama2-13b-chat-v1"
"name": "meta.llama2-13b-chat-v1",
"description": "Text generation, conversation"
},
{
"label": "meta.llama2-70b-chat-v1",
"name": "meta.llama2-70b-chat-v1"
"name": "meta.llama2-70b-chat-v1",
"description": "Text generation, conversation"
},
{
"label": "meta.llama3-8b-instruct-v1:0",
"name": "meta.llama3-8b-instruct-v1:0"
"name": "meta.llama3-8b-instruct-v1:0",
"description": "Text summarization, text classification, sentiment analysis"
},
{
"label": "meta.llama3-70b-instruct-v1:0",
"name": "meta.llama3-70b-instruct-v1:0"
"name": "meta.llama3-70b-instruct-v1:0",
"description": "Language modeling, dialog systems, code generation, text summarization, text classification, sentiment analysis"
},
{
"label": "mistral.mistral-7b-instruct-v0:2",
"name": "mistral.mistral-7b-instruct-v0:2"
"name": "mistral.mistral-7b-instruct-v0:2",
"description": "Classification, text generation, code generation"
},
{
"label": "mistral.mixtral-8x7b-instruct-v0:1",
"name": "mistral.mixtral-8x7b-instruct-v0:1"
"name": "mistral.mixtral-8x7b-instruct-v0:1",
"description": "Complex reasoning and analysis, text generation, code generation"
},
{
"label": "mistral.mistral-large-2402-v1:0",
"name": "mistral.mistral-large-2402-v1:0",
"description": "Complex reasoning and analysis, text generation, code generation, RAG, agents"
}
],
"regions": [
@ -194,6 +220,10 @@
{
"name": "azureChatOpenAI",
"models": [
{
"label": "gpt-4o",
"name": "gpt-4o"
},
{
"label": "gpt-4",
"name": "gpt-4"
@ -219,25 +249,37 @@
{
"name": "azureChatOpenAI_LlamaIndex",
"models": [
{
"label": "gpt-4o",
"name": "gpt-4o"
},
{
"label": "gpt-4",
"name": "gpt-4"
},
{
"label": "gpt-4-turbo",
"name": "gpt-4-turbo"
},
{
"label": "gpt-4-32k",
"name": "gpt-4-32k"
},
{
"label": "gpt-35-turbo",
"name": "gpt-35-turbo"
"label": "gpt-3.5-turbo",
"name": "gpt-3.5-turbo"
},
{
"label": "gpt-35-turbo-16k",
"name": "gpt-35-turbo-16k"
"label": "gpt-3.5-turbo-16k",
"name": "gpt-3.5-turbo-16k"
},
{
"label": "gpt-4-vision-preview",
"name": "gpt-4-vision-preview"
},
{
"label": "gpt-4-1106-preview",
"name": "gpt-4-1106-preview"
}
]
},
@ -254,6 +296,11 @@
"name": "claude-3-opus-20240229",
"description": "Most powerful model for highly complex tasks"
},
{
"label": "claude-3.5-sonnet",
"name": "claude-3-5-sonnet-20240620",
"description": "3.5 version of Claude Sonnet model"
},
{
"label": "claude-3-sonnet",
"name": "claude-3-sonnet-20240229",
@ -309,6 +356,10 @@
{
"name": "chatGoogleGenerativeAI",
"models": [
{
"label": "gemini-1.5-flash-latest",
"name": "gemini-1.5-flash-latest"
},
{
"label": "gemini-1.5-pro-latest",
"name": "gemini-1.5-pro-latest"
@ -335,9 +386,13 @@
{
"name": "chatGoogleVertexAI",
"models": [
{
"label": "gemini-1.5-flash",
"name": "gemini-1.5-flash-preview-0514"
},
{
"label": "gemini-1.5-pro",
"name": "gemini-1.5-pro"
"name": "gemini-1.5-pro-preview-0409"
},
{
"label": "gemini-1.0-pro",
@ -402,6 +457,10 @@
{
"name": "chatOpenAI",
"models": [
{
"label": "gpt-4o",
"name": "gpt-4o"
},
{
"label": "gpt-4",
"name": "gpt-4"
@ -471,6 +530,10 @@
{
"name": "chatOpenAI_LlamaIndex",
"models": [
{
"label": "gpt-4o",
"name": "gpt-4o"
},
{
"label": "gpt-4",
"name": "gpt-4"
@ -589,6 +652,23 @@
"name": "mistral-large-2402"
}
]
},
{
"name": "chatMistral_LlamaIndex",
"models": [
{
"label": "mistral-tiny",
"name": "mistral-tiny"
},
{
"label": "mistral-small",
"name": "mistral-small"
},
{
"label": "mistral-medium",
"name": "mistral-medium"
}
]
}
],
"llm": [
@ -967,22 +1047,42 @@
{
"label": "voyage-2",
"name": "voyage-2",
"description": "Base generalist embedding model optimized for both latency and quality"
"description": "General-purpose embedding model optimized for a balance between cost, latency, and retrieval quality."
},
{
"label": "voyage-code-2",
"name": "voyage-code-2",
"description": "Optimized for code retrieval"
"description": "Optimized for code retrieval."
},
{
"label": "voyage-finance-2",
"name": "voyage-finance-2",
"description": "Optimized for finance retrieval and RAG."
},
{
"label": "voyage-large-2",
"name": "voyage-large-2",
"description": "Powerful generalist embedding model"
"description": "General-purpose embedding model that is optimized for retrieval quality."
},
{
"label": "voyage-large-2-instruct",
"name": "voyage-large-2-instruct",
"description": "Instruction-tuned general-purpose embedding model optimized for clustering, classification, and retrieval."
},
{
"label": "voyage-law-2",
"name": "voyage-law-2",
"description": "Optimized for legal and long-context retrieval and RAG. Also improved performance across all domains."
},
{
"label": "voyage-lite-02-instruct",
"name": "voyage-lite-02-instruct",
"description": "Instruction-tuned for classification, clustering, and sentence textual similarity tasks"
},
{
"label": "voyage-multilingual-2",
"name": "voyage-multilingual-2",
"description": "Optimized for multilingual retrieval and RAG."
}
]
},
@ -1070,19 +1170,28 @@
"models": [
{
"label": "amazon.titan-embed-text-v1",
"name": "amazon.titan-embed-text-v1"
"name": "amazon.titan-embed-text-v1",
"description": "Embedding Dimensions: 1536"
},
{
"label": "amazon.titan-embed-text-v2",
"name": "amazon.titan-embed-text-v2:0",
"description": "Embedding Dimensions: 1024"
},
{
"label": "amazon.titan-embed-g1-text-02",
"name": "amazon.titan-embed-g1-text-02"
"name": "amazon.titan-embed-g1-text-02",
"description": "Embedding Dimensions: 1536"
},
{
"label": "cohere.embed-english-v3",
"name": "cohere.embed-english-v3"
"name": "cohere.embed-english-v3",
"description": "Embedding Dimensions: 1024"
},
{
"label": "cohere.embed-multilingual-v3",
"name": "cohere.embed-multilingual-v3"
"name": "cohere.embed-multilingual-v3",
"description": "Embedding Dimensions: 1024"
}
],
"regions": [

View File

@ -190,6 +190,7 @@ const prepareAgent = async (
const systemMessage = nodeData.inputs?.systemMessage as string
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prependMessages = options?.prependMessages
const outputParser = ChatConversationalAgent.getDefaultOutputParser({
llm: model,
@ -240,7 +241,7 @@ const prepareAgent = async (
[inputKey]: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: async (i: { input: string; steps: AgentStep[] }) => await constructScratchPad(i.steps),
[memoryKey]: async (_: { input: string; steps: AgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true, prependMessages)) as BaseMessage[]
return messages ?? []
}
},
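The one-line change above is the pattern repeated across the agent and chain nodes in this diff: a `prependMessages` array is read from `options` and threaded into `memory.getChatMessages(...)` as a third argument. A minimal sketch of the call under that assumed signature (see the `BufferMemory` hunk later in this diff for the actual implementation):

```typescript
import { BaseMessage } from '@langchain/core/messages'

// Sketch only: structural stand-ins for the Flowise interfaces used above.
interface IMessage { message: string; type: 'userMessage' | 'apiMessage' }
interface MemoryLike {
    getChatMessages(sessionId?: string, returnBaseMessages?: boolean, prependMessages?: IMessage[]): Promise<BaseMessage[] | IMessage[]>
}

// Prepended messages are injected ahead of the stored history, so the model
// sees them as the oldest turns in the conversation.
async function resolveHistory(memory: MemoryLike, sessionId?: string, prependMessages?: IMessage[]): Promise<BaseMessage[]> {
    const messages = (await memory.getChatMessages(sessionId, true, prependMessages)) as BaseMessage[]
    return messages ?? []
}
```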

View File

@ -1,186 +0,0 @@
import { flatten } from 'lodash'
import { BaseMessage } from '@langchain/core/messages'
import { ChainValues } from '@langchain/core/utils/types'
import { AgentStep } from '@langchain/core/agents'
import { RunnableSequence } from '@langchain/core/runnables'
import { ChatOpenAI, formatToOpenAIFunction } from '@langchain/openai'
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts'
import { OpenAIFunctionsAgentOutputParser } from 'langchain/agents/openai/output_parser'
import { FlowiseMemory, ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { ConsoleCallbackHandler, CustomChainHandler, additionalCallbacks } from '../../../src/handler'
import { AgentExecutor, formatAgentSteps } from '../../../src/agents'
import { checkInputs, Moderation } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
const defaultMessage = `Do your best to answer the questions. Feel free to use any tools available to look up relevant information, only if necessary.`
class ConversationalRetrievalAgent_Agents implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
inputs: INodeParams[]
badge?: string
sessionId?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'Conversational Retrieval Agent'
this.name = 'conversationalRetrievalAgent'
this.version = 4.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.badge = 'DEPRECATING'
this.icon = 'agent.svg'
this.description = `An agent optimized for retrieval during conversation, answering questions based on past dialogue, all using OpenAI's Function Calling`
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
this.inputs = [
{
label: 'Allowed Tools',
name: 'tools',
type: 'Tool',
list: true
},
{
label: 'Memory',
name: 'memory',
type: 'BaseChatMemory'
},
{
label: 'OpenAI/Azure Chat Model',
name: 'model',
type: 'BaseChatModel'
},
{
label: 'System Message',
name: 'systemMessage',
type: 'string',
default: defaultMessage,
rows: 4,
optional: true,
additionalParams: true
},
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
name: 'inputModeration',
type: 'Moderation',
optional: true,
list: true
},
{
label: 'Max Iterations',
name: 'maxIterations',
type: 'number',
optional: true,
additionalParams: true
}
]
this.sessionId = fields?.sessionId
}
async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
return prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | object> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const moderations = nodeData.inputs?.inputModeration as Moderation[]
if (moderations && moderations.length > 0) {
try {
// Use the output of the moderation chain as input for the BabyAGI agent
input = await checkInputs(moderations, input)
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 500))
//streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
return formatResponse(e.message)
}
}
const executor = prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
if (options.socketIO && options.socketIOClientId) {
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
res = await executor.invoke({ input }, { callbacks: [loggerHandler, handler, ...callbacks] })
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
}
await memory.addChatMessages(
[
{
text: input,
type: 'userMessage'
},
{
text: res?.output,
type: 'apiMessage'
}
],
this.sessionId
)
return res?.output
}
}
const prepareAgent = (nodeData: INodeData, flowObj: { sessionId?: string; chatId?: string; input?: string }) => {
const model = nodeData.inputs?.model as ChatOpenAI
const memory = nodeData.inputs?.memory as FlowiseMemory
const systemMessage = nodeData.inputs?.systemMessage as string
const maxIterations = nodeData.inputs?.maxIterations as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prompt = ChatPromptTemplate.fromMessages([
['ai', systemMessage ? systemMessage : defaultMessage],
new MessagesPlaceholder(memoryKey),
['human', `{${inputKey}}`],
new MessagesPlaceholder('agent_scratchpad')
])
const modelWithFunctions = model.bind({
functions: [...tools.map((tool: any) => formatToOpenAIFunction(tool))]
})
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: AgentStep[] }) => formatAgentSteps(i.steps),
[memoryKey]: async (_: { input: string; steps: AgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
return messages ?? []
}
},
prompt,
modelWithFunctions,
new OpenAIFunctionsAgentOutputParser()
])
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
sessionId: flowObj?.sessionId,
chatId: flowObj?.chatId,
input: flowObj?.input,
returnIntermediateSteps: true,
verbose: process.env.DEBUG === 'true' ? true : false,
maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
return executor
}
module.exports = { nodeClass: ConversationalRetrievalAgent_Agents }

View File

@ -1,7 +0,0 @@
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10 6C10 5.44772 10.4477 5 11 5H21C21.5523 5 22 5.44772 22 6V11C22 13.2091 20.2091 15 18 15H14C11.7909 15 10 13.2091 10 11V6Z" stroke="black" stroke-width="2" stroke-linejoin="round"/>
<path d="M16 5V3" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<circle cx="14" cy="9" r="1.5" fill="black"/>
<circle cx="18" cy="9" r="1.5" fill="black"/>
<path d="M26 27C26 22.0294 21.5228 18 16 18C10.4772 18 6 22.0294 6 27" stroke="black" stroke-width="2" stroke-linecap="round"/>
</svg>

Before  |  Size: 616 B

View File

@ -0,0 +1 @@
<svg width="32" height="32" fill="none" xmlns="http://www.w3.org/2000/svg"><circle cx="16" cy="16" r="14" fill="#CC9B7A"/><path d="m10 21 4.5-10L19 21m-7.2-2.857h5.4M18.5 11 23 21" stroke="#1F1F1E" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/></svg>

After  |  Size: 269 B

View File

@ -1,9 +1,9 @@
import { flatten } from 'lodash'
import { MessageContentTextDetail, ChatMessage, AnthropicAgent, Anthropic } from 'llamaindex'
import { getBaseClasses } from '../../../../src/utils'
import { FlowiseMemory, ICommonObject, IMessage, INode, INodeData, INodeParams, IUsedTool } from '../../../../src/Interface'
class AnthropicAgent_LlamaIndex_Agents implements INode {
label: string
name: string
version: number
@ -18,16 +18,15 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
badge?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'Anthropic Agent'
this.name = 'anthropicAgentLlamaIndex'
this.version = 1.0
this.type = 'AnthropicAgent'
this.category = 'Agents'
this.icon = 'Anthropic.svg'
this.description = `Agent that uses Anthropic Claude Function Calling to pick the tools and args to call using LlamaIndex`
this.baseClasses = [this.type, ...getBaseClasses(AnthropicAgent)]
this.tags = ['LlamaIndex']
this.badge = 'NEW'
this.inputs = [
{
label: 'Tools',
@ -41,7 +40,7 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
type: 'BaseChatMemory'
},
{
label: 'Anthropic Claude Model',
name: 'model',
type: 'BaseChatModel_LlamaIndex'
},
@ -61,10 +60,12 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
return null
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const model = nodeData.inputs?.model as Anthropic
const systemMessage = nodeData.inputs?.systemMessage as string
const prependMessages = options?.prependMessages
let tools = nodeData.inputs?.tools
tools = flatten(tools)
@ -77,7 +78,7 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
})
}
const msgs = (await memory.getChatMessages(this.sessionId, false, prependMessages)) as IMessage[]
for (const message of msgs) {
if (message.type === 'apiMessage') {
chatHistory.push({
@ -92,31 +93,33 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
}
}
const agent = new AnthropicAgent({
tools,
llm: model,
chatHistory: chatHistory,
verbose: process.env.DEBUG === 'true' ? true : false
})
let text = ''
const usedTools: IUsedTool[] = []
const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })
if (response.sources.length) {
for (const sourceTool of response.sources) {
usedTools.push({
tool: sourceTool.tool?.metadata.name ?? '',
toolInput: sourceTool.input,
toolOutput: sourceTool.output as any
})
}
}
if (Array.isArray(response.response.message.content) && response.response.message.content.length > 0) {
text = (response.response.message.content[0] as MessageContentTextDetail).text
} else {
text = response.response.message.content as string
}
await memory.addChatMessages(
[
@ -136,4 +139,4 @@ class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
}
}
module.exports = { nodeClass: AnthropicAgent_LlamaIndex_Agents }
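The response handling above changes because Anthropic message content may be either a plain string or an array of content blocks. A standalone sketch of the same extraction logic, with a local `TextBlock` type standing in for llamaindex's `MessageContentTextDetail`:

```typescript
// TextBlock is a local stand-in for MessageContentTextDetail from llamaindex.
type TextBlock = { type: 'text'; text: string }

function extractText(content: string | TextBlock[]): string {
    if (Array.isArray(content) && content.length > 0) {
        return content[0].text // take the first text block, as the node above does
    }
    return content as string
}
```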

View File

@ -0,0 +1,167 @@
import { flatten } from 'lodash'
import { ChatMessage, OpenAI, OpenAIAgent } from 'llamaindex'
import { getBaseClasses } from '../../../../src/utils'
import { FlowiseMemory, ICommonObject, IMessage, INode, INodeData, INodeParams, IUsedTool } from '../../../../src/Interface'
class OpenAIFunctionAgent_LlamaIndex_Agents implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
tags: string[]
inputs: INodeParams[]
sessionId?: string
badge?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'OpenAI Tool Agent'
this.name = 'openAIToolAgentLlamaIndex'
this.version = 2.0
this.type = 'OpenAIToolAgent'
this.category = 'Agents'
this.icon = 'function.svg'
this.description = `Agent that uses OpenAI Function Calling to pick the tools and args to call using LlamaIndex`
this.baseClasses = [this.type, ...getBaseClasses(OpenAIAgent)]
this.tags = ['LlamaIndex']
this.inputs = [
{
label: 'Tools',
name: 'tools',
type: 'Tool_LlamaIndex',
list: true
},
{
label: 'Memory',
name: 'memory',
type: 'BaseChatMemory'
},
{
label: 'OpenAI/Azure Chat Model',
name: 'model',
type: 'BaseChatModel_LlamaIndex'
},
{
label: 'System Message',
name: 'systemMessage',
type: 'string',
rows: 4,
optional: true,
additionalParams: true
}
]
this.sessionId = fields?.sessionId
}
async init(): Promise<any> {
return null
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const model = nodeData.inputs?.model as OpenAI
const systemMessage = nodeData.inputs?.systemMessage as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const isStreamingEnabled = options.socketIO && options.socketIOClientId
const chatHistory = [] as ChatMessage[]
if (systemMessage) {
chatHistory.push({
content: systemMessage,
role: 'system'
})
}
const msgs = (await memory.getChatMessages(this.sessionId, false)) as IMessage[]
for (const message of msgs) {
if (message.type === 'apiMessage') {
chatHistory.push({
content: message.message,
role: 'assistant'
})
} else if (message.type === 'userMessage') {
chatHistory.push({
content: message.message,
role: 'user'
})
}
}
const agent = new OpenAIAgent({
tools,
llm: model,
chatHistory: chatHistory,
verbose: process.env.DEBUG === 'true' ? true : false
})
let text = ''
let isStreamingStarted = false
const usedTools: IUsedTool[] = []
if (isStreamingEnabled) {
const stream = await agent.chat({
message: input,
chatHistory,
stream: true,
verbose: process.env.DEBUG === 'true' ? true : false
})
for await (const chunk of stream) {
//console.log('chunk', chunk)
text += chunk.response.delta
if (!isStreamingStarted) {
isStreamingStarted = true
options.socketIO.to(options.socketIOClientId).emit('start', chunk.response.delta)
if (chunk.sources.length) {
for (const sourceTool of chunk.sources) {
usedTools.push({
tool: sourceTool.tool?.metadata.name ?? '',
toolInput: sourceTool.input,
toolOutput: sourceTool.output as any
})
}
options.socketIO.to(options.socketIOClientId).emit('usedTools', usedTools)
}
}
options.socketIO.to(options.socketIOClientId).emit('token', chunk.response.delta)
}
} else {
const response = await agent.chat({ message: input, chatHistory, verbose: process.env.DEBUG === 'true' ? true : false })
if (response.sources.length) {
for (const sourceTool of response.sources) {
usedTools.push({
tool: sourceTool.tool?.metadata.name ?? '',
toolInput: sourceTool.input,
toolOutput: sourceTool.output as any
})
}
}
text = response.response.message.content as string
}
await memory.addChatMessages(
[
{
text: input,
type: 'userMessage'
},
{
text: text,
type: 'apiMessage'
}
],
this.sessionId
)
return usedTools.length ? { text: text, usedTools } : text
}
}
module.exports = { nodeClass: OpenAIFunctionAgent_LlamaIndex_Agents }
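Worth noting in the streaming branch above is the socket event ordering: `start` fires once with the first delta, `usedTools` follows immediately if the first chunk carries tool sources, and every delta (including the first) is also sent as `token`. A sketch of that first-chunk handling, with `Emitter` as a structural stand-in for the socket.io server instance:

```typescript
// Emitter is a minimal structural stand-in for the socket.io instance used above.
type Emitter = { to: (room: string) => { emit: (event: string, payload?: unknown) => void } }

function emitFirstChunk(io: Emitter, clientId: string, delta: string, usedTools: unknown[]) {
    io.to(clientId).emit('start', delta) // fired once, on the first streamed token
    if (usedTools.length) io.to(clientId).emit('usedTools', usedTools) // tools, if any
    io.to(clientId).emit('token', delta) // every delta also goes out as a token
}
```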

View File

@ -1 +0,0 @@
<svg width="32" height="32" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M5 6H4v19.5h1m8-7.5v3h1m7-11.5V6h1m-5 7.5V10h1" stroke="#000" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><mask id="MistralAI__a" style="mask-type:alpha" maskUnits="userSpaceOnUse" x="5" y="6" width="22" height="20"><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" fill="#FD7000"/></mask><g mask="url(#MistralAI__a)"><path fill="#FFCD00" d="M4 6h25v4H4z"/></g><mask id="MistralAI__b" style="mask-type:alpha" maskUnits="userSpaceOnUse" x="5" y="6" width="22" height="20"><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" fill="#FD7000"/></mask><g mask="url(#MistralAI__b)"><path fill="#FFA200" d="M4 10h25v4H4z"/></g><mask id="MistralAI__c" style="mask-type:alpha" maskUnits="userSpaceOnUse" x="5" y="6" width="22" height="20"><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" fill="#FD7000"/></mask><g mask="url(#MistralAI__c)"><path fill="#FF6E00" d="M4 14h25v4H4z"/></g><mask id="MistralAI__d" style="mask-type:alpha" maskUnits="userSpaceOnUse" x="5" y="6" width="22" height="20"><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" fill="#FD7000"/></mask><g mask="url(#MistralAI__d)"><path fill="#FF4A09" d="M4 18h25v4H4z"/></g><mask id="MistralAI__e" style="mask-type:alpha" maskUnits="userSpaceOnUse" x="5" y="6" width="22" height="20"><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" fill="#FD7000"/></mask><g mask="url(#MistralAI__e)"><path fill="#FE060F" d="M4 22h25v4H4z"/></g><path d="M21 18v7h1" stroke="#000" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M5 6v19.5h5v-8h4V21h4v-3.5h4V25h5V6h-4.5v4H18v3.5h-4v-4h-4V6H5Z" stroke="#000" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/></svg>

Before  |  Size: 1.8 KiB

View File

@ -1,212 +0,0 @@
import { flatten } from 'lodash'
import { BaseMessage } from '@langchain/core/messages'
import { ChainValues } from '@langchain/core/utils/types'
import { AgentStep } from '@langchain/core/agents'
import { RunnableSequence } from '@langchain/core/runnables'
import { ChatOpenAI } from '@langchain/openai'
import { convertToOpenAITool } from '@langchain/core/utils/function_calling'
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts'
import { OpenAIToolsAgentOutputParser } from 'langchain/agents/openai/output_parser'
import { getBaseClasses } from '../../../src/utils'
import { FlowiseMemory, ICommonObject, INode, INodeData, INodeParams, IUsedTool } from '../../../src/Interface'
import { ConsoleCallbackHandler, CustomChainHandler, additionalCallbacks } from '../../../src/handler'
import { AgentExecutor, formatAgentSteps } from '../../../src/agents'
import { Moderation, checkInputs, streamResponse } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
class MistralAIToolAgent_Agents implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
inputs: INodeParams[]
sessionId?: string
badge?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'MistralAI Tool Agent'
this.name = 'mistralAIToolAgent'
this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'MistralAI.svg'
this.badge = 'DEPRECATING'
this.description = `Agent that uses MistralAI Function Calling to pick the tools and args to call`
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
this.inputs = [
{
label: 'Tools',
name: 'tools',
type: 'Tool',
list: true
},
{
label: 'Memory',
name: 'memory',
type: 'BaseChatMemory'
},
{
label: 'MistralAI Chat Model',
name: 'model',
type: 'BaseChatModel'
},
{
label: 'System Message',
name: 'systemMessage',
type: 'string',
rows: 4,
optional: true,
additionalParams: true
},
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
name: 'inputModeration',
type: 'Moderation',
optional: true,
list: true
},
{
label: 'Max Iterations',
name: 'maxIterations',
type: 'number',
optional: true,
additionalParams: true
}
]
this.sessionId = fields?.sessionId
}
async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
return prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const moderations = nodeData.inputs?.inputModeration as Moderation[]
if (moderations && moderations.length > 0) {
try {
// Use the output of the moderation chain as input for the OpenAI Function Agent
input = await checkInputs(moderations, input)
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 500))
streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
return formatResponse(e.message)
}
}
const executor = prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
let sourceDocuments: ICommonObject[] = []
let usedTools: IUsedTool[] = []
if (options.socketIO && options.socketIOClientId) {
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
res = await executor.invoke({ input }, { callbacks: [loggerHandler, handler, ...callbacks] })
if (res.sourceDocuments) {
options.socketIO.to(options.socketIOClientId).emit('sourceDocuments', flatten(res.sourceDocuments))
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
options.socketIO.to(options.socketIOClientId).emit('usedTools', res.usedTools)
usedTools = res.usedTools
}
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
if (res.sourceDocuments) {
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
usedTools = res.usedTools
}
}
await memory.addChatMessages(
[
{
text: input,
type: 'userMessage'
},
{
text: res?.output,
type: 'apiMessage'
}
],
this.sessionId
)
let finalRes = res?.output
if (sourceDocuments.length || usedTools.length) {
finalRes = { text: res?.output }
if (sourceDocuments.length) {
finalRes.sourceDocuments = flatten(sourceDocuments)
}
if (usedTools.length) {
finalRes.usedTools = usedTools
}
return finalRes
}
return finalRes
}
}
const prepareAgent = (nodeData: INodeData, flowObj: { sessionId?: string; chatId?: string; input?: string }) => {
const model = nodeData.inputs?.model as ChatOpenAI
const memory = nodeData.inputs?.memory as FlowiseMemory
const maxIterations = nodeData.inputs?.maxIterations as string
const systemMessage = nodeData.inputs?.systemMessage as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
new MessagesPlaceholder(memoryKey),
['human', `{${inputKey}}`],
new MessagesPlaceholder('agent_scratchpad')
])
const llmWithTools = model.bind({
tools: tools.map(convertToOpenAITool)
})
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: AgentStep[] }) => formatAgentSteps(i.steps),
[memoryKey]: async (_: { input: string; steps: AgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
return messages ?? []
}
},
prompt,
llmWithTools,
new OpenAIToolsAgentOutputParser()
])
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
sessionId: flowObj?.sessionId,
chatId: flowObj?.chatId,
input: flowObj?.input,
verbose: process.env.DEBUG === 'true' ? true : false,
maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
return executor
}
module.exports = { nodeClass: MistralAIToolAgent_Agents }

View File

@ -8,7 +8,7 @@ import { zodToJsonSchema } from 'zod-to-json-schema'
import { AnalyticHandler } from '../../../src/handler'
import { Moderation, checkInputs, streamResponse } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
import { addSingleFileToStorage } from '../../../src/storageUtils'
const lenticularBracketRegex = /【[^】]*】/g
const imageRegex = /<img[^>]*\/>/g
@ -27,7 +27,7 @@ class OpenAIAssistant_Agents implements INode {
constructor() {
this.label = 'OpenAI Assistant'
this.name = 'openAIAssistant'
this.version = 4.0
this.type = 'OpenAIAssistant'
this.category = 'Agents'
this.icon = 'assistant.svg'
@ -54,6 +54,25 @@ class OpenAIAssistant_Agents implements INode {
optional: true,
list: true
},
{
label: 'Tool Choice',
name: 'toolChoice',
type: 'string',
description:
'Controls which (if any) tool is called by the model. Can be "none", "auto", "required", or the name of a tool. Refer <a href="https://platform.openai.com/docs/api-reference/runs/createRun#runs-createrun-tool_choice" target="_blank">here</a> for more information',
placeholder: 'file_search',
optional: true,
additionalParams: true
},
{
label: 'Parallel Tool Calls',
name: 'parallelToolCalls',
type: 'boolean',
description: 'Whether to enable parallel function calling during tool use. Defaults to true',
default: true,
optional: true,
additionalParams: true
},
{
label: 'Disable File Download',
name: 'disableFileDownload',
@ -138,10 +157,14 @@ class OpenAIAssistant_Agents implements INode {
const openai = new OpenAI({ apiKey: openAIApiKey })
options.logger.info(`Clearing OpenAI Thread ${sessionId}`)
try {
if (sessionId && sessionId.startsWith('thread_')) {
await openai.beta.threads.del(sessionId)
options.logger.info(`Successfully cleared OpenAI Thread ${sessionId}`)
} else {
options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
}
} catch (e) {
options.logger.error(`Error clearing OpenAI Thread ${sessionId}`)
}
}
@ -151,6 +174,8 @@ class OpenAIAssistant_Agents implements INode {
const databaseEntities = options.databaseEntities as IDatabaseEntity
const disableFileDownload = nodeData.inputs?.disableFileDownload as boolean
const moderations = nodeData.inputs?.inputModeration as Moderation[]
const _toolChoice = nodeData.inputs?.toolChoice as string
const parallelToolCalls = nodeData.inputs?.parallelToolCalls as boolean
const isStreaming = options.socketIO && options.socketIOClientId
const socketIO = isStreaming ? options.socketIO : undefined
const socketIOClientId = isStreaming ? options.socketIOClientId : ''
@ -269,10 +294,25 @@ class OpenAIAssistant_Agents implements INode {
let runThreadId = ''
let isStreamingStarted = false
let toolChoice: any
if (_toolChoice) {
if (_toolChoice === 'file_search') {
toolChoice = { type: 'file_search' }
} else if (_toolChoice === 'code_interpreter') {
toolChoice = { type: 'code_interpreter' }
} else if (_toolChoice === 'none' || _toolChoice === 'auto' || _toolChoice === 'required') {
toolChoice = _toolChoice
} else {
toolChoice = { type: 'function', function: { name: _toolChoice } }
}
}
if (isStreaming) {
const streamThread = await openai.beta.threads.runs.create(threadId, {
assistant_id: retrievedAssistant.id,
stream: true,
tool_choice: toolChoice,
parallel_tool_calls: parallelToolCalls
})
for await (const event of streamThread) {
@ -595,7 +635,9 @@ class OpenAIAssistant_Agents implements INode {
// Polling run status
const runThread = await openai.beta.threads.runs.create(threadId, {
assistant_id: retrievedAssistant.id,
tool_choice: toolChoice,
parallel_tool_calls: parallelToolCalls
})
runThreadId = runThread.id
let state = await promise(threadId, runThread.id)
@ -608,7 +650,9 @@ class OpenAIAssistant_Agents implements INode {
if (retries > 0) {
retries -= 1
const newRunThread = await openai.beta.threads.runs.create(threadId, {
assistant_id: retrievedAssistant.id,
tool_choice: toolChoice,
parallel_tool_calls: parallelToolCalls
})
runThreadId = newRunThread.id
state = await promise(threadId, newRunThread.id)
@ -731,7 +775,7 @@ const downloadImg = async (openai: OpenAI, fileId: string, fileName: string, ...
const image_data_buffer = Buffer.from(image_data)
const mime = 'image/png'
await addSingleFileToStorage(mime, image_data_buffer, fileName, ...paths)
return image_data_buffer
}
@ -754,7 +798,7 @@ const downloadFile = async (openAIApiKey: string, fileObj: any, fileName: string
const data_buffer = Buffer.from(data)
const mime = 'application/octet-stream'
return await addSingleFileToStorage(mime, data_buffer, fileName, ...paths)
} catch (error) {
console.error('Error downloading or writing the file:', error)
return ''
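The new `toolChoice` input above is mapped to the shape the OpenAI runs API expects before being passed as `tool_choice`. A condensed restatement of that mapping (`get_weather` below is a hypothetical custom tool name):

```typescript
// Condensed version of the mapping above: built-in tools become typed objects,
// mode strings pass through, and anything else is treated as a function name.
function mapToolChoice(choice: string): string | object {
    if (choice === 'none' || choice === 'auto' || choice === 'required') return choice
    if (choice === 'file_search' || choice === 'code_interpreter') return { type: choice }
    return { type: 'function', function: { name: choice } }
}

mapToolChoice('file_search') // { type: 'file_search' }
mapToolChoice('get_weather') // { type: 'function', function: { name: 'get_weather' } }
```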

View File

@ -1,211 +0,0 @@
import { flatten } from 'lodash'
import { BaseMessage } from '@langchain/core/messages'
import { ChainValues } from '@langchain/core/utils/types'
import { AgentStep } from '@langchain/core/agents'
import { RunnableSequence } from '@langchain/core/runnables'
import { ChatOpenAI, formatToOpenAIFunction } from '@langchain/openai'
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts'
import { OpenAIFunctionsAgentOutputParser } from 'langchain/agents/openai/output_parser'
import { getBaseClasses } from '../../../src/utils'
import { FlowiseMemory, ICommonObject, INode, INodeData, INodeParams, IUsedTool } from '../../../src/Interface'
import { ConsoleCallbackHandler, CustomChainHandler, additionalCallbacks } from '../../../src/handler'
import { AgentExecutor, formatAgentSteps } from '../../../src/agents'
import { Moderation, checkInputs } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
class OpenAIFunctionAgent_Agents implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
inputs: INodeParams[]
badge?: string
sessionId?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'OpenAI Function Agent'
this.name = 'openAIFunctionAgent'
this.version = 4.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'function.svg'
this.description = `An agent that uses OpenAI Function Calling to pick the tool and args to call`
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
this.badge = 'DEPRECATING'
this.inputs = [
{
label: 'Allowed Tools',
name: 'tools',
type: 'Tool',
list: true
},
{
label: 'Memory',
name: 'memory',
type: 'BaseChatMemory'
},
{
label: 'OpenAI/Azure Chat Model',
name: 'model',
type: 'BaseChatModel'
},
{
label: 'System Message',
name: 'systemMessage',
type: 'string',
rows: 4,
optional: true,
additionalParams: true
},
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
name: 'inputModeration',
type: 'Moderation',
optional: true,
list: true
},
{
label: 'Max Iterations',
name: 'maxIterations',
type: 'number',
optional: true,
additionalParams: true
}
]
this.sessionId = fields?.sessionId
}
async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
return prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const moderations = nodeData.inputs?.inputModeration as Moderation[]
if (moderations && moderations.length > 0) {
try {
// Use the output of the moderation chain as input for the OpenAI Function Agent
input = await checkInputs(moderations, input)
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 500))
//streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
return formatResponse(e.message)
}
}
const executor = prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
let sourceDocuments: ICommonObject[] = []
let usedTools: IUsedTool[] = []
if (options.socketIO && options.socketIOClientId) {
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
res = await executor.invoke({ input }, { callbacks: [loggerHandler, handler, ...callbacks] })
if (res.sourceDocuments) {
options.socketIO.to(options.socketIOClientId).emit('sourceDocuments', flatten(res.sourceDocuments))
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
options.socketIO.to(options.socketIOClientId).emit('usedTools', res.usedTools)
usedTools = res.usedTools
}
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
if (res.sourceDocuments) {
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
usedTools = res.usedTools
}
}
await memory.addChatMessages(
[
{
text: input,
type: 'userMessage'
},
{
text: res?.output,
type: 'apiMessage'
}
],
this.sessionId
)
let finalRes = res?.output
if (sourceDocuments.length || usedTools.length) {
finalRes = { text: res?.output }
if (sourceDocuments.length) {
finalRes.sourceDocuments = flatten(sourceDocuments)
}
if (usedTools.length) {
finalRes.usedTools = usedTools
}
return finalRes
}
return finalRes
}
}
const prepareAgent = (nodeData: INodeData, flowObj: { sessionId?: string; chatId?: string; input?: string }) => {
const model = nodeData.inputs?.model as ChatOpenAI
const maxIterations = nodeData.inputs?.maxIterations as string
const memory = nodeData.inputs?.memory as FlowiseMemory
const systemMessage = nodeData.inputs?.systemMessage as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
new MessagesPlaceholder(memoryKey),
['human', `{${inputKey}}`],
new MessagesPlaceholder('agent_scratchpad')
])
const modelWithFunctions = model.bind({
functions: [...tools.map((tool: any) => formatToOpenAIFunction(tool))]
})
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: AgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: AgentStep[] }) => formatAgentSteps(i.steps),
[memoryKey]: async (_: { input: string; steps: AgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
return messages ?? []
}
},
prompt,
modelWithFunctions,
new OpenAIFunctionsAgentOutputParser()
])
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
sessionId: flowObj?.sessionId,
chatId: flowObj?.chatId,
input: flowObj?.input,
verbose: process.env.DEBUG === 'true' ? true : false,
maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
return executor
}
module.exports = { nodeClass: OpenAIFunctionAgent_Agents }

View File

@ -1,210 +0,0 @@
import { flatten } from 'lodash'
import { BaseMessage } from '@langchain/core/messages'
import { ChainValues } from '@langchain/core/utils/types'
import { RunnableSequence } from '@langchain/core/runnables'
import { ChatOpenAI } from '@langchain/openai'
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts'
import { convertToOpenAITool } from '@langchain/core/utils/function_calling'
import { formatToOpenAIToolMessages } from 'langchain/agents/format_scratchpad/openai_tools'
import { OpenAIToolsAgentOutputParser, type ToolsAgentStep } from 'langchain/agents/openai/output_parser'
import { getBaseClasses } from '../../../src/utils'
import { FlowiseMemory, ICommonObject, INode, INodeData, INodeParams, IUsedTool } from '../../../src/Interface'
import { ConsoleCallbackHandler, CustomChainHandler, additionalCallbacks } from '../../../src/handler'
import { AgentExecutor } from '../../../src/agents'
import { Moderation, checkInputs } from '../../moderation/Moderation'
import { formatResponse } from '../../outputparsers/OutputParserHelpers'
class OpenAIToolAgent_Agents implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
inputs: INodeParams[]
sessionId?: string
badge?: string
constructor(fields?: { sessionId?: string }) {
this.label = 'OpenAI Tool Agent'
this.name = 'openAIToolAgent'
this.version = 1.0
this.type = 'AgentExecutor'
this.category = 'Agents'
this.icon = 'function.svg'
this.description = `Agent that uses OpenAI Function Calling to pick the tools and args to call`
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
this.badge = 'DEPRECATING'
this.inputs = [
{
label: 'Tools',
name: 'tools',
type: 'Tool',
list: true
},
{
label: 'Memory',
name: 'memory',
type: 'BaseChatMemory'
},
{
label: 'OpenAI/Azure Chat Model',
name: 'model',
type: 'BaseChatModel'
},
{
label: 'System Message',
name: 'systemMessage',
type: 'string',
rows: 4,
optional: true,
additionalParams: true
},
{
label: 'Input Moderation',
description: 'Detect text that could generate harmful output and prevent it from being sent to the language model',
name: 'inputModeration',
type: 'Moderation',
optional: true,
list: true
},
{
label: 'Max Iterations',
name: 'maxIterations',
type: 'number',
optional: true,
additionalParams: true
}
]
this.sessionId = fields?.sessionId
}
async init(nodeData: INodeData, input: string, options: ICommonObject): Promise<any> {
return prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
}
async run(nodeData: INodeData, input: string, options: ICommonObject): Promise<string | ICommonObject> {
const memory = nodeData.inputs?.memory as FlowiseMemory
const moderations = nodeData.inputs?.inputModeration as Moderation[]
if (moderations && moderations.length > 0) {
try {
// Use the output of the moderation chain as input for the OpenAI Function Agent
input = await checkInputs(moderations, input)
} catch (e) {
await new Promise((resolve) => setTimeout(resolve, 500))
//streamResponse(options.socketIO && options.socketIOClientId, e.message, options.socketIO, options.socketIOClientId)
return formatResponse(e.message)
}
}
const executor = prepareAgent(nodeData, { sessionId: this.sessionId, chatId: options.chatId, input })
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
let res: ChainValues = {}
let sourceDocuments: ICommonObject[] = []
let usedTools: IUsedTool[] = []
if (options.socketIO && options.socketIOClientId) {
const handler = new CustomChainHandler(options.socketIO, options.socketIOClientId)
res = await executor.invoke({ input }, { callbacks: [loggerHandler, handler, ...callbacks] })
if (res.sourceDocuments) {
options.socketIO.to(options.socketIOClientId).emit('sourceDocuments', flatten(res.sourceDocuments))
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
options.socketIO.to(options.socketIOClientId).emit('usedTools', res.usedTools)
usedTools = res.usedTools
}
} else {
res = await executor.invoke({ input }, { callbacks: [loggerHandler, ...callbacks] })
if (res.sourceDocuments) {
sourceDocuments = res.sourceDocuments
}
if (res.usedTools) {
usedTools = res.usedTools
}
}
await memory.addChatMessages(
[
{
text: input,
type: 'userMessage'
},
{
text: res?.output,
type: 'apiMessage'
}
],
this.sessionId
)
let finalRes = res?.output
if (sourceDocuments.length || usedTools.length) {
finalRes = { text: res?.output }
if (sourceDocuments.length) {
finalRes.sourceDocuments = flatten(sourceDocuments)
}
if (usedTools.length) {
finalRes.usedTools = usedTools
}
return finalRes
}
return finalRes
}
}
const prepareAgent = (nodeData: INodeData, flowObj: { sessionId?: string; chatId?: string; input?: string }) => {
const model = nodeData.inputs?.model as ChatOpenAI
const maxIterations = nodeData.inputs?.maxIterations as string
const memory = nodeData.inputs?.memory as FlowiseMemory
const systemMessage = nodeData.inputs?.systemMessage as string
let tools = nodeData.inputs?.tools
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage ? systemMessage : `You are a helpful AI assistant.`],
new MessagesPlaceholder(memoryKey),
['human', `{${inputKey}}`],
new MessagesPlaceholder('agent_scratchpad')
])
const modelWithTools = model.bind({ tools: tools.map(convertToOpenAITool) })
const runnableAgent = RunnableSequence.from([
{
[inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) => formatToOpenAIToolMessages(i.steps),
[memoryKey]: async (_: { input: string; steps: ToolsAgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true)) as BaseMessage[]
return messages ?? []
}
},
prompt,
modelWithTools,
new OpenAIToolsAgentOutputParser()
])
const executor = AgentExecutor.fromAgentAndTools({
agent: runnableAgent,
tools,
sessionId: flowObj?.sessionId,
chatId: flowObj?.chatId,
input: flowObj?.input,
verbose: process.env.DEBUG === 'true' ? true : false,
maxIterations: maxIterations ? parseFloat(maxIterations) : undefined
})
return executor
}
module.exports = { nodeClass: OpenAIToolAgent_Agents }

View File

@ -1,9 +0,0 @@
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M16 12.6108L22 15.9608" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7.17701 19.5848C6.49568 20.4069 6.12505 21.4424 6.12993 22.5101C6.13481 23.5779 6.51489 24.6099 7.2037 25.4258C7.89252 26.2416 8.84622 26.7893 9.89802 26.9732C10.9498 27.157 12.0328 26.9653 12.9575 26.4314L15.4787 24.9657M18.6002 14.106V19.5848" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8.19877 9.98459C6.39026 9.67775 4.57524 10.4982 3.60403 12.1806C3.00524 13.2178 2.84295 14.4504 3.15284 15.6073C3.46273 16.7642 4.21943 17.7507 5.25652 18.3498L10.3049 21.3269C10.6109 21.5074 10.9898 21.5119 11.3001 21.3388L18.6 17.2655" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M17.0172 6.06585C16.6456 5.06522 15.9342 4.227 15.0072 3.6977C14.0803 3.1684 12.9969 2.98168 11.9462 3.17018C10.8956 3.35869 9.94464 3.91042 9.25954 4.72895C8.57444 5.54747 8.19879 6.58074 8.19824 7.64814V13.6575C8.19824 14.0154 8.38951 14.346 8.69977 14.5244L15.9992 18.7215" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M24.8216 11.7476C25.5029 10.9255 25.8735 9.89004 25.8687 8.8223C25.8638 7.75457 25.4837 6.72253 24.7949 5.90667C24.1061 5.09082 23.1524 4.54308 22.1006 4.35924C21.0488 4.17541 19.9658 4.36718 19.0411 4.90101L13.8942 7.90613C13.5872 8.08539 13.3984 8.41418 13.3984 8.76971V17.2265" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M28.3944 19.0635C28.9932 18.0263 29.1555 16.7937 28.8456 15.6368C28.5357 14.4799 27.779 13.4934 26.7419 12.8943L21.6409 9.91752C21.3316 9.73703 20.9494 9.7357 20.6388 9.91405L13.3984 14.0723" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M18 28.9997H18.8071C19.6909 28.9997 20.4526 28.3921 20.6297 27.546L21.991 21.4537C22.1681 20.6076 22.9299 20 23.8136 20H24.6207M20.0929 22.7023H23.8136M24 25.0214H24.5014C24.8438 25.0214 25.1586 25.2052 25.3207 25.5L27.3429 28.5213C27.5051 28.8161 27.8198 29 28.1622 29H28.6997M24.049 29C24.6261 29 25.1609 28.7041 25.4578 28.2205L27.2424 25.8009C27.5393 25.3173 28.0741 25.0214 28.6512 25.0214" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before  |  Size: 2.3 KiB

View File

@ -80,6 +80,7 @@ class ReActAgentChat_Agents implements INode {
const model = nodeData.inputs?.model as BaseChatModel
let tools = nodeData.inputs?.tools as Tool[]
const moderations = nodeData.inputs?.inputModeration as Moderation[]
const prependMessages = options?.prependMessages
if (moderations && moderations.length > 0) {
try {
@ -134,7 +135,7 @@ class ReActAgentChat_Agents implements INode {
const callbacks = await additionalCallbacks(nodeData, options)
const chatHistory = ((await memory.getChatMessages(this.sessionId, false, prependMessages)) as IMessage[]) ?? []
const chatHistoryString = chatHistory.map((hist) => hist.message).join('\n')
const result = await executor.invoke({ input, chat_history: chatHistoryString }, { callbacks })

View File

@ -36,7 +36,6 @@ class ToolAgent_Agents implements INode {
this.icon = 'toolAgent.png'
this.description = `Agent that uses Function Calling to pick the tools and args to call`
this.baseClasses = [this.type, ...getBaseClasses(AgentExecutor)]
this.badge = 'NEW'
this.inputs = [
{
label: 'Tools',
@ -54,7 +53,7 @@ class ToolAgent_Agents implements INode {
name: 'model',
type: 'BaseChatModel',
description:
'Only compatible with models that are capable of function calling: ChatOpenAI, ChatMistral, ChatAnthropic, ChatGoogleGenerativeAI, ChatVertexAI, GroqChat'
},
{
label: 'System Message',
@ -191,6 +190,7 @@ const prepareAgent = async (
tools = flatten(tools)
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const prependMessages = options?.prependMessages
const prompt = ChatPromptTemplate.fromMessages([
['system', systemMessage],
@ -239,7 +239,7 @@ const prepareAgent = async (
[inputKey]: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) => formatToOpenAIToolMessages(i.steps),
[memoryKey]: async (_: { input: string; steps: ToolsAgentStep[] }) => {
const messages = (await memory.getChatMessages(flowObj?.sessionId, true, prependMessages)) as BaseMessage[]
return messages ?? []
}
},

View File

@ -122,7 +122,7 @@ class XMLAgent_Agents implements INode {
return formatResponse(e.message)
}
}
const executor = await prepareAgent(nodeData, options, { sessionId: this.sessionId, chatId: options.chatId, input })
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
@ -183,7 +183,11 @@ class XMLAgent_Agents implements INode {
}
}
const prepareAgent = async (
nodeData: INodeData,
options: ICommonObject,
flowObj: { sessionId?: string; chatId?: string; input?: string }
) => {
const model = nodeData.inputs?.model as BaseChatModel
const maxIterations = nodeData.inputs?.maxIterations as string
const memory = nodeData.inputs?.memory as FlowiseMemory
@ -192,6 +196,7 @@ const prepareAgent = async (nodeData: INodeData, flowObj: { sessionId?: string;
tools = flatten(tools)
const inputKey = memory.inputKey ? memory.inputKey : 'input'
const memoryKey = memory.memoryKey ? memory.memoryKey : 'chat_history'
const prependMessages = options?.prependMessages
let promptMessage = systemMessage ? systemMessage : defaultSystemMessage
if (memory.memoryKey) promptMessage = promptMessage.replaceAll('{chat_history}', `{${memory.memoryKey}}`)
@ -210,7 +215,7 @@ const prepareAgent = async (nodeData: INodeData, flowObj: { sessionId?: string;
const llmWithStop = model.bind({ stop: ['</tool_input>', '</final_answer>'] })
const messages = (await memory.getChatMessages(flowObj.sessionId, false, prependMessages)) as IMessage[]
let chatHistoryMsgTxt = ''
for (const message of messages) {
if (message.type === 'apiMessage') {

View File

@ -0,0 +1,3 @@
<svg width="38" height="52" viewBox="0 0 38 52" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M0 12.383V41.035C0 41.392 0.190002 41.723 0.500002 41.901L17.095 51.481C17.25 51.571 17.422 51.616 17.595 51.616C17.768 51.616 17.94 51.571 18.095 51.481L37.279 40.409C37.589 40.23 37.779 39.9 37.779 39.543V10.887C37.779 10.53 37.589 10.199 37.279 10.021L31.168 6.49498C31.014 6.40598 30.841 6.36098 30.669 6.36098C30.496 6.36098 30.323 6.40498 30.169 6.49498L27.295 8.15398V4.83698C27.295 4.47998 27.105 4.14898 26.795 3.97098L20.684 0.441982C20.529 0.352982 20.357 0.307983 20.184 0.307983C20.011 0.307983 19.839 0.352982 19.684 0.441982L13.781 3.85098C13.471 4.02998 13.281 4.35998 13.281 4.71698V12.157L12.921 12.365V11.872C12.921 11.515 12.731 11.185 12.421 11.006L7.405 8.10698C7.25 8.01798 7.077 7.97298 6.905 7.97298C6.733 7.97298 6.56 8.01798 6.405 8.10698L0.501001 11.517C0.191001 11.695 0 12.025 0 12.383ZM1.5 13.248L5.519 15.566V23.294C5.519 23.304 5.524 23.313 5.525 23.323C5.526 23.345 5.529 23.366 5.534 23.388C5.538 23.411 5.544 23.433 5.552 23.455C5.559 23.476 5.567 23.496 5.577 23.516C5.582 23.525 5.581 23.535 5.587 23.544C5.591 23.551 5.6 23.554 5.604 23.561C5.617 23.581 5.63 23.6 5.646 23.618C5.669 23.644 5.695 23.665 5.724 23.686C5.741 23.698 5.751 23.716 5.77 23.727L11.236 26.886C11.243 26.89 11.252 26.888 11.26 26.892C11.328 26.927 11.402 26.952 11.484 26.952C11.566 26.952 11.641 26.928 11.709 26.893C11.728 26.883 11.743 26.87 11.761 26.858C11.812 26.823 11.855 26.781 11.89 26.731C11.898 26.719 11.911 26.715 11.919 26.702C11.924 26.693 11.924 26.682 11.929 26.674C11.944 26.644 11.951 26.613 11.96 26.58C11.969 26.547 11.978 26.515 11.98 26.481C11.98 26.471 11.986 26.462 11.986 26.452V20.138V19.302L17.096 22.251V49.749L1.5 40.747V13.248ZM35.778 10.887L30.879 13.718L25.768 10.766L26.544 10.317L30.668 7.93698L35.778 10.887ZM25.293 4.83598L20.391 7.66498L15.281 4.71598L20.183 1.88398L25.293 4.83598ZM10.92 11.872L6.019 14.701L2.001 12.383L6.904 9.55098L10.92 11.872ZM20.956 16.51L24.268 14.601V18.788C24.268 18.809 24.278 18.827 24.28 18.848C24.284 18.883 24.29 18.917 24.301 18.95C24.311 18.98 24.325 19.007 24.342 19.034C24.358 19.061 24.373 19.088 24.395 19.112C24.417 19.138 24.444 19.159 24.471 19.18C24.489 19.193 24.499 19.21 24.518 19.221L29.878 22.314L23.998 25.708V18.557C23.998 18.547 23.993 18.538 23.992 18.528C23.991 18.506 23.988 18.485 23.984 18.463C23.979 18.44 23.973 18.418 23.965 18.396C23.958 18.375 23.95 18.355 23.941 18.336C23.936 18.327 23.937 18.316 23.931 18.308C23.925 18.299 23.917 18.294 23.911 18.286C23.898 18.267 23.886 18.251 23.871 18.234C23.855 18.216 23.84 18.2 23.822 18.185C23.805 18.17 23.788 18.157 23.769 18.144C23.76 18.138 23.756 18.129 23.747 18.124L20.956 16.51ZM25.268 11.633L30.379 14.585V21.448L25.268 18.499V13.736V11.633ZM12.486 18.437L17.389 15.604L22.498 18.556L17.595 21.385L12.486 18.437ZM10.985 25.587L7.019 23.295L10.985 21.005V25.587ZM12.42 14.385L14.28 13.311L16.822 14.777L12.42 17.32V14.385ZM14.78 5.58198L19.891 8.53098V15.394L14.78 12.445V5.58198Z" fill="#213B41"/>
</svg>

After  |  Size: 3.0 KiB

View File

@ -0,0 +1,33 @@
import { INode, INodeParams } from '../../../src/Interface'
class LangWatch_Analytic implements INode {
label: string
name: string
version: number
description: string
type: string
icon: string
category: string
baseClasses: string[]
inputs?: INodeParams[]
credential: INodeParams
constructor() {
this.label = 'LangWatch'
this.name = 'LangWatch'
this.version = 1.0
this.type = 'LangWatch'
this.icon = 'LangWatch.svg'
this.category = 'Analytic'
this.baseClasses = [this.type]
this.inputs = []
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['langwatchApi']
}
}
}
module.exports = { nodeClass: LangWatch_Analytic }

View File

@ -220,6 +220,7 @@ const prepareChain = async (nodeData: INodeData, options: ICommonObject, session
let model = nodeData.inputs?.model as BaseChatModel
const memory = nodeData.inputs?.memory as FlowiseMemory
const memoryKey = memory.memoryKey ?? 'chat_history'
const prependMessages = options?.prependMessages
let messageContent: MessageContentImageUrl[] = []
if (llmSupportsVision(model)) {
@ -252,7 +253,7 @@ const prepareChain = async (nodeData: INodeData, options: ICommonObject, session
{
[inputKey]: (input: { input: string }) => input.input,
[memoryKey]: async () => {
const history = await memory.getChatMessages(sessionId, true, prependMessages)
return history
},
...promptVariables

View File

@ -175,6 +175,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
const rephrasePrompt = nodeData.inputs?.rephrasePrompt as string
const responsePrompt = nodeData.inputs?.responsePrompt as string
const returnSourceDocuments = nodeData.inputs?.returnSourceDocuments as boolean
const prependMessages = options?.prependMessages
const appDataSource = options.appDataSource as DataSource
const databaseEntities = options.databaseEntities as IDatabaseEntity
@ -210,7 +211,7 @@ class ConversationalRetrievalQAChain_Chains implements INode {
}
const answerChain = createChain(model, vectorStoreRetriever, rephrasePrompt, customResponsePrompt)
const history = ((await memory.getChatMessages(this.sessionId, false)) as IMessage[]) ?? []
const history = ((await memory.getChatMessages(this.sessionId, false, prependMessages)) as IMessage[]) ?? []
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const additionalCallback = await additionalCallbacks(nodeData, options)
@ -401,7 +402,11 @@ class BufferMemory extends FlowiseMemory implements MemoryMethods {
this.chatflowid = fields.chatflowid
}
async getChatMessages(overrideSessionId = '', returnBaseMessages = false): Promise<IMessage[] | BaseMessage[]> {
async getChatMessages(
overrideSessionId = '',
returnBaseMessages = false,
prependMessages?: IMessage[]
): Promise<IMessage[] | BaseMessage[]> {
if (!overrideSessionId) return []
const chatMessage = await this.appDataSource.getRepository(this.databaseEntities['ChatMessage']).find({
@ -414,6 +419,10 @@ class BufferMemory extends FlowiseMemory implements MemoryMethods {
}
})
if (prependMessages?.length) {
chatMessage.unshift(...prependMessages)
}
if (returnBaseMessages) {
return mapChatMessageToBaseMessage(chatMessage)
}
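
A small sketch of the prepend semantics introduced above, assuming IMessage objects with message and type fields: unshift places the prepended messages ahead of everything loaded from the database.

const stored = [{ message: 'Hi again', type: 'userMessage' }] // loaded from DB
const prependMessages = [{ message: 'Earlier context', type: 'userMessage' }]
stored.unshift(...prependMessages)
// stored: ['Earlier context', 'Hi again'] -- prepended history comes first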

View File

@ -110,7 +110,9 @@ class LLMChain_Chains implements INode {
})
const inputVariables = chain.prompt.inputVariables as string[] // ["product"]
promptValues = injectOutputParser(this.outputParser, chain, promptValues)
const res = await runPrediction(inputVariables, chain, input, promptValues, options, nodeData)
// Disable streaming because its not final chain
const disableStreaming = true
const res = await runPrediction(inputVariables, chain, input, promptValues, options, nodeData, disableStreaming)
// eslint-disable-next-line no-console
console.log('\x1b[92m\x1b[1m\n*****OUTPUT PREDICTION*****\n\x1b[0m\x1b[0m')
// eslint-disable-next-line no-console
@ -154,12 +156,13 @@ const runPrediction = async (
input: string,
promptValuesRaw: ICommonObject | undefined,
options: ICommonObject,
nodeData: INodeData
nodeData: INodeData,
disableStreaming?: boolean
) => {
const loggerHandler = new ConsoleCallbackHandler(options.logger)
const callbacks = await additionalCallbacks(nodeData, options)
const isStreaming = options.socketIO && options.socketIOClientId
const isStreaming = !disableStreaming && options.socketIO && options.socketIOClientId
const socketIO = isStreaming ? options.socketIO : undefined
const socketIOClientId = isStreaming ? options.socketIOClientId : ''
const moderations = nodeData.inputs?.inputModeration as Moderation[]
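
A quick sketch of the gating logic above, with hypothetical option values, showing that the flag wins over the presence of a socket:

const options = { socketIO: {}, socketIOClientId: 'abc' } // hypothetical values
const disableStreaming = true
const isStreaming = !disableStreaming && options.socketIO && options.socketIOClientId
// isStreaming is false: an intermediate LLMChain never streams, even with a live socket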

View File

@ -1,6 +1,6 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { OpenAI, ALL_AVAILABLE_OPENAI_MODELS } from 'llamaindex'
import { OpenAI } from 'llamaindex'
import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
interface AzureOpenAIConfig {
@ -10,6 +10,28 @@ interface AzureOpenAIConfig {
deploymentName?: string
}
const ALL_AZURE_OPENAI_CHAT_MODELS = {
'gpt-35-turbo': { contextWindow: 4096, openAIModel: 'gpt-3.5-turbo' },
'gpt-35-turbo-16k': {
contextWindow: 16384,
openAIModel: 'gpt-3.5-turbo-16k'
},
'gpt-4': { contextWindow: 8192, openAIModel: 'gpt-4' },
'gpt-4-32k': { contextWindow: 32768, openAIModel: 'gpt-4-32k' },
'gpt-4-turbo': {
contextWindow: 128000,
openAIModel: 'gpt-4-turbo'
},
'gpt-4-vision-preview': {
contextWindow: 128000,
openAIModel: 'gpt-4-vision-preview'
},
'gpt-4-1106-preview': {
contextWindow: 128000,
openAIModel: 'gpt-4-1106-preview'
}
}
class AzureChatOpenAI_LlamaIndex_ChatModels implements INode {
label: string
name: string
@ -90,7 +112,7 @@ class AzureChatOpenAI_LlamaIndex_ChatModels implements INode {
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const modelName = nodeData.inputs?.modelName as keyof typeof ALL_AVAILABLE_OPENAI_MODELS
const modelName = nodeData.inputs?.modelName as keyof typeof ALL_AZURE_OPENAI_CHAT_MODELS
const temperature = nodeData.inputs?.temperature as string
const maxTokens = nodeData.inputs?.maxTokens as string
const topP = nodeData.inputs?.topP as string
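
Keying modelName to the new map gives typed access to the per-deployment metadata; a sketch, assuming the map above is in scope:

const name: keyof typeof ALL_AZURE_OPENAI_CHAT_MODELS = 'gpt-4-turbo'
const { contextWindow, openAIModel } = ALL_AZURE_OPENAI_CHAT_MODELS[name]
// contextWindow === 128000, openAIModel === 'gpt-4-turbo'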

View File

@ -36,7 +36,7 @@ class ChatAnthropic_LlamaIndex_ChatModels implements INode {
{
label: 'Model Name',
name: 'modelName',
type: 'options',
type: 'asyncOptions',
loadMethod: 'listModels',
default: 'claude-3-haiku'
},

View File

@ -0,0 +1,80 @@
import { BaseCache } from '@langchain/core/caches'
import { ChatBaiduWenxin } from '@langchain/community/chat_models/baiduwenxin'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
class ChatBaiduWenxin_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatBaiduWenxin'
this.name = 'chatBaiduWenxin'
this.version = 1.0
this.type = 'ChatBaiduWenxin'
this.icon = 'baiduwenxin.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around BaiduWenxin Chat Endpoints'
this.baseClasses = [this.type, ...getBaseClasses(ChatBaiduWenxin)]
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['baiduApi']
}
this.inputs = [
{
label: 'Cache',
name: 'cache',
type: 'BaseCache',
optional: true
},
{
label: 'Model',
name: 'modelName',
type: 'string',
placeholder: 'ERNIE-Bot-turbo'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
step: 0.1,
default: 0.9,
optional: true
}
]
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const cache = nodeData.inputs?.cache as BaseCache
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const baiduApiKey = getCredentialParam('baiduApiKey', credentialData, nodeData)
const baiduSecretKey = getCredentialParam('baiduSecretKey', credentialData, nodeData)
const obj: Partial<ChatBaiduWenxin> = {
streaming: true,
baiduApiKey,
baiduSecretKey,
modelName,
temperature: temperature ? parseFloat(temperature) : undefined
}
if (cache) obj.cache = cache
const model = new ChatBaiduWenxin(obj)
return model
}
}
module.exports = { nodeClass: ChatBaiduWenxin_ChatModels }
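
A hedged usage sketch of the new node's underlying model; the env variable names are assumptions, and invoke() is the standard LangChain entry point:

import { ChatBaiduWenxin } from '@langchain/community/chat_models/baiduwenxin'

const model = new ChatBaiduWenxin({
    streaming: true,
    baiduApiKey: process.env.BAIDU_API_KEY, // assumption: keys supplied via env
    baiduSecretKey: process.env.BAIDU_SECRET_KEY,
    modelName: 'ERNIE-Bot-turbo',
    temperature: 0.9
})
// const res = await model.invoke('Hello') -- returns an AIMessage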

View File

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg xmlns="http://www.w3.org/2000/svg"
aria-label="Baidu" role="img"
viewBox="0 0 512 512"><rect
width="512" height="512"
rx="15%"
fill="#ffffff"/><path d="m131 251c41-9 35-58 34-68-2-17-21-45-48-43-33 3-37 50-37 50-5 22 10 70 51 61m76-82c22 0 40-26 40-58s-18-58-40-58c-23 0-41 26-41 58s18 58 41 58m96 4c31 4 50-28 54-53 4-24-16-52-37-57s-48 29-50 52c-3 27 3 54 33 58m120 41c0-12-10-47-46-47s-41 33-41 57c0 22 2 53 47 52s40-51 40-62m-46 102s-46-36-74-75c-36-57-89-34-106-5-18 29-45 48-49 53-4 4-56 33-44 84 11 52 52 51 52 51s30 3 65-5 65 2 65 2 81 27 104-25c22-53-13-80-13-80" fill="#2319dc"/><path d="m214 266v34h-28s-29 3-39 35c-3 21 4 34 5 36 1 3 10 19 33 23h53v-128zm-1 107h-21s-15-1-19-18c-3-7 0-16 1-20 1-3 6-11 17-14h22zm38-70v68s1 17 24 23h61v-91h-26v68h-25s-8-1-10-7v-61z" fill="#ffffff"/></svg>


View File

@ -0,0 +1,79 @@
import { BaseCache } from '@langchain/core/caches'
import { ChatFireworks } from '@langchain/community/chat_models/fireworks'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
class ChatFireworks_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatFireworks'
this.name = 'chatFireworks'
this.version = 1.0
this.type = 'ChatFireworks'
this.icon = 'Fireworks.png'
this.category = 'Chat Models'
this.description = 'Wrapper around Fireworks Chat Endpoints'
this.baseClasses = [this.type, ...getBaseClasses(ChatFireworks)]
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['fireworksApi']
}
this.inputs = [
{
label: 'Cache',
name: 'cache',
type: 'BaseCache',
optional: true
},
{
label: 'Model',
name: 'modelName',
type: 'string',
default: 'accounts/fireworks/models/llama-v2-13b-chat',
placeholder: 'accounts/fireworks/models/llama-v2-13b-chat'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
step: 0.1,
default: 0.9,
optional: true
}
]
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const cache = nodeData.inputs?.cache as BaseCache
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const fireworksApiKey = getCredentialParam('fireworksApiKey', credentialData, nodeData)
const obj: Partial<ChatFireworks> = {
fireworksApiKey,
model: modelName,
modelName,
temperature: temperature ? parseFloat(temperature) : undefined
}
if (cache) obj.cache = cache
const model = new ChatFireworks(obj)
return model
}
}
module.exports = { nodeClass: ChatFireworks_ChatModels }

Binary file not shown.


View File

@ -206,7 +206,8 @@ class LangchainChatGoogleGenerativeAI extends BaseChatModel implements GoogleGen
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
): Promise<ChatResult> {
const prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
let prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
prompt = checkIfEmptyContentAndSameRole(prompt)
// Handle streaming
if (this.streaming) {
@ -235,7 +236,9 @@ class LangchainChatGoogleGenerativeAI extends BaseChatModel implements GoogleGen
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
const prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
let prompt = convertBaseMessagesToContent(messages, this._isMultimodalModel)
prompt = checkIfEmptyContentAndSameRole(prompt)
//@ts-ignore
if (options.tools !== undefined && options.tools.length > 0) {
const result = await this._generateNonStreaming(prompt, options, runManager)
@ -333,7 +336,9 @@ function convertAuthorToRole(author: string) {
case 'tool':
return 'function'
default:
throw new Error(`Unknown / unsupported author: ${author}`)
// Instead of throwing, fall back to the 'model' role
// throw new Error(`Unknown / unsupported author: ${author}`)
return 'model'
}
}
@ -396,6 +401,25 @@ function convertMessageContentToParts(content: MessageContent, isMultimodalModel
})
}
/*
* Dedicated logic for the Multi-Agent Supervisor: drop messages whose content is empty and whose role repeats the previous message's role
*/
function checkIfEmptyContentAndSameRole(contents: Content[]) {
let prevRole = ''
const removedContents: Content[] = []
for (const content of contents) {
const role = content.role
if (content.parts.length && content.parts[0].text === '' && role === prevRole) {
removedContents.push(content)
}
prevRole = role
}
return contents.filter((content) => !removedContents.includes(content))
}
function convertBaseMessagesToContent(messages: BaseMessage[], isMultimodalModel: boolean) {
return messages.reduce<{
content: Content[]
@ -528,6 +552,13 @@ function zodToGeminiParameters(zodObj: any) {
const jsonSchema: any = zodToJsonSchema(zodObj)
// eslint-disable-next-line unused-imports/no-unused-vars
const { $schema, additionalProperties, ...rest } = jsonSchema
if (rest.properties) {
Object.keys(rest.properties).forEach((key) => {
if (rest.properties[key].enum?.length) {
rest.properties[key] = { type: 'string', format: 'enum', enum: rest.properties[key].enum }
}
})
}
return rest
}
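
A worked example of checkIfEmptyContentAndSameRole, with the Content shape assumed from the Google Generative AI SDK:

const contents = [
    { role: 'model', parts: [{ text: 'Hello' }] }, // kept: non-empty
    { role: 'model', parts: [{ text: '' }] }, // removed: empty AND same role as previous
    { role: 'user', parts: [{ text: '' }] } // kept: empty, but the role changed
]
// checkIfEmptyContentAndSameRole(contents) returns the first and third entries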

View File

@ -18,7 +18,7 @@ class ChatHuggingFace_ChatModels implements INode {
constructor() {
this.label = 'ChatHuggingFace'
this.name = 'chatHuggingFace'
this.version = 2.0
this.version = 3.0
this.type = 'ChatHuggingFace'
this.icon = 'HuggingFace.svg'
this.category = 'Chat Models'
@ -96,6 +96,16 @@ class ChatHuggingFace_ChatModels implements INode {
description: 'Frequency Penalty parameter may not apply to certain model. Please check available model parameters',
optional: true,
additionalParams: true
},
{
label: 'Stop Sequence',
name: 'stop',
type: 'string',
rows: 4,
placeholder: 'AI assistant:',
description: 'Sets the stop sequences to use. Use comma to separate different sequences.',
optional: true,
additionalParams: true
}
]
}
@ -109,6 +119,7 @@ class ChatHuggingFace_ChatModels implements INode {
const frequencyPenalty = nodeData.inputs?.frequencyPenalty as string
const endpoint = nodeData.inputs?.endpoint as string
const cache = nodeData.inputs?.cache as BaseCache
const stop = nodeData.inputs?.stop as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const huggingFaceApiKey = getCredentialParam('huggingFaceApiKey', credentialData, nodeData)
@ -123,7 +134,11 @@ class ChatHuggingFace_ChatModels implements INode {
if (topP) obj.topP = parseFloat(topP)
if (hfTopK) obj.topK = parseFloat(hfTopK)
if (frequencyPenalty) obj.frequencyPenalty = parseFloat(frequencyPenalty)
if (endpoint) obj.endpoint = endpoint
if (endpoint) obj.endpointUrl = endpoint
if (stop) {
const stopSequences = stop.split(',')
obj.stopSequences = stopSequences
}
const huggingFace = new HuggingFaceInference(obj)
if (cache) huggingFace.cache = cache
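
One caveat worth noting about the comma split above (a behavior sketch, not a change):

'A:,B:'.split(',') // ['A:', 'B:']
'A:, B:'.split(',') // ['A:', ' B:'] -- whitespace after the comma is preserved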

View File

@ -1,32 +1,19 @@
import { LLM, BaseLLMParams } from '@langchain/core/language_models/llms'
import { getEnvironmentVariable } from '../../../src/utils'
import { GenerationChunk } from '@langchain/core/outputs'
import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager'
export interface HFInput {
/** Model to use */
model: string
/** Sampling temperature to use */
temperature?: number
/**
* Maximum number of tokens to generate in the completion.
*/
maxTokens?: number
/** Total probability mass of tokens to consider at each step */
stopSequences?: string[]
topP?: number
/** Integer to define the top tokens considered within the sample operation to create new text. */
topK?: number
/** Penalizes repeated tokens according to frequency */
frequencyPenalty?: number
/** API key to use. */
apiKey?: string
/** Private endpoint to use. */
endpoint?: string
endpointUrl?: string
includeCredentials?: string | boolean
}
export class HuggingFaceInference extends LLM implements HFInput {
@ -40,6 +27,8 @@ export class HuggingFaceInference extends LLM implements HFInput {
temperature: number | undefined = undefined
stopSequences: string[] | undefined = undefined
maxTokens: number | undefined = undefined
topP: number | undefined = undefined
@ -50,7 +39,9 @@ export class HuggingFaceInference extends LLM implements HFInput {
apiKey: string | undefined = undefined
endpoint: string | undefined = undefined
endpointUrl: string | undefined = undefined
includeCredentials: string | boolean | undefined = undefined
constructor(fields?: Partial<HFInput> & BaseLLMParams) {
super(fields ?? {})
@ -58,11 +49,13 @@ export class HuggingFaceInference extends LLM implements HFInput {
this.model = fields?.model ?? this.model
this.temperature = fields?.temperature ?? this.temperature
this.maxTokens = fields?.maxTokens ?? this.maxTokens
this.stopSequences = fields?.stopSequences ?? this.stopSequences
this.topP = fields?.topP ?? this.topP
this.topK = fields?.topK ?? this.topK
this.frequencyPenalty = fields?.frequencyPenalty ?? this.frequencyPenalty
this.endpoint = fields?.endpoint ?? ''
this.apiKey = fields?.apiKey ?? getEnvironmentVariable('HUGGINGFACEHUB_API_KEY')
this.endpointUrl = fields?.endpointUrl
this.includeCredentials = fields?.includeCredentials
if (!this.apiKey) {
throw new Error(
'Please set an API key for HuggingFace Hub in the environment variable HUGGINGFACEHUB_API_KEY or in the apiKey field of the HuggingFaceInference constructor.'
@ -74,31 +67,65 @@ export class HuggingFaceInference extends LLM implements HFInput {
return 'hf'
}
/** @ignore */
async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
const { HfInference } = await HuggingFaceInference.imports()
const hf = new HfInference(this.apiKey)
const obj: any = {
invocationParams(options?: this['ParsedCallOptions']) {
return {
model: this.model,
parameters: {
// make it behave similarly to OpenAI, returning only the generated text
return_full_text: false,
temperature: this.temperature,
max_new_tokens: this.maxTokens,
stop: options?.stop ?? this.stopSequences,
top_p: this.topP,
top_k: this.topK,
repetition_penalty: this.frequencyPenalty
},
inputs: prompt
}
}
if (this.endpoint) {
hf.endpoint(this.endpoint)
} else {
obj.model = this.model
}
async *_streamResponseChunks(
prompt: string,
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
): AsyncGenerator<GenerationChunk> {
const hfi = await this._prepareHFInference()
const stream = await this.caller.call(async () =>
hfi.textGenerationStream({
...this.invocationParams(options),
inputs: prompt
})
)
for await (const chunk of stream) {
const token = chunk.token.text
yield new GenerationChunk({ text: token, generationInfo: chunk })
await runManager?.handleLLMNewToken(token ?? '')
// stream is done
if (chunk.generated_text)
yield new GenerationChunk({
text: '',
generationInfo: { finished: true }
})
}
const res = await this.caller.callWithOptions({ signal: options.signal }, hf.textGeneration.bind(hf), obj)
}
/** @ignore */
async _call(prompt: string, options: this['ParsedCallOptions']): Promise<string> {
const hfi = await this._prepareHFInference()
const args = { ...this.invocationParams(options), inputs: prompt }
const res = await this.caller.callWithOptions({ signal: options.signal }, hfi.textGeneration.bind(hfi), args)
return res.generated_text
}
/** @ignore */
private async _prepareHFInference() {
const { HfInference } = await HuggingFaceInference.imports()
const hfi = new HfInference(this.apiKey, {
includeCredentials: this.includeCredentials
})
return this.endpointUrl ? hfi.endpoint(this.endpointUrl) : hfi
}
/** @ignore */
static async imports(): Promise<{
HfInference: typeof import('@huggingface/inference').HfInference
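
A hedged usage sketch of the refactored class; the model id is a placeholder, and stream() is inherited from the LangChain LLM base class:

const llm = new HuggingFaceInference({
    model: 'mistralai/Mistral-7B-Instruct-v0.2', // hypothetical model id
    apiKey: process.env.HUGGINGFACEHUB_API_KEY,
    stopSequences: ['Human:']
    // endpointUrl: 'https://...' -- set this to route to a private Inference Endpoint instead
})
// const stream = await llm.stream('Tell me a joke')
// for await (const chunk of stream) process.stdout.write(chunk)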

View File

@ -0,0 +1,100 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { MODEL_TYPE, getModels } from '../../../src/modelLoader'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { ALL_AVAILABLE_MISTRAL_MODELS, MistralAI } from 'llamaindex'
class ChatMistral_LlamaIndex_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
tags: string[]
baseClasses: string[]
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatMistral'
this.name = 'chatMistral_LlamaIndex'
this.version = 1.0
this.type = 'ChatMistral'
this.icon = 'MistralAI.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around ChatMistral LLM specific for LlamaIndex'
this.baseClasses = [this.type, 'BaseChatModel_LlamaIndex', ...getBaseClasses(MistralAI)]
this.tags = ['LlamaIndex']
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['mistralAIApi']
}
this.inputs = [
{
label: 'Model Name',
name: 'modelName',
type: 'asyncOptions',
loadMethod: 'listModels',
default: 'mistral-tiny'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
step: 0.1,
default: 0.9,
optional: true
},
{
label: 'Max Tokens',
name: 'maxTokensToSample',
type: 'number',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Top P',
name: 'topP',
type: 'number',
step: 0.1,
optional: true,
additionalParams: true
}
]
}
//@ts-ignore
loadMethods = {
async listModels(): Promise<INodeOptionsValue[]> {
return await getModels(MODEL_TYPE.CHAT, 'chatMistral_LlamaIndex')
}
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as keyof typeof ALL_AVAILABLE_MISTRAL_MODELS
const maxTokensToSample = nodeData.inputs?.maxTokensToSample as string
const topP = nodeData.inputs?.topP as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const apiKey = getCredentialParam('mistralAIAPIKey', credentialData, nodeData)
const obj: Partial<MistralAI> = {
temperature: parseFloat(temperature),
model: modelName,
apiKey: apiKey
}
if (maxTokensToSample) obj.maxTokens = parseInt(maxTokensToSample, 10)
if (topP) obj.topP = parseFloat(topP)
const model = new MistralAI(obj)
return model
}
}
module.exports = { nodeClass: ChatMistral_LlamaIndex_ChatModels }

View File

@ -0,0 +1,221 @@
import { INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import { OllamaParams, Ollama } from 'llamaindex'
class ChatOllama_LlamaIndex_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
tags: string[]
baseClasses: string[]
inputs: INodeParams[]
constructor() {
this.label = 'ChatOllama'
this.name = 'chatOllama_LlamaIndex'
this.version = 1.0
this.type = 'ChatOllama'
this.icon = 'Ollama.svg'
this.category = 'Chat Models'
this.description = 'Wrapper around ChatOllama LLM specific for LlamaIndex'
this.baseClasses = [this.type, 'BaseChatModel_LlamaIndex', ...getBaseClasses(Ollama)]
this.tags = ['LlamaIndex']
this.inputs = [
{
label: 'Base URL',
name: 'baseUrl',
type: 'string',
default: 'http://localhost:11434'
},
{
label: 'Model Name',
name: 'modelName',
type: 'string',
placeholder: 'llama3'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
description:
'The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
default: 0.9,
optional: true
},
{
label: 'Top P',
name: 'topP',
type: 'number',
description:
'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Top K',
name: 'topK',
type: 'number',
description:
'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat',
name: 'mirostat',
type: 'number',
description:
'Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat ETA',
name: 'mirostatEta',
type: 'number',
description:
'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat TAU',
name: 'mirostatTau',
type: 'number',
description:
'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Context Window Size',
name: 'numCtx',
type: 'number',
description:
'Sets the size of the context window used to generate the next token. (Default: 2048) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Number of GPU',
name: 'numGpu',
type: 'number',
description:
'The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable metal support, 0 to disable. Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Number of Thread',
name: 'numThread',
type: 'number',
description:
'Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Repeat Last N',
name: 'repeatLastN',
type: 'number',
description:
'Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Repeat Penalty',
name: 'repeatPenalty',
type: 'number',
description:
'Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Stop Sequence',
name: 'stop',
type: 'string',
rows: 4,
placeholder: 'AI assistant:',
description:
'Sets the stop sequences to use. Use comma to separate different sequences. Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
optional: true,
additionalParams: true
},
{
label: 'Tail Free Sampling',
name: 'tfsZ',
type: 'number',
description:
'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
}
]
}
async init(nodeData: INodeData): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const baseUrl = nodeData.inputs?.baseUrl as string
const modelName = nodeData.inputs?.modelName as string
const topP = nodeData.inputs?.topP as string
const topK = nodeData.inputs?.topK as string
const mirostat = nodeData.inputs?.mirostat as string
const mirostatEta = nodeData.inputs?.mirostatEta as string
const mirostatTau = nodeData.inputs?.mirostatTau as string
const numCtx = nodeData.inputs?.numCtx as string
const numGpu = nodeData.inputs?.numGpu as string
const numThread = nodeData.inputs?.numThread as string
const repeatLastN = nodeData.inputs?.repeatLastN as string
const repeatPenalty = nodeData.inputs?.repeatPenalty as string
const stop = nodeData.inputs?.stop as string
const tfsZ = nodeData.inputs?.tfsZ as string
const obj: OllamaParams = {
model: modelName,
options: {},
config: {
host: baseUrl
}
}
if (temperature) obj.options.temperature = parseFloat(temperature)
if (topP) obj.options.top_p = parseFloat(topP)
if (topK) obj.options.top_k = parseFloat(topK)
if (mirostat) obj.options.mirostat = parseFloat(mirostat)
if (mirostatEta) obj.options.mirostat_eta = parseFloat(mirostatEta)
if (mirostatTau) obj.options.mirostat_tau = parseFloat(mirostatTau)
if (numCtx) obj.options.num_ctx = parseFloat(numCtx)
if (numGpu) obj.options.main_gpu = parseFloat(numGpu)
if (numThread) obj.options.num_thread = parseFloat(numThread)
if (repeatLastN) obj.options.repeat_last_n = parseFloat(repeatLastN)
if (repeatPenalty) obj.options.repeat_penalty = parseFloat(repeatPenalty)
if (tfsZ) obj.options.tfs_z = parseFloat(tfsZ)
if (stop) {
const stopSequences = stop.split(',')
obj.options.stop = stopSequences
}
const model = new Ollama(obj)
return model
}
}
module.exports = { nodeClass: ChatOllama_LlamaIndex_ChatModels }
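
For reference, a sketch of the object shape init() builds: UI inputs arrive as strings (hence the parseFloat calls), and the option keys are snake_cased to match Ollama's REST API. Sample values are placeholders:

const obj: OllamaParams = {
    model: 'llama3',
    options: { temperature: 0.9, top_p: 0.95, num_ctx: 4096, stop: ['AI assistant:'] },
    config: { host: 'http://localhost:11434' }
}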

View File

@ -0,0 +1,808 @@
import { HumanMessage, AIMessage, BaseMessage, AIMessageChunk, ChatMessage } from '@langchain/core/messages'
import { ChatResult } from '@langchain/core/outputs'
import { SimpleChatModel, BaseChatModel, BaseChatModelParams } from '@langchain/core/language_models/chat_models'
import { SystemMessagePromptTemplate } from '@langchain/core/prompts'
import { BaseCache } from '@langchain/core/caches'
import { type StructuredToolInterface } from '@langchain/core/tools'
import type { BaseFunctionCallOptions, BaseLanguageModelInput } from '@langchain/core/language_models/base'
import { convertToOpenAIFunction } from '@langchain/core/utils/function_calling'
import { RunnableInterface } from '@langchain/core/runnables'
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses } from '../../../src/utils'
import type { BaseLanguageModelCallOptions } from '@langchain/core/language_models/base'
import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager'
import { ChatGenerationChunk } from '@langchain/core/outputs'
import type { StringWithAutocomplete } from '@langchain/core/utils/types'
import { createOllamaChatStream, createOllamaGenerateStream, type OllamaInput, type OllamaMessage } from './utils'
const DEFAULT_TOOL_SYSTEM_TEMPLATE = `You have access to the following tools:
{tools}
You must always select one of the above tools and respond with only a JSON object matching the following schema:
{{
"tool": <name of the selected tool>,
"tool_input": <parameters for the selected tool, matching the tool's JSON schema>
}}`
class ChatOllamaFunction_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
baseClasses: string[]
credential: INodeParams
badge?: string
inputs: INodeParams[]
constructor() {
this.label = 'ChatOllama Function'
this.name = 'chatOllamaFunction'
this.version = 1.0
this.type = 'ChatOllamaFunction'
this.icon = 'Ollama.svg'
this.category = 'Chat Models'
this.description = 'Run open-source function-calling compatible LLM on Ollama'
this.baseClasses = [this.type, ...getBaseClasses(OllamaFunctions)]
this.inputs = [
{
label: 'Cache',
name: 'cache',
type: 'BaseCache',
optional: true
},
{
label: 'Base URL',
name: 'baseUrl',
type: 'string',
default: 'http://localhost:11434'
},
{
label: 'Model Name',
name: 'modelName',
type: 'string',
description: 'Only compatible with function-calling models such as mistral',
placeholder: 'mistral'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
description:
'The temperature of the model. Increasing the temperature will make the model answer more creatively. (Default: 0.8). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
default: 0.9,
optional: true
},
{
label: 'Tool System Prompt',
name: 'toolSystemPromptTemplate',
type: 'string',
rows: 4,
description: `Under the hood, Ollama's JSON mode is used to constrain output to JSON. The output JSON contains two keys, tool and tool_input, which we then parse to execute the tool. Because different models have different strengths, it may be helpful to pass in your own system prompt.`,
warning: `Prompt must always contain {tools} and instructions to respond with a JSON object with tool and tool_input fields`,
default: DEFAULT_TOOL_SYSTEM_TEMPLATE,
placeholder: DEFAULT_TOOL_SYSTEM_TEMPLATE,
additionalParams: true,
optional: true
},
{
label: 'Top P',
name: 'topP',
type: 'number',
description:
'Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Top K',
name: 'topK',
type: 'number',
description:
'Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. (Default: 40). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat',
name: 'mirostat',
type: 'number',
description:
'Enable Mirostat sampling for controlling perplexity. (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat ETA',
name: 'mirostatEta',
type: 'number',
description:
'Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. (Default: 0.1) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Mirostat TAU',
name: 'mirostatTau',
type: 'number',
description:
'Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. (Default: 5.0) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Context Window Size',
name: 'numCtx',
type: 'number',
description:
'Sets the size of the context window used to generate the next token. (Default: 2048) Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Number of GQA groups',
name: 'numGqa',
type: 'number',
description:
'The number of GQA groups in the transformer layer. Required for some models, for example it is 8 for llama2:70b. Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Number of GPU',
name: 'numGpu',
type: 'number',
description:
'The number of layers to send to the GPU(s). On macOS it defaults to 1 to enable metal support, 0 to disable. Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Number of Thread',
name: 'numThread',
type: 'number',
description:
'Sets the number of threads to use during computation. By default, Ollama will detect this for optimal performance. It is recommended to set this value to the number of physical CPU cores your system has (as opposed to the logical number of cores). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Repeat Last N',
name: 'repeatLastN',
type: 'number',
description:
'Sets how far back for the model to look back to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 1,
optional: true,
additionalParams: true
},
{
label: 'Repeat Penalty',
name: 'repeatPenalty',
type: 'number',
description:
'Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. (Default: 1.1). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
},
{
label: 'Stop Sequence',
name: 'stop',
type: 'string',
rows: 4,
placeholder: 'AI assistant:',
description:
'Sets the stop sequences to use. Use comma to separate different sequences. Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
optional: true,
additionalParams: true
},
{
label: 'Tail Free Sampling',
name: 'tfsZ',
type: 'number',
description:
'Tail free sampling is used to reduce the impact of less probable tokens from the output. A higher value (e.g., 2.0) will reduce the impact more, while a value of 1.0 disables this setting. (Default: 1). Refer to <a target="_blank" href="https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values">docs</a> for more details',
step: 0.1,
optional: true,
additionalParams: true
}
]
}
async init(nodeData: INodeData): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const baseUrl = nodeData.inputs?.baseUrl as string
const modelName = nodeData.inputs?.modelName as string
const topP = nodeData.inputs?.topP as string
const topK = nodeData.inputs?.topK as string
const mirostat = nodeData.inputs?.mirostat as string
const mirostatEta = nodeData.inputs?.mirostatEta as string
const mirostatTau = nodeData.inputs?.mirostatTau as string
const numCtx = nodeData.inputs?.numCtx as string
const numGqa = nodeData.inputs?.numGqa as string
const numGpu = nodeData.inputs?.numGpu as string
const numThread = nodeData.inputs?.numThread as string
const repeatLastN = nodeData.inputs?.repeatLastN as string
const repeatPenalty = nodeData.inputs?.repeatPenalty as string
const stop = nodeData.inputs?.stop as string
const tfsZ = nodeData.inputs?.tfsZ as string
const toolSystemPromptTemplate = nodeData.inputs?.toolSystemPromptTemplate as string
const cache = nodeData.inputs?.cache as BaseCache
const obj: OllamaFunctionsInput = {
baseUrl,
temperature: parseFloat(temperature),
model: modelName,
toolSystemPromptTemplate: toolSystemPromptTemplate ? toolSystemPromptTemplate : DEFAULT_TOOL_SYSTEM_TEMPLATE
}
if (topP) obj.topP = parseFloat(topP)
if (topK) obj.topK = parseFloat(topK)
if (mirostat) obj.mirostat = parseFloat(mirostat)
if (mirostatEta) obj.mirostatEta = parseFloat(mirostatEta)
if (mirostatTau) obj.mirostatTau = parseFloat(mirostatTau)
if (numCtx) obj.numCtx = parseFloat(numCtx)
if (numGqa) obj.numGqa = parseFloat(numGqa)
if (numGpu) obj.numGpu = parseFloat(numGpu)
if (numThread) obj.numThread = parseFloat(numThread)
if (repeatLastN) obj.repeatLastN = parseFloat(repeatLastN)
if (repeatPenalty) obj.repeatPenalty = parseFloat(repeatPenalty)
if (tfsZ) obj.tfsZ = parseFloat(tfsZ)
if (stop) {
const stopSequences = stop.split(',')
obj.stop = stopSequences
}
if (cache) obj.cache = cache
const model = new OllamaFunctions(obj)
return model
}
}
interface ChatOllamaFunctionsCallOptions extends BaseFunctionCallOptions {}
type OllamaFunctionsInput = Partial<ChatOllamaInput> &
BaseChatModelParams & {
llm?: OllamaChat
toolSystemPromptTemplate?: string
}
class OllamaFunctions extends BaseChatModel<ChatOllamaFunctionsCallOptions> {
llm: OllamaChat
fields?: OllamaFunctionsInput
toolSystemPromptTemplate: string = DEFAULT_TOOL_SYSTEM_TEMPLATE
protected defaultResponseFunction = {
name: '__conversational_response',
description: 'Respond conversationally if no other tools should be called for a given query.',
parameters: {
type: 'object',
properties: {
response: {
type: 'string',
description: 'Conversational response to the user.'
}
},
required: ['response']
}
}
static lc_name(): string {
return 'OllamaFunctions'
}
constructor(fields?: OllamaFunctionsInput) {
super(fields ?? {})
this.fields = fields
this.llm = fields?.llm ?? new OllamaChat({ ...fields, format: 'json' })
this.toolSystemPromptTemplate = fields?.toolSystemPromptTemplate ?? this.toolSystemPromptTemplate
}
invocationParams() {
return this.llm.invocationParams()
}
/** @ignore */
_identifyingParams() {
return this.llm._identifyingParams()
}
async _generate(
messages: BaseMessage[],
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun | undefined
): Promise<ChatResult> {
let functions = options.functions ?? []
if (options.function_call !== undefined) {
functions = functions.filter((fn) => fn.name === options.function_call?.name)
if (!functions.length) {
throw new Error(`If "function_call" is specified, you must also pass a matching function in "functions".`)
}
} else if (functions.length === 0) {
functions.push(this.defaultResponseFunction)
}
const systemPromptTemplate = SystemMessagePromptTemplate.fromTemplate(this.toolSystemPromptTemplate)
const systemMessage = await systemPromptTemplate.format({
tools: JSON.stringify(functions, null, 2)
})
let generatedMessages = [systemMessage, ...messages]
let isToolResponse = false
if (
messages.length > 3 &&
messages[messages.length - 1]._getType() === 'tool' &&
functions.length &&
messages[messages.length - 1].additional_kwargs?.name === functions[0].name
) {
const lastToolQuestion = messages[messages.length - 3].content
const lastToolResp = messages.pop()?.content
// Pop the message again to get rid of tool call message
messages.pop()?.content
const humanMessage = new HumanMessage({
content: `Given user question: ${lastToolQuestion} and answer: ${lastToolResp}\n\nWrite a natural language response`
})
generatedMessages = [...messages, humanMessage]
isToolResponse = true
this.llm = new OllamaChat({ ...this.fields })
}
const chatResult = await this.llm._generate(generatedMessages, options, runManager)
const chatGenerationContent = chatResult.generations[0].message.content
if (typeof chatGenerationContent !== 'string') {
throw new Error('OllamaFunctions does not support non-string output.')
}
if (isToolResponse) {
return {
generations: [
{
message: new AIMessage({
content: chatGenerationContent
}),
text: chatGenerationContent
}
]
}
}
let parsedChatResult
try {
parsedChatResult = JSON.parse(chatGenerationContent)
} catch (e) {
throw new Error(`"${this.llm.model}" did not respond with valid JSON. Please try again.`)
}
const calledToolName = parsedChatResult.tool
const calledToolArguments = parsedChatResult.tool_input
const calledTool = functions.find((fn) => fn.name === calledToolName)
if (calledTool === undefined) {
throw new Error(`Failed to parse a function call from ${this.llm.model} output: ${chatGenerationContent}`)
}
if (calledTool.name === this.defaultResponseFunction.name) {
return {
generations: [
{
message: new AIMessage({
content: calledToolArguments.response
}),
text: calledToolArguments.response
}
]
}
}
const responseMessageWithFunctions = new AIMessage({
content: '',
tool_calls: [
{
name: calledToolName,
args: calledToolArguments || {}
}
],
invalid_tool_calls: [],
additional_kwargs: {
function_call: {
name: calledToolName,
arguments: calledToolArguments ? JSON.stringify(calledToolArguments) : ''
},
tool_calls: [
{
id: Date.now().toString(),
type: 'function',
function: {
name: calledToolName,
arguments: calledToolArguments ? JSON.stringify(calledToolArguments) : ''
}
}
]
}
})
return {
generations: [{ message: responseMessageWithFunctions, text: '' }]
}
}
override bindTools(
tools: StructuredToolInterface[],
kwargs?: Partial<ICommonObject>
): RunnableInterface<BaseLanguageModelInput, AIMessageChunk, ICommonObject> {
return this.bind({
functions: tools.map((tool) => convertToOpenAIFunction(tool)),
...kwargs
} as Partial<ICommonObject>)
}
_llmType(): string {
return 'ollama_functions'
}
/** @ignore */
_combineLLMOutput() {
return []
}
}
export interface ChatOllamaInput extends OllamaInput {}
interface ChatOllamaCallOptions extends BaseLanguageModelCallOptions {}
class OllamaChat extends SimpleChatModel<ChatOllamaCallOptions> implements ChatOllamaInput {
static lc_name() {
return 'ChatOllama'
}
lc_serializable = true
model = 'llama2'
baseUrl = 'http://localhost:11434'
keepAlive = '5m'
embeddingOnly?: boolean
f16KV?: boolean
frequencyPenalty?: number
headers?: Record<string, string>
logitsAll?: boolean
lowVram?: boolean
mainGpu?: number
mirostat?: number
mirostatEta?: number
mirostatTau?: number
numBatch?: number
numCtx?: number
numGpu?: number
numGqa?: number
numKeep?: number
numPredict?: number
numThread?: number
penalizeNewline?: boolean
presencePenalty?: number
repeatLastN?: number
repeatPenalty?: number
ropeFrequencyBase?: number
ropeFrequencyScale?: number
temperature?: number
stop?: string[]
tfsZ?: number
topK?: number
topP?: number
typicalP?: number
useMLock?: boolean
useMMap?: boolean
vocabOnly?: boolean
format?: StringWithAutocomplete<'json'>
constructor(fields: OllamaInput & BaseChatModelParams) {
super(fields)
this.model = fields.model ?? this.model
this.baseUrl = fields.baseUrl?.endsWith('/') ? fields.baseUrl.slice(0, -1) : fields.baseUrl ?? this.baseUrl
this.keepAlive = fields.keepAlive ?? this.keepAlive
this.embeddingOnly = fields.embeddingOnly
this.f16KV = fields.f16KV
this.frequencyPenalty = fields.frequencyPenalty
this.headers = fields.headers
this.logitsAll = fields.logitsAll
this.lowVram = fields.lowVram
this.mainGpu = fields.mainGpu
this.mirostat = fields.mirostat
this.mirostatEta = fields.mirostatEta
this.mirostatTau = fields.mirostatTau
this.numBatch = fields.numBatch
this.numCtx = fields.numCtx
this.numGpu = fields.numGpu
this.numGqa = fields.numGqa
this.numKeep = fields.numKeep
this.numPredict = fields.numPredict
this.numThread = fields.numThread
this.penalizeNewline = fields.penalizeNewline
this.presencePenalty = fields.presencePenalty
this.repeatLastN = fields.repeatLastN
this.repeatPenalty = fields.repeatPenalty
this.ropeFrequencyBase = fields.ropeFrequencyBase
this.ropeFrequencyScale = fields.ropeFrequencyScale
this.temperature = fields.temperature
this.stop = fields.stop
this.tfsZ = fields.tfsZ
this.topK = fields.topK
this.topP = fields.topP
this.typicalP = fields.typicalP
this.useMLock = fields.useMLock
this.useMMap = fields.useMMap
this.vocabOnly = fields.vocabOnly
this.format = fields.format
}
_llmType() {
return 'ollama'
}
/**
* A method that returns the parameters for an Ollama API call. It
* includes model and options parameters.
* @param options Optional parsed call options.
* @returns An object containing the parameters for an Ollama API call.
*/
invocationParams(options?: this['ParsedCallOptions']) {
return {
model: this.model,
format: this.format,
keep_alive: this.keepAlive,
options: {
embedding_only: this.embeddingOnly,
f16_kv: this.f16KV,
frequency_penalty: this.frequencyPenalty,
logits_all: this.logitsAll,
low_vram: this.lowVram,
main_gpu: this.mainGpu,
mirostat: this.mirostat,
mirostat_eta: this.mirostatEta,
mirostat_tau: this.mirostatTau,
num_batch: this.numBatch,
num_ctx: this.numCtx,
num_gpu: this.numGpu,
num_gqa: this.numGqa,
num_keep: this.numKeep,
num_predict: this.numPredict,
num_thread: this.numThread,
penalize_newline: this.penalizeNewline,
presence_penalty: this.presencePenalty,
repeat_last_n: this.repeatLastN,
repeat_penalty: this.repeatPenalty,
rope_frequency_base: this.ropeFrequencyBase,
rope_frequency_scale: this.ropeFrequencyScale,
temperature: this.temperature,
stop: options?.stop ?? this.stop,
tfs_z: this.tfsZ,
top_k: this.topK,
top_p: this.topP,
typical_p: this.typicalP,
use_mlock: this.useMLock,
use_mmap: this.useMMap,
vocab_only: this.vocabOnly
}
}
}
_combineLLMOutput() {
return {}
}
/** @deprecated */
async *_streamResponseChunksLegacy(
input: BaseMessage[],
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
const stream = createOllamaGenerateStream(
this.baseUrl,
{
...this.invocationParams(options),
prompt: this._formatMessagesAsPrompt(input)
},
{
...options,
headers: this.headers
}
)
for await (const chunk of stream) {
if (!chunk.done) {
yield new ChatGenerationChunk({
text: chunk.response,
message: new AIMessageChunk({ content: chunk.response })
})
await runManager?.handleLLMNewToken(chunk.response ?? '')
} else {
yield new ChatGenerationChunk({
text: '',
message: new AIMessageChunk({ content: '' }),
generationInfo: {
model: chunk.model,
total_duration: chunk.total_duration,
load_duration: chunk.load_duration,
prompt_eval_count: chunk.prompt_eval_count,
prompt_eval_duration: chunk.prompt_eval_duration,
eval_count: chunk.eval_count,
eval_duration: chunk.eval_duration
}
})
}
}
}
async *_streamResponseChunks(
input: BaseMessage[],
options: this['ParsedCallOptions'],
runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
try {
const stream = await this.caller.call(async () =>
createOllamaChatStream(
this.baseUrl,
{
...this.invocationParams(options),
messages: this._convertMessagesToOllamaMessages(input)
},
{
...options,
headers: this.headers
}
)
)
for await (const chunk of stream) {
if (!chunk.done) {
yield new ChatGenerationChunk({
text: chunk.message.content,
message: new AIMessageChunk({ content: chunk.message.content })
})
await runManager?.handleLLMNewToken(chunk.message.content ?? '')
} else {
yield new ChatGenerationChunk({
text: '',
message: new AIMessageChunk({ content: '' }),
generationInfo: {
model: chunk.model,
total_duration: chunk.total_duration,
load_duration: chunk.load_duration,
prompt_eval_count: chunk.prompt_eval_count,
prompt_eval_duration: chunk.prompt_eval_duration,
eval_count: chunk.eval_count,
eval_duration: chunk.eval_duration
}
})
}
}
} catch (e: any) {
if (e.response?.status === 404) {
console.warn(
'[WARNING]: It seems you are using a legacy version of Ollama. Please upgrade to a newer version for better chat support.'
)
yield* this._streamResponseChunksLegacy(input, options, runManager)
} else {
throw e
}
}
}
protected _convertMessagesToOllamaMessages(messages: BaseMessage[]): OllamaMessage[] {
return messages.map((message) => {
let role
if (message._getType() === 'human') {
role = 'user'
} else if (message._getType() === 'ai' || message._getType() === 'tool') {
role = 'assistant'
} else if (message._getType() === 'system') {
role = 'system'
} else {
throw new Error(`Unsupported message type for Ollama: ${message._getType()}`)
}
let content = ''
const images = []
if (typeof message.content === 'string') {
content = message.content
} else {
for (const contentPart of message.content) {
if (contentPart.type === 'text') {
content = `${content}\n${contentPart.text}`
} else if (contentPart.type === 'image_url' && typeof contentPart.image_url === 'string') {
const imageUrlComponents = contentPart.image_url.split(',')
// Supports both data:image/jpeg;base64,<image> data URLs and raw base64 strings
images.push(imageUrlComponents[1] ?? imageUrlComponents[0])
} else {
throw new Error(
`Unsupported message content type. Must either have type "text" or type "image_url" with a string "image_url" field.`
)
}
}
}
return {
role,
content,
images
}
})
}
/** @deprecated */
protected _formatMessagesAsPrompt(messages: BaseMessage[]): string {
const formattedMessages = messages
.map((message) => {
let messageText
if (message._getType() === 'human') {
messageText = `[INST] ${message.content} [/INST]`
} else if (message._getType() === 'ai') {
messageText = message.content
} else if (message._getType() === 'system') {
messageText = `<<SYS>> ${message.content} <</SYS>>`
} else if (ChatMessage.isInstance(message)) {
messageText = `\n\n${message.role[0].toUpperCase()}${message.role.slice(1)}: ${message.content}`
} else {
console.warn(`Unsupported message type passed to Ollama: "${message._getType()}"`)
messageText = ''
}
return messageText
})
.join('\n')
return formattedMessages
}
/** @ignore */
async _call(messages: BaseMessage[], options: this['ParsedCallOptions'], runManager?: CallbackManagerForLLMRun): Promise<string> {
const chunks = []
for await (const chunk of this._streamResponseChunks(messages, options, runManager)) {
chunks.push(chunk.message.content)
}
return chunks.join('')
}
}
module.exports = { nodeClass: ChatOllamaFunction_ChatModels }
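
For clarity, the wire format the tool system prompt enforces, with a hypothetical tool name; _generate above JSON.parses this shape and converts it into an AIMessage carrying function_call/tool_calls metadata:

const raw = '{"tool": "get_weather", "tool_input": {"city": "Paris"}}' // hypothetical tool
const parsed = JSON.parse(raw)
// parsed.tool is matched against the bound functions; a miss throws,
// and the reserved __conversational_response tool returns plain text instead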

View File

@ -0,0 +1 @@
<svg width="32" height="32" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M7 27.5c0-1.273.388-2.388.97-3-.582-.612-.97-1.727-.97-3 0-1.293.4-2.422.996-3.028A4.818 4.818 0 0 1 7 15.5c0-2.485 1.79-4.5 4-4.5l.1.001a5.002 5.002 0 0 1 9.8 0L21 11c2.21 0 4 2.015 4 4.5 0 1.139-.376 2.18-.996 2.972.595.606.996 1.735.996 3.028 0 1.273-.389 2.388-.97 3 .581.612.97 1.727.97 3" stroke="#000" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/><path d="M9.5 11C9.167 8.5 9 4 11 4c1.5 0 1.667 2.667 2 4m9.5 3c.333-2.5.5-7-1.5-7-1.5 0-1.667 2.667-2 4" stroke="#000" stroke-width="2" stroke-linecap="round"/><circle cx="11" cy="15" r="1" fill="#000"/><circle cx="21" cy="15" r="1" fill="#000"/><path d="M13 17c0-2 2-2.5 3-2.5s3 .5 3 2.5-2 2.5-3 2.5-3-.5-3-2.5Z" stroke="#000" stroke-width="2" stroke-linecap="round"/></svg>


View File

@ -0,0 +1,185 @@
import { IterableReadableStream } from '@langchain/core/utils/stream'
import type { StringWithAutocomplete } from '@langchain/core/utils/types'
import { BaseLanguageModelCallOptions } from '@langchain/core/language_models/base'
export interface OllamaInput {
embeddingOnly?: boolean
f16KV?: boolean
frequencyPenalty?: number
headers?: Record<string, string>
keepAlive?: string
logitsAll?: boolean
lowVram?: boolean
mainGpu?: number
model?: string
baseUrl?: string
mirostat?: number
mirostatEta?: number
mirostatTau?: number
numBatch?: number
numCtx?: number
numGpu?: number
numGqa?: number
numKeep?: number
numPredict?: number
numThread?: number
penalizeNewline?: boolean
presencePenalty?: number
repeatLastN?: number
repeatPenalty?: number
ropeFrequencyBase?: number
ropeFrequencyScale?: number
temperature?: number
stop?: string[]
tfsZ?: number
topK?: number
topP?: number
typicalP?: number
useMLock?: boolean
useMMap?: boolean
vocabOnly?: boolean
format?: StringWithAutocomplete<'json'>
}
export interface OllamaRequestParams {
model: string
format?: StringWithAutocomplete<'json'>
images?: string[]
options: {
embedding_only?: boolean
f16_kv?: boolean
frequency_penalty?: number
logits_all?: boolean
low_vram?: boolean
main_gpu?: number
mirostat?: number
mirostat_eta?: number
mirostat_tau?: number
num_batch?: number
num_ctx?: number
num_gpu?: number
num_gqa?: number
num_keep?: number
num_thread?: number
num_predict?: number
penalize_newline?: boolean
presence_penalty?: number
repeat_last_n?: number
repeat_penalty?: number
rope_frequency_base?: number
rope_frequency_scale?: number
temperature?: number
stop?: string[]
tfs_z?: number
top_k?: number
top_p?: number
typical_p?: number
use_mlock?: boolean
use_mmap?: boolean
vocab_only?: boolean
}
}
export type OllamaMessage = {
role: StringWithAutocomplete<'user' | 'assistant' | 'system'>
content: string
images?: string[]
}
export interface OllamaGenerateRequestParams extends OllamaRequestParams {
prompt: string
}
export interface OllamaChatRequestParams extends OllamaRequestParams {
messages: OllamaMessage[]
}
export type BaseOllamaGenerationChunk = {
model: string
created_at: string
done: boolean
total_duration?: number
load_duration?: number
prompt_eval_count?: number
prompt_eval_duration?: number
eval_count?: number
eval_duration?: number
}
export type OllamaGenerationChunk = BaseOllamaGenerationChunk & {
response: string
}
export type OllamaChatGenerationChunk = BaseOllamaGenerationChunk & {
message: OllamaMessage
}
export type OllamaCallOptions = BaseLanguageModelCallOptions & {
headers?: Record<string, string>
}
async function* createOllamaStream(url: string, params: OllamaRequestParams, options: OllamaCallOptions) {
let formattedUrl = url
if (formattedUrl.startsWith('http://localhost:')) {
// Node 18 has issues with resolving "localhost"
// See https://github.com/node-fetch/node-fetch/issues/1624
formattedUrl = formattedUrl.replace('http://localhost:', 'http://127.0.0.1:')
}
const response = await fetch(formattedUrl, {
method: 'POST',
body: JSON.stringify(params),
headers: {
'Content-Type': 'application/json',
...options.headers
},
signal: options.signal
})
if (!response.ok) {
let error
const responseText = await response.text()
try {
const json = JSON.parse(responseText)
error = new Error(`Ollama call failed with status code ${response.status}: ${json.error}`)
} catch (e) {
error = new Error(`Ollama call failed with status code ${response.status}: ${responseText}`)
}
;(error as any).response = response
throw error
}
if (!response.body) {
throw new Error('Could not begin Ollama stream. Please check the given URL and try again.')
}
const stream = IterableReadableStream.fromReadableStream(response.body)
const decoder = new TextDecoder()
let extra = ''
for await (const chunk of stream) {
const decoded = extra + decoder.decode(chunk)
const lines = decoded.split('\n')
extra = lines.pop() || ''
for (const line of lines) {
try {
yield JSON.parse(line)
} catch (e) {
console.warn(`Received a non-JSON parseable chunk: ${line}`)
}
}
}
}
export async function* createOllamaGenerateStream(
baseUrl: string,
params: OllamaGenerateRequestParams,
options: OllamaCallOptions
): AsyncGenerator<OllamaGenerationChunk> {
yield* createOllamaStream(`${baseUrl}/api/generate`, params, options)
}
export async function* createOllamaChatStream(
baseUrl: string,
params: OllamaChatRequestParams,
options: OllamaCallOptions
): AsyncGenerator<OllamaChatGenerationChunk> {
yield* createOllamaStream(`${baseUrl}/api/chat`, params, options)
}
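Taken together, these helpers implement newline-delimited JSON (NDJSON) streaming: each fetch chunk is decoded, split on newlines, and any trailing partial line is buffered in `extra` until the next chunk completes it. A minimal consumer sketch of createOllamaChatStream, assuming a local Ollama server at http://127.0.0.1:11434 with a pulled llama3 model (both assumptions, not part of this diff):

async function streamOllamaReply(prompt: string): Promise<string> {
    const stream = createOllamaChatStream(
        'http://127.0.0.1:11434',
        {
            model: 'llama3',
            messages: [{ role: 'user', content: prompt }],
            options: { temperature: 0.7 }
        },
        {}
    )
    let reply = ''
    for await (const chunk of stream) {
        // Each parsed NDJSON line yields one incremental delta until done is true
        if (!chunk.done) reply += chunk.message.content
    }
    return reply
}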


@ -1,6 +1,6 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { OpenAI, ALL_AVAILABLE_OPENAI_MODELS } from 'llamaindex'
import { OpenAI, OpenAISession, ALL_AVAILABLE_OPENAI_MODELS } from 'llamaindex'
import { getModels, MODEL_TYPE } from '../../../src/modelLoader'
class ChatOpenAI_LlamaIndex_LLMs implements INode {
@ -115,8 +115,9 @@ class ChatOpenAI_LlamaIndex_LLMs implements INode {
if (maxTokens) obj.maxTokens = parseInt(maxTokens, 10)
if (topP) obj.topP = parseFloat(topP)
if (timeout) obj.timeout = parseInt(timeout, 10)
const openai = new OpenAISession(obj)
const model = new OpenAI(obj)
const model = new OpenAI({ ...obj, session: openai })
return model
}
}


@ -0,0 +1,71 @@
import { ICommonObject, INode, INodeData, INodeParams } from '../../../src/Interface'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { TogetherLLM, OpenAI } from 'llamaindex'
class ChatTogetherAI_LlamaIndex_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
tags: string[]
baseClasses: string[]
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatTogetherAI'
this.name = 'chatTogetherAI_LlamaIndex'
this.version = 1.0
this.type = 'ChatTogetherAI'
this.icon = 'togetherai.png'
this.category = 'Chat Models'
this.description = 'Wrapper around ChatTogetherAI LLM specific for LlamaIndex'
this.baseClasses = [this.type, 'BaseChatModel_LlamaIndex', ...getBaseClasses(TogetherLLM)]
this.tags = ['LlamaIndex']
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['togetherAIApi']
}
this.inputs = [
{
label: 'Model Name',
name: 'modelName',
type: 'string',
placeholder: 'mixtral-8x7b-32768',
description: 'Refer to <a target="_blank" href="https://docs.together.ai/docs/inference-models">models</a> page'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
step: 0.1,
default: 0.9,
optional: true
}
]
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const togetherAIApiKey = getCredentialParam('togetherAIApiKey', credentialData, nodeData)
const obj: Partial<OpenAI> = {
temperature: parseFloat(temperature),
model: modelName,
apiKey: togetherAIApiKey
}
const model = new TogetherLLM(obj)
return model
}
}
module.exports = { nodeClass: ChatTogetherAI_LlamaIndex_ChatModels }


@ -0,0 +1,80 @@
import { ICommonObject, INode, INodeData, INodeOptionsValue, INodeParams } from '../../../src/Interface'
import { MODEL_TYPE, getModels } from '../../../src/modelLoader'
import { getBaseClasses, getCredentialData, getCredentialParam } from '../../../src/utils'
import { Groq, OpenAI } from 'llamaindex'
class ChatGroq_LlamaIndex_ChatModels implements INode {
label: string
name: string
version: number
type: string
icon: string
category: string
description: string
tags: string[]
baseClasses: string[]
credential: INodeParams
inputs: INodeParams[]
constructor() {
this.label = 'ChatGroq'
this.name = 'chatGroq_LlamaIndex'
this.version = 1.0
this.type = 'ChatGroq'
this.icon = 'groq.png'
this.category = 'Chat Models'
this.description = 'Wrapper around Groq LLM specific for LlamaIndex'
this.baseClasses = [this.type, 'BaseChatModel_LlamaIndex', ...getBaseClasses(Groq)]
this.tags = ['LlamaIndex']
this.credential = {
label: 'Connect Credential',
name: 'credential',
type: 'credential',
credentialNames: ['groqApi'],
optional: true
}
this.inputs = [
{
label: 'Model Name',
name: 'modelName',
type: 'asyncOptions',
loadMethod: 'listModels',
placeholder: 'llama3-70b-8192'
},
{
label: 'Temperature',
name: 'temperature',
type: 'number',
step: 0.1,
default: 0.9,
optional: true
}
]
}
//@ts-ignore
loadMethods = {
async listModels(): Promise<INodeOptionsValue[]> {
return await getModels(MODEL_TYPE.CHAT, 'groqChat')
}
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const temperature = nodeData.inputs?.temperature as string
const modelName = nodeData.inputs?.modelName as string
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const groqApiKey = getCredentialParam('groqApiKey', credentialData, nodeData)
const obj: Partial<OpenAI> = {
temperature: parseFloat(temperature),
model: modelName,
apiKey: groqApiKey
}
const model = new Groq(obj)
return model
}
}
module.exports = { nodeClass: ChatGroq_LlamaIndex_ChatModels }

Binary file not shown (image updated: 1.4 KiB → 1.7 KiB).


@ -82,7 +82,7 @@ class API_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -132,23 +132,31 @@ class API_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}
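This omit-keys branching recurs verbatim in every loader below: a literal '*' drops all default metadata and keeps only the user's Additional Metadata, while any other value merges both and strips just the listed keys via lodash's omit. A condensed sketch of the shared logic (the helper name and sample values are illustrative, not from this diff):

import { omit } from 'lodash'

function resolveMetadata(
    docMetadata: Record<string, any>,
    parsedMetadata: Record<string, any>,
    _omitMetadataKeys: string,
    omitMetadataKeys: string[]
): Record<string, any> {
    // '*' discards every default key; only Additional Metadata survives
    if (_omitMetadataKeys === '*') return { ...parsedMetadata }
    // Otherwise merge defaults with additions, then strip the listed keys
    // (dot paths like 'key3.nestedKey1' remove nested fields)
    return omit({ ...docMetadata, ...parsedMetadata }, omitMetadataKeys)
}

// resolveMetadata({ source: 'a.pdf' }, { team: 'docs' }, '*', []) => { team: 'docs' }
// resolveMetadata({ source: 'a.pdf' }, {}, 'source', ['source']) => {}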


@ -107,7 +107,7 @@ class Airtable_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -162,23 +162,31 @@ class Airtable_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -106,7 +106,7 @@ class ApifyWebsiteContentCrawler_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -174,23 +174,31 @@ class ApifyWebsiteContentCrawler_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -93,7 +93,7 @@ class Cheerio_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -131,7 +131,11 @@ class Cheerio_DocumentLoaders implements INode {
async function cheerioLoader(url: string): Promise<any> {
try {
let docs = []
let docs: IDocument[] = []
if (url.endsWith('.pdf')) {
if (process.env.DEBUG === 'true') options.logger.info(`CheerioWebBaseLoader does not support PDF files: ${url}`)
return docs
}
const loader = new CheerioWebBaseLoader(url, params)
if (textSplitter) {
docs = await loader.loadAndSplit(textSplitter)
@ -141,6 +145,7 @@ class Cheerio_DocumentLoaders implements INode {
return docs
} catch (err) {
if (process.env.DEBUG === 'true') options.logger.error(`error in CheerioWebBaseLoader: ${err.message}, on page: ${url}`)
return []
}
}
@ -178,23 +183,31 @@ class Cheerio_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -73,7 +73,7 @@ class Confluence_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -128,23 +128,31 @@ class Confluence_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -1,8 +1,8 @@
import { omit } from 'lodash'
import { ICommonObject, IDocument, INode, INodeData, INodeParams } from '../../../src/Interface'
import { ICommonObject, IDocument, INode, INodeData, INodeOutputsValue, INodeParams } from '../../../src/Interface'
import { TextSplitter } from 'langchain/text_splitter'
import { CSVLoader } from 'langchain/document_loaders/fs/csv'
import { getFileFromStorage } from '../../../src'
import { getFileFromStorage, handleEscapeCharacters } from '../../../src'
class Csv_DocumentLoaders implements INode {
label: string
@ -14,11 +14,12 @@ class Csv_DocumentLoaders implements INode {
category: string
baseClasses: string[]
inputs: INodeParams[]
outputs: INodeOutputsValue[]
constructor() {
this.label = 'Csv File'
this.name = 'csvFile'
this.version = 1.0
this.version = 2.0
this.type = 'Document'
this.icon = 'csv.svg'
this.category = 'Document Loaders'
@ -59,12 +60,26 @@ class Csv_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
}
]
this.outputs = [
{
label: 'Document',
name: 'document',
description: 'Array of document objects containing metadata and pageContent',
baseClasses: [...this.baseClasses, 'json']
},
{
label: 'Text',
name: 'text',
description: 'Concatenated string from pageContent of documents',
baseClasses: ['string', 'json']
}
]
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
@ -72,6 +87,7 @@ class Csv_DocumentLoaders implements INode {
const csvFileBase64 = nodeData.inputs?.csvFile as string
const columnName = nodeData.inputs?.columnName as string
const metadata = nodeData.inputs?.metadata
const output = nodeData.outputs?.output as string
const _omitMetadataKeys = nodeData.inputs?.omitMetadataKeys as string
let omitMetadataKeys: string[] = []
@ -99,7 +115,7 @@ class Csv_DocumentLoaders implements INode {
if (textSplitter) {
docs.push(...(await loader.loadAndSplit(textSplitter)))
} else {
docs.push(...(await loader.loadAndSplit(textSplitter)))
docs.push(...(await loader.load()))
}
}
} else {
@ -128,27 +144,43 @@ class Csv_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}
return docs
if (output === 'document') {
return docs
} else {
let finaltext = ''
for (const doc of docs) {
finaltext += `${doc.pageContent}\n`
}
return handleEscapeCharacters(finaltext, false)
}
}
}
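The new Document/Text output pair added here (and mirrored in the Doc Store loader below) either returns the documents as-is or concatenates their pageContent into one string before passing it through handleEscapeCharacters from the package's src. A sketch of just that branch; the reading that the false flag means "unescape for plain-text output" is an assumption, not stated in this diff:

import { handleEscapeCharacters } from '../../../src'

// Sketch of the output branching only, not the node's full init()
function toOutput(docs: { pageContent: string }[], output: string): any {
    if (output === 'document') return docs
    let finaltext = ''
    for (const doc of docs) {
        finaltext += `${doc.pageContent}\n`
    }
    // Assumed contract: false requests the unescaped, display-ready form
    return handleEscapeCharacters(finaltext, false)
}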


@ -23,7 +23,6 @@ class CustomDocumentLoader_DocumentLoaders implements INode {
this.type = 'Document'
this.icon = 'customDocLoader.svg'
this.category = 'Document Loaders'
this.badge = 'NEW'
this.description = `Custom function for loading documents`
this.baseClasses = [this.type]
this.inputs = [


@ -1,6 +1,7 @@
import { ICommonObject, IDatabaseEntity, INode, INodeData, INodeOptionsValue, INodeOutputsValue, INodeParams } from '../../../src/Interface'
import { DataSource } from 'typeorm'
import { Document } from '@langchain/core/documents'
import { handleEscapeCharacters } from '../../../src'
class DocStore_DocumentLoaders implements INode {
label: string
@ -21,7 +22,6 @@ class DocStore_DocumentLoaders implements INode {
this.version = 1.0
this.type = 'Document'
this.icon = 'dstore.svg'
this.badge = 'NEW'
this.category = 'Document Loaders'
this.description = `Load data from pre-configured document stores`
this.baseClasses = [this.type]
@ -83,12 +83,22 @@ class DocStore_DocumentLoaders implements INode {
const chunks = await appDataSource
.getRepository(databaseEntities['DocumentStoreFileChunk'])
.find({ where: { storeId: selectedStore } })
const output = nodeData.outputs?.output as string
const finalDocs = []
for (const chunk of chunks) {
finalDocs.push(new Document({ pageContent: chunk.pageContent, metadata: JSON.parse(chunk.metadata) }))
}
return finalDocs
if (output === 'document') {
return finalDocs
} else {
let finaltext = ''
for (const doc of finalDocs) {
finaltext += `${doc.pageContent}\n`
}
return handleEscapeCharacters(finaltext, false)
}
}
}


@ -51,7 +51,7 @@ class Docx_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -119,23 +119,31 @@ class Docx_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -74,7 +74,7 @@ class Figma_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -111,23 +111,31 @@ class Figma_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -0,0 +1,378 @@
import { TextSplitter } from 'langchain/text_splitter'
import { Document, DocumentInterface } from '@langchain/core/documents'
import { BaseDocumentLoader } from 'langchain/document_loaders/base'
import { INode, INodeData, INodeParams, ICommonObject } from '../../../src/Interface'
import { getCredentialData, getCredentialParam } from '../../../src/utils'
import axios, { AxiosResponse, AxiosRequestHeaders } from 'axios'
import { z } from 'zod'
import { zodToJsonSchema } from 'zod-to-json-schema'
// FirecrawlApp interfaces
interface FirecrawlAppConfig {
apiKey?: string | null
apiUrl?: string | null
}
interface FirecrawlDocumentMetadata {
title?: string
description?: string
language?: string
// ... (other metadata fields)
[key: string]: any
}
interface FirecrawlDocument {
id?: string
url?: string
content: string
markdown?: string
html?: string
llm_extraction?: Record<string, any>
createdAt?: Date
updatedAt?: Date
type?: string
metadata: FirecrawlDocumentMetadata
childrenLinks?: string[]
provider?: string
warning?: string
index?: number
}
interface ScrapeResponse {
success: boolean
data?: FirecrawlDocument
error?: string
}
interface CrawlResponse {
success: boolean
jobId?: string
data?: FirecrawlDocument[]
error?: string
}
interface Params {
[key: string]: any
extractorOptions?: {
extractionSchema: z.ZodSchema | any
mode?: 'llm-extraction'
extractionPrompt?: string
}
}
// FirecrawlApp class (not exported)
class FirecrawlApp {
private apiKey: string
private apiUrl: string
constructor({ apiKey = null, apiUrl = null }: FirecrawlAppConfig) {
this.apiKey = apiKey || ''
this.apiUrl = apiUrl || 'https://api.firecrawl.dev'
if (!this.apiKey) {
throw new Error('No API key provided')
}
}
async scrapeUrl(url: string, params: Params | null = null): Promise<ScrapeResponse> {
const headers = this.prepareHeaders()
let jsonData: Params = { url, ...params }
if (params?.extractorOptions?.extractionSchema) {
let schema = params.extractorOptions.extractionSchema
if (schema instanceof z.ZodSchema) {
schema = zodToJsonSchema(schema)
}
jsonData = {
...jsonData,
extractorOptions: {
...params.extractorOptions,
extractionSchema: schema,
mode: params.extractorOptions.mode || 'llm-extraction'
}
}
}
try {
const response: AxiosResponse = await this.postRequest(this.apiUrl + '/v0/scrape', jsonData, headers)
if (response.status === 200) {
const responseData = response.data
if (responseData.success) {
return responseData
} else {
throw new Error(`Failed to scrape URL. Error: ${responseData.error}`)
}
} else {
this.handleError(response, 'scrape URL')
}
} catch (error: any) {
throw new Error(error.message)
}
return { success: false, error: 'Internal server error.' }
}
async crawlUrl(
url: string,
params: Params | null = null,
waitUntilDone: boolean = true,
pollInterval: number = 2,
idempotencyKey?: string
): Promise<CrawlResponse | any> {
const headers = this.prepareHeaders(idempotencyKey)
let jsonData: Params = { url, ...params }
try {
const response: AxiosResponse = await this.postRequest(this.apiUrl + '/v0/crawl', jsonData, headers)
if (response.status === 200) {
const jobId: string = response.data.jobId
if (waitUntilDone) {
return this.monitorJobStatus(jobId, headers, pollInterval)
} else {
return { success: true, jobId }
}
} else {
this.handleError(response, 'start crawl job')
}
} catch (error: any) {
throw new Error(error.message)
}
return { success: false, error: 'Internal server error.' }
}
private prepareHeaders(idempotencyKey?: string): AxiosRequestHeaders {
return {
'Content-Type': 'application/json',
Authorization: `Bearer ${this.apiKey}`,
...(idempotencyKey ? { 'x-idempotency-key': idempotencyKey } : {})
} as AxiosRequestHeaders & { 'x-idempotency-key'?: string }
}
private postRequest(url: string, data: Params, headers: AxiosRequestHeaders): Promise<AxiosResponse> {
return axios.post(url, data, { headers })
}
private getRequest(url: string, headers: AxiosRequestHeaders): Promise<AxiosResponse> {
return axios.get(url, { headers })
}
private async monitorJobStatus(jobId: string, headers: AxiosRequestHeaders, checkInterval: number): Promise<any> {
let isJobCompleted = false
while (!isJobCompleted) {
const statusResponse: AxiosResponse = await this.getRequest(this.apiUrl + `/v0/crawl/status/${jobId}`, headers)
if (statusResponse.status === 200) {
const statusData = statusResponse.data
switch (statusData.status) {
case 'completed':
isJobCompleted = true
if ('data' in statusData) {
return statusData.data
} else {
throw new Error('Crawl job completed but no data was returned')
}
case 'active':
case 'paused':
case 'pending':
case 'queued':
await new Promise((resolve) => setTimeout(resolve, Math.max(checkInterval, 2) * 1000))
break
default:
throw new Error(`Crawl job failed or was stopped. Status: ${statusData.status}`)
}
} else {
this.handleError(statusResponse, 'check crawl status')
}
}
}
private handleError(response: AxiosResponse, action: string): void {
if ([402, 408, 409, 500].includes(response.status)) {
const errorMessage: string = response.data.error || 'Unknown error occurred'
throw new Error(`Failed to ${action}. Status code: ${response.status}. Error: ${errorMessage}`)
} else {
throw new Error(`Unexpected error occurred while trying to ${action}. Status code: ${response.status}`)
}
}
}
// FireCrawl Loader
interface FirecrawlLoaderParameters {
url: string
apiKey?: string
mode?: 'crawl' | 'scrape'
params?: Record<string, unknown>
}
class FireCrawlLoader extends BaseDocumentLoader {
private apiKey: string
private url: string
private mode: 'crawl' | 'scrape'
private params?: Record<string, unknown>
constructor(loaderParams: FirecrawlLoaderParameters) {
super()
const { apiKey, url, mode = 'crawl', params } = loaderParams
if (!apiKey) {
throw new Error('Firecrawl API key not set. You can set it as FIRECRAWL_API_KEY in your .env file, or pass it to Firecrawl.')
}
this.apiKey = apiKey
this.url = url
this.mode = mode
this.params = params
}
public async load(): Promise<DocumentInterface[]> {
const app = new FirecrawlApp({ apiKey: this.apiKey })
let firecrawlDocs: FirecrawlDocument[]
if (this.mode === 'scrape') {
const response = await app.scrapeUrl(this.url, this.params)
if (!response.success) {
throw new Error(`Firecrawl: Failed to scrape URL. Error: ${response.error}`)
}
firecrawlDocs = [response.data as FirecrawlDocument]
} else if (this.mode === 'crawl') {
const response = await app.crawlUrl(this.url, this.params, true)
firecrawlDocs = response as FirecrawlDocument[]
} else {
throw new Error(`Unrecognized mode '${this.mode}'. Expected one of 'crawl', 'scrape'.`)
}
return firecrawlDocs.map(
(doc) =>
new Document({
pageContent: doc.markdown || '',
metadata: doc.metadata || {}
})
)
}
}
// Flowise Node Class
class FireCrawl_DocumentLoaders implements INode {
label: string
name: string
description: string
type: string
icon: string
version: number
category: string
baseClasses: string[]
inputs: INodeParams[]
credential: INodeParams
constructor() {
this.label = 'FireCrawl'
this.name = 'fireCrawl'
this.type = 'Document'
this.icon = 'firecrawl.png'
this.version = 1.0
this.category = 'Document Loaders'
this.description = 'Load data from URL using FireCrawl'
this.baseClasses = [this.type]
this.inputs = [
{
label: 'Text Splitter',
name: 'textSplitter',
type: 'TextSplitter',
optional: true
},
{
label: 'URLs',
name: 'url',
type: 'string',
description: 'URL to be crawled/scraped',
placeholder: 'https://docs.flowiseai.com'
},
{
label: 'Crawler type',
type: 'options',
name: 'crawlerType',
options: [
{
label: 'Crawl',
name: 'crawl',
description: 'Crawl a URL and all accessible subpages'
},
{
label: 'Scrape',
name: 'scrape',
description: 'Scrape a URL and get its content'
}
],
default: 'crawl'
}
// ... (other input parameters)
]
this.credential = {
label: 'FireCrawl API',
name: 'credential',
type: 'credential',
credentialNames: ['fireCrawlApi']
}
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
const metadata = nodeData.inputs?.metadata
const url = nodeData.inputs?.url as string
const crawlerType = nodeData.inputs?.crawlerType as string
const maxCrawlPages = nodeData.inputs?.maxCrawlPages as string
const generateImgAltText = nodeData.inputs?.generateImgAltText as boolean
const returnOnlyUrls = nodeData.inputs?.returnOnlyUrls as boolean
const onlyMainContent = nodeData.inputs?.onlyMainContent as boolean
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const firecrawlApiToken = getCredentialParam('firecrawlApiToken', credentialData, nodeData)
const urlPatternsExcludes = nodeData.inputs?.urlPatternsExcludes
? (nodeData.inputs.urlPatternsExcludes.split(',') as string[])
: undefined
const urlPatternsIncludes = nodeData.inputs?.urlPatternsIncludes
? (nodeData.inputs.urlPatternsIncludes.split(',') as string[])
: undefined
const input: FirecrawlLoaderParameters = {
url,
mode: crawlerType as 'crawl' | 'scrape',
apiKey: firecrawlApiToken,
params: {
crawlerOptions: {
includes: urlPatternsIncludes,
excludes: urlPatternsExcludes,
generateImgAltText,
returnOnlyUrls,
limit: maxCrawlPages ? parseFloat(maxCrawlPages) : undefined
},
pageOptions: {
onlyMainContent
}
}
}
const loader = new FireCrawlLoader(input)
let docs = []
if (textSplitter) {
docs = await loader.loadAndSplit(textSplitter)
} else {
docs = await loader.load()
}
if (metadata) {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
let finaldocs = []
for (const doc of docs) {
const newdoc = {
...doc,
metadata: {
...doc.metadata,
...parsedMetadata
}
}
finaldocs.push(newdoc)
}
return finaldocs
}
return docs
}
}
module.exports = { nodeClass: FireCrawl_DocumentLoaders }
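Outside the Flowise node wrapper, the embedded FireCrawlLoader can be exercised directly within this module; a hedged usage sketch, assuming a valid key in a FIRECRAWL_API_KEY environment variable (the variable name follows the loader's own error message, not an official contract):

async function crawlFlowiseDocs() {
    const loader = new FireCrawlLoader({
        url: 'https://docs.flowiseai.com',
        mode: 'crawl',
        apiKey: process.env.FIRECRAWL_API_KEY,
        // crawlerOptions/pageOptions mirror the shape built in init() above
        params: {
            crawlerOptions: { limit: 10 },
            pageOptions: { onlyMainContent: true }
        }
    })
    const docs = await loader.load()
    // Each Document holds the page markdown plus Firecrawl metadata
    return docs
}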

Binary file not shown (new image, 17 KiB).


@ -79,7 +79,7 @@ class Folder_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -162,23 +162,31 @@ class Folder_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -58,7 +58,7 @@ class Gitbook_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -85,23 +85,31 @@ class Gitbook_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -100,7 +100,7 @@ class Github_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -145,23 +145,31 @@ class Github_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -59,7 +59,7 @@ class Json_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -135,23 +135,31 @@ class Json_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -58,7 +58,7 @@ class Jsonlines_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -129,23 +129,31 @@ class Jsonlines_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -58,7 +58,7 @@ class NotionDB_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -104,23 +104,31 @@ class NotionDB_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -51,7 +51,7 @@ class NotionFolder_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -83,23 +83,31 @@ class NotionFolder_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -59,7 +59,7 @@ class NotionPage_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -101,23 +101,31 @@ class NotionPage_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -74,7 +74,7 @@ class Pdf_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -132,23 +132,31 @@ class Pdf_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -54,7 +54,7 @@ class PlainText_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -104,23 +104,31 @@ class PlainText_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -121,7 +121,7 @@ class Playwright_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -217,23 +217,31 @@ class Playwright_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -122,7 +122,7 @@ class Puppeteer_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -218,23 +218,31 @@ class Puppeteer_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -427,7 +427,7 @@ class S3_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -561,25 +561,33 @@ class S3_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata,
[sourceIdKey]: doc.metadata[sourceIdKey] || sourceIdKey
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata,
[sourceIdKey]: doc.metadata[sourceIdKey] || sourceIdKey
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
[sourceIdKey]: doc.metadata[sourceIdKey] || sourceIdKey
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata,
[sourceIdKey]: doc.metadata[sourceIdKey] || sourceIdKey
},
omitMetadataKeys
)
}))
}


@ -68,7 +68,7 @@ class SearchAPI_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -112,23 +112,31 @@ class SearchAPI_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -58,7 +58,7 @@ class SerpAPI_DocumentLoaders implements INode {
type: 'string',
rows: 4,
description:
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma',
'Each document loader comes with a default set of metadata keys that are extracted from the document. You can use this field to omit some of the default metadata keys. The value should be a list of keys, separated by comma. Use * to omit all metadata keys except the ones you specify in the Additional Metadata field',
placeholder: 'key1, key2, key3.nestedKey1',
optional: true,
additionalParams: true
@ -86,23 +86,31 @@ class SerpAPI_DocumentLoaders implements INode {
const parsedMetadata = typeof metadata === 'object' ? metadata : JSON.parse(metadata)
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {
...parsedMetadata
}
: omit(
{
...doc.metadata,
...parsedMetadata
},
omitMetadataKeys
)
}))
} else {
docs = docs.map((doc) => ({
...doc,
metadata: omit(
{
...doc.metadata
},
omitMetadataKeys
)
metadata:
_omitMetadataKeys === '*'
? {}
: omit(
{
...doc.metadata
},
omitMetadataKeys
)
}))
}


@ -0,0 +1,189 @@
import { TextSplitter } from 'langchain/text_splitter'
import { Document, DocumentInterface } from '@langchain/core/documents'
import { BaseDocumentLoader } from 'langchain/document_loaders/base'
import { INode, INodeData, INodeParams, ICommonObject } from '../../../src/Interface'
import { getCredentialData, getCredentialParam } from '../../../src/utils'
import SpiderApp from './SpiderApp'
interface SpiderLoaderParameters {
url: string
apiKey?: string
mode?: 'crawl' | 'scrape'
limit?: number
params?: Record<string, unknown>
}
class SpiderLoader extends BaseDocumentLoader {
private apiKey: string
private url: string
private mode: 'crawl' | 'scrape'
private limit?: number
private params?: Record<string, unknown>
constructor(loaderParams: SpiderLoaderParameters) {
super()
const { apiKey, url, mode = 'crawl', limit, params } = loaderParams
if (!apiKey) {
throw new Error('Spider API key not set. You can set it as SPIDER_API_KEY in your .env file, or pass it to Spider.')
}
this.apiKey = apiKey
this.url = url
this.mode = mode
this.limit = Number(limit)
this.params = params
}
public async load(): Promise<DocumentInterface[]> {
const app = new SpiderApp({ apiKey: this.apiKey })
let spiderDocs: any[]
if (this.mode === 'scrape') {
const response = await app.scrapeUrl(this.url, this.params)
if (!response.success) {
throw new Error(`Spider: Failed to scrape URL. Error: ${response.error}`)
}
spiderDocs = [response.data]
} else if (this.mode === 'crawl') {
if (this.params) {
this.params.limit = this.limit
}
const response = await app.crawlUrl(this.url, this.params)
if (!response.success) {
throw new Error(`Spider: Failed to crawl URL. Error: ${response.error}`)
}
spiderDocs = response.data
} else {
throw new Error(`Unrecognized mode '${this.mode}'. Expected one of 'crawl', 'scrape'.`)
}
return spiderDocs.map(
(doc) =>
new Document({
pageContent: doc.content || '',
metadata: { source: doc.url }
})
)
}
}
class Spider_DocumentLoaders implements INode {
label: string
name: string
description: string
type: string
icon: string
version: number
category: string
baseClasses: string[]
inputs: INodeParams[]
credential: INodeParams
constructor() {
this.label = 'Spider Document Loaders'
this.name = 'spiderDocumentLoaders'
this.version = 1.0
this.type = 'Document'
this.icon = 'spider.svg'
this.category = 'Document Loaders'
this.description = 'Scrape & Crawl the web with Spider'
this.baseClasses = [this.type]
this.inputs = [
{
label: 'Text Splitter',
name: 'textSplitter',
type: 'TextSplitter',
optional: true
},
{
label: 'Mode',
name: 'mode',
type: 'options',
options: [
{
label: 'Scrape',
name: 'scrape',
description: 'Scrape a single page'
},
{
label: 'Crawl',
name: 'crawl',
description: 'Crawl a website and extract pages within the same domain'
}
],
default: 'scrape'
},
{
label: 'Web Page URL',
name: 'url',
type: 'string',
placeholder: 'https://spider.cloud'
},
{
label: 'Limit',
name: 'limit',
type: 'number',
default: 25
},
{
label: 'Additional Parameters',
name: 'params',
description:
'Find all the available parameters in the <a target="_blank" href="https://spider.cloud/docs/api">Spider API documentation</a>',
additionalParams: true,
placeholder: '{ "anti_bot": true }',
type: 'json',
optional: true
}
]
this.credential = {
label: 'Credential',
name: 'credential',
type: 'credential',
credentialNames: ['spiderApi']
}
}
async init(nodeData: INodeData, _: string, options: ICommonObject): Promise<any> {
const textSplitter = nodeData.inputs?.textSplitter as TextSplitter
const url = nodeData.inputs?.url as string
const mode = nodeData.inputs?.mode as 'crawl' | 'scrape'
const limit = nodeData.inputs?.limit as number
let params = nodeData.inputs?.params || {}
const credentialData = await getCredentialData(nodeData.credential ?? '', options)
const spiderApiKey = getCredentialParam('spiderApiKey', credentialData, nodeData)
if (typeof params === 'string') {
try {
params = JSON.parse(params)
} catch (e) {
throw new Error('Invalid JSON string provided for params')
}
}
// Ensure return_format is set to markdown
params.return_format = 'markdown'
const input: SpiderLoaderParameters = {
url,
mode: mode as 'crawl' | 'scrape',
apiKey: spiderApiKey,
limit: limit as number,
params: params as Record<string, unknown>
}
const loader = new SpiderLoader(input)
let docs = []
if (textSplitter) {
docs = await loader.loadAndSplit(textSplitter)
} else {
docs = await loader.load()
}
return docs
}
}
module.exports = { nodeClass: Spider_DocumentLoaders }
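As with FireCrawl, the SpiderLoader defined in this file can be driven directly; a minimal sketch assuming a key in a SPIDER_API_KEY environment variable (the name comes from the loader's own error message, not an official contract):

async function scrapeOnePage() {
    const loader = new SpiderLoader({
        url: 'https://spider.cloud',
        mode: 'scrape',
        apiKey: process.env.SPIDER_API_KEY,
        // the node above always forces markdown output
        params: { return_format: 'markdown' }
    })
    const [doc] = await loader.load()
    // doc.metadata.source carries the page URL; pageContent the markdown
    return doc.pageContent
}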

Some files were not shown because too many files have changed in this diff.