Compare commits


14 Commits

Author SHA1 Message Date
Jiajie Zhong ce8da46fb2
[doc] Correct 3.0.0 content link version (#9555) 2022-04-18 15:53:27 +08:00
Jiajie Zhong d1ffa45ead
Correct the version of 3.0.0-alpha document (#9552) 2022-04-18 15:06:01 +08:00
zhuangchong 9d7fff52e9 [maven-release-plugin] prepare for next development iteration 2022-04-14 09:21:19 +08:00
zhuangchong fe532d5242 [maven-release-plugin] prepare release 3.0.0-alpha 2022-04-14 09:21:18 +08:00
Jiajie Zhong 4ae7cbc003
[release] Change release version (#9483) 2022-04-14 09:07:31 +08:00
Kerwin a0c15ada3a
[3.0.0-alpha-prepare]3.0.0 alpha prepare 9481 9476 (#9482) 2022-04-13 20:56:56 +08:00
Kerwin 82756c3128
[FIX-9471][Script] fix run install.sh error (#9472) (#9477)
Co-authored-by: mazhong <316422240@qq.com>
2022-04-13 18:03:55 +08:00
Kerwin 2c4d44dcf9
Add python module dependency in the dist module (#9450) 2022-04-12 14:53:20 +08:00
caishunfeng 9953e86b8e
[Future-9396]Support output parameters transfer from parent workflow to child work flow (#9410) (#9442)
* [Future-9396]Support output parameters transfer from parent workflow to child work flow

* fix note
2022-04-11 23:30:05 +08:00
Jiajie Zhong a82f5026b7
[cherry-pick] some commit until apr 11 (#9446)
* [python] Add missing document

which include `configuration`, `run example`,
and `how to connect remote server`

close: #9286, #9284, #8917

* [python] Recover python release properties

This patch recovers the property `python.sign.skip=false`
when the combined profile `release,python` is used.

also close: #9433

* [doc] Add some dev missing doc

Including general-setting, task-definition, audit-log
and their related images

Co-authored-by: Tq <tianqitobethefirst@gmail.com>

2022-04-11 21:47:46 +08:00
caishunfeng 9fda8c5811
[cherry-pick-3.0.0-alpha] Cherry pick dev to 3.0.0 alpha (#9429)
* [Fix][UI Next][V1.0.0-Alpha]Add zh for dag execution policy (#9363)

* [Bug-9235][Alert]Fix wechat markdown message and change wechat form structure (#9367)

* fix wechat issues:
1. change table msg type to markdown.
2. change userId to not required and enrich hints
3. change 'app id' to 'app id and chat id'

* fix wechat issues:
1. revert table showtype and add markdown showtype.
2. enrich hints.
3. delete 'chatid', rename agentid to weChatAgentIdChatId.
4. modify code to send markdown message.

* fix wechat issues: Change the language pack of agentId to agentId/chatId.

* fix format

* fix param name

Co-authored-by: Amy <amywang0104@163.com>

* [FIX-9355] Fix scheduleTime of start-process-instance api in api-doc (#9359)

* fix #9355

* fix #9355

* fix ut error

* fix ut error

* [CI] try to fix ci (#9366)

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* try to fix ci

* [optimization] [Service] Optimization ProcessService and add ProcessService interface (#9370)

* [task-spark][docs] Corrected notice section (#9375)

* [python] Migrate pythonGatewayServer into api server (#9372)

Currently the size of our distribution package is up to
800MB; this patch migrates the python gateway server into
the api server.

The distribution package size before and after this patch is:

```sh
# before
796M   apache-dolphinscheduler-2.0.4-SNAPSHOT-bin.tar.gz

# after
647M   apache-dolphinscheduler-2.0.4-SNAPSHOT-bin.tar.gz
```
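The quoted sizes work out to roughly an 18% reduction; a quick integer-arithmetic check in shell (figures taken from the commit message above):

```shell
# Package sizes before and after the python-gateway migration, in MB,
# as quoted in the commit message above.
before=796
after=647

# Integer percentage reduction: (796 - 647) * 100 / 796 = 18
echo "$(( (before - after) * 100 / before ))% smaller"
```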

* [Fix][UI Next][V1.0.0-Alpha] Add light color theme to echarts. (#9381)

* [Bug][API-9364]fix ProcessInstance wrong alert group id (#9383)

* fix ProcessInstance wrong alert group id

* change  createComplementCommandList method to protected

* [BUG][WORKER-9349]fix param priority (#9379)

* fix param priority

* fix params priority code logic

* [Improvement] change method access (#9390)

* change method to protected

* change method access

* [Fix-9221] [alert-server] optimization and gracefully close (#9246)

* [Fix-9221] [alert-server] optimization and gracefully close

This closes #9221

* [Fix-9221] [alert-server] remove unused mock data

This closes #9221

* [Fix-9221] [alert-server] remove unused mock data

This closes #9221

* [Fix-9221] [alert-server] remove unnecessary Mockito stubbings

* [Fix-9221] [alert-server] init AlertPluginManager in AlertServer

* [Fix-9221] [alert-server] AlertServerTest add AlertPluginManager installPlugin

* [Fix-9221] [alert-server] replace @Eventlistener with @PostConstruct

* [Fix-9221] [alert-server] sonar check solution

* [Improvement-9221] [alert] update constructor injection and replace IStoppable with Closeable

Co-authored-by: guoshupei <guoshupei@lixiang.com>

* [Fix][UI Next][V1.0.0-Alpha] Fix the task instance forced success button multi-language support error. (#9392)

* [doc] Change get help from dev mail list to slack (#9377)

* Change all "get help" pointers from the dev mailing list to Slack, because
  we found that many users ask the mailing list how to subscribe,
  and some may have subscribed by accident.
* Remove joining the dev mailing list from faq.md because we already
  cover it in https://dolphinscheduler.apache.org/en-us/community/development/subscribe.html

* Add new code owner of docs module (#9388)

* [CI] Enable CI to remove unexpected files in /docs/img dir (#9393)

* [Bug][UI Next]Modify the display state logic of save buttons under workflow definition (#9403)

* Modifying site Configurations

* Modify the display state logic of save buttons under workflow definition

* [doc] Remove observability (#9402)

SkyWalking v9 is coming soon and it no longer ships
DolphinScheduler menus, so we should remove
the SW agent to avoid confusion.

close: #9242

* [DS-9387][refactor]Remove the lock in the start method of the MasterRegistryClient class (#9389)

* [Fix-9251] [WORKER] reslove the sql task about of add the udf resource failed (#9319)

* feat(resource  manager): extend s3 to the storage of ds

1. fix some spelling issues
2. extend the storage types
3. add S3Utils to manage resources
4. automatically inject the storage according to your
config

* fix(resource  manager): update the dependency

* fix(resource  manager): extend s3 to the storage of ds

fix the constant of hadooputils

* fix(resource  manager): extend s3 to the storage of ds

1. fix some spelling issues
2. delete the `import *`

* fix(resource  manager):

merge  the unitTest:
1.TenantServiceImpl
2.ResourceServiceImpl
3.UserServiceImpl

* fix(resource  manager): extend s3 to the storage of ds

merge the resourceServiceTest

* fix(resource  manager): test  cancel the test method

createTenant verifyTenant

* fix(resource  manager): merge the code  follow the check-result of sonar

* fix(resource  manager): extend s3 to the storage of ds

fix the spelling issues

* fix(resource  manager): extend s3 to the storage of ds

revert the common.properties

* fix(resource  manager): extend s3 to the storage of ds

update the storageConfig with None

* fix(resource  manager): extend s3 to the storage of ds

fix the judge of resourceType

* fix(resource  manager): extend s3 to the storage of ds

undo the compile-mysql

* fix(resource  manager): extend s3 to the storage of ds

delete hadoop aws

* fix(resource  manager): extend s3 to the storage of ds

update the know-dependencies to delete aws 1.7.4
update the e2e
file-manager common.properties

* fix(resource  manager): extend s3 to the storage of ds

update the aws-region

* fix(resource  manager): extend s3 to the storage of ds

fix the storageconfig init

* fix(resource  manager): update e2e docker-compose

update e2e docker-compose

* fix(resource  manager): extend s3 to the storage of ds

revert the e2e common.properties

print the resource type in propertyUtil

* fix(resource  manager): extend s3 to the storage of ds
1.println the properties

* fix(resource  manager): println the s3 info

* fix(resource  manager): extend s3 to the storage of ds

delete the info  and upgrade the s3 info to e2e

* fix(resource  manager): extend s3 to the storage of ds

add the bucket init

* fix(resource  manager): extend s3 to the storage of ds

1. fix some spelling issues
2. delete the `import *`

* fix(resource  manager): extend s3 to the storage of ds

upgrade the s3 endpoint

* fix(resource  manager): withPathStyleAccessEnabled(true)

* fix(resource  manager): extend s3 to the storage of ds

1. fix some spelling issues
2. delete the `import *`

* fix(resource  manager): upgrade the  s3client builder

* fix(resource  manager): correct  the s3 point to s3client

* fix(resource  manager): update the constant BUCKET_NAME

* fix(resource  manager): e2e  s3 endpoint -> s3:9000

* fix(resource  manager): extend s3 to the storage of ds

1. fix some spelling issues
2. delete the `import *`

* style(resource  manager): add info to createBucket

* style(resource  manager): debug the log

* ci(resource  manager): test

test s3

* ci(ci): add INSERT INTO dolphinscheduler.t_ds_tenant (id, tenant_code, description, queue_id, create_time, update_time) VALUES(1, 'root', NULL, 1, NULL, NULL); to h2.sql

* fix(resource  manager): update the h2 sql

* fix(resource  manager): solve to delete the tenant

* style(resource  manager): merge the style end delete the unuse s3 config

* fix(resource  manager): extend s3 to the storage of ds

UPDATE the rename resources when s3

* fix(resource  manager): extend s3 to the storage of ds

1.fix the code style of QuartzImpl

* fix(resource  manager): extend s3 to the storage of ds

1. import restore_type to CommonUtils

* fix(resource  manager): update the work thread

* fix(resource  manager): update  the baseTaskProcessor

* fix(resource  manager): upgrade dolphinscheduler-standalone-server.xml

* fix(resource  manager): add  user Info to dolphinscheduler_h2.sql

* fix(resource  manager): merge  the resourceType to NONE

* style(upgrade the log level to info):

* fix(resource  manager): sync the h2.sql

* fix(resource  manager): update the merge the user tenant

* fix(resource  manager): merge the resourcesServiceImpl

* fix(resource  manager):

when the storage is s3, the directory can't be renamed

* fix(resource  manager): in s3 ,the directory cannot be renamed

* fix(resource  manager): delete the deleteRenameDirectory in E2E

* fix(resource  manager): check the style and  recoverd the test

* fix(resource  manager): delete the log.print(LoginUser)

* fix(server): fix the  udf serialize

* fix(master  task): update the udfTest to update the json string

* fix(test): update the udfFuncTest

* fix(common): syn the common.properties

* fix(udfTest): upgrade the udfTest

* fix(common): revert the common.properties

* [Fix-9316] [Task] Configure DB2 data source SQL script execution report ResultSet has been closed exception in SQL task  (#9317)

* fix db2 error in the sql task

* update limit in sql task

* [UI] Migrate NPM to PNPM in CI builds (#9431)

Co-authored-by: Devosend <devosend@gmail.com>
Co-authored-by: Tq <tianqitobethefirst@gmail.com>
Co-authored-by: Amy <amywang0104@163.com>
Co-authored-by: xiangzihao <460888207@qq.com>
Co-authored-by: gaojun2048 <gaojun2048@gmail.com>
Co-authored-by: mans2singh <mans2singh@users.noreply.github.com>
Co-authored-by: Jiajie Zhong <zhongjiajie955@hotmail.com>
Co-authored-by: Amy0104 <97265214+Amy0104@users.noreply.github.com>
Co-authored-by: guoshupei <15764973965@163.com>
Co-authored-by: guoshupei <guoshupei@lixiang.com>
Co-authored-by: songjianet <1778651752@qq.com>
Co-authored-by: Eric Gao <ericgao.apache@gmail.com>
Co-authored-by: labbomb <739955946@qq.com>
Co-authored-by: worry <7039986@qq.com>
Co-authored-by: nobolity <nobolity@users.noreply.github.com>
Co-authored-by: Kerwin <37063904+zhuangchong@users.noreply.github.com>
Co-authored-by: kezhenxu94 <kezhenxu94@apache.org>
2022-04-11 16:50:26 +08:00
Jiajie Zhong ff6a3bd6dd
[Bug][UI Next]Modify the display state logic of save buttons under workflow definition (#9403) (#9411)
* Modifying site Configurations

* Modify the display state logic of save buttons under workflow definition

Co-authored-by: labbomb <739955946@qq.com>
2022-04-09 17:48:41 +08:00
Devosend 83c745eb41
[cherry-pick][Fix][UI Next][V1.0.0-Alpha]Add zh for dag execution policy (#9397) 2022-04-08 13:36:02 +08:00
songjianet 6c9e07cce2
[cherry-pick][Fix][UI Next][V1.0.0-Alpha] Fix the task instance forced success button multi-language support error. (#9395) 2022-04-08 10:21:06 +08:00
197 changed files with 4526 additions and 4453 deletions

.github/CODEOWNERS vendored

@@ -20,3 +20,4 @@ dolphinscheduler/dolphinscheduler-e2e @kezhenxu94
dolphinscheduler/dolphinscheduler-registry @kezhenxu94
dolphinscheduler/dolphinscheduler-standalone-server @kezhenxu94
dolphinscheduler/dolphinscheduler-python @zhongjiajie
dolphinscheduler/docs @zhongjiajie @Tianqi-Dotes


@@ -22,11 +22,12 @@ labels: [ "bug", "Waiting for reply" ]
body:
- type: markdown
attributes:
value: |
value: >
Please make sure what you are reporting is indeed a bug with reproducible steps, if you want to ask questions
or share ideas, you can head to our
[Discussions](https://github.com/apache/dolphinscheduler/discussions) tab, you can also [subscribe to our mailing list](mailto:dev-subscribe@dolphinscheduler.apache.org) and send
emails to [our mailing list](mailto:dev@dolphinscheduler.apache.org)
[Discussions](https://github.com/apache/dolphinscheduler/discussions) tab, you can also
[join our slack](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw)
and send your question to channel `#troubleshooting`
For better global communication, Please write in English.


@@ -42,18 +42,18 @@ jobs:
name: Backend-Path-Filter
runs-on: ubuntu-latest
outputs:
ignore: ${{ steps.filter.outputs.ignore }}
not-ignore: ${{ steps.filter.outputs.not-ignore }}
steps:
- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
id: filter
with:
filters: |
ignore:
- '(docs/**|dolphinscheduler-ui/**|dolphinscheduler-ui-next/**)'
not-ignore:
- '!(docs/**|dolphinscheduler-ui/**|dolphinscheduler-ui-next/**)'
build:
name: Backend-Build
needs: paths-filter
if: ${{ needs.paths-filter.outputs.ignore == 'false' }}
if: ${{ needs.paths-filter.outputs.not-ignore == 'true' }}
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
@@ -81,13 +81,16 @@ jobs:
name: Build
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [ build ]
needs: [ build, paths-filter ]
if: always()
steps:
- name: Status
run: |
if [[ ${{ needs.build.result }} == 'success' || ${{ needs.paths-filter.outputs.ignore == 'true' }} ]]; then
echo "Passed!"
else
if [[ ${{ needs.paths-filter.outputs.not-ignore }} == 'false' ]]; then
echo "Skip Build!"
exit 0
fi
if [[ ${{ needs.build.result }} != 'success' ]]; then
echo "Build Failed!"
exit -1
fi
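The rewritten status step above replaces a single combined condition with a two-stage gate: skip cleanly when the path filter found nothing relevant, and only then fail if the build job did not succeed. A minimal sketch of that logic as a standalone shell function, with the two job results passed in as plain variables instead of GitHub Actions expressions (the function name `ci_status` and the `Passed!` echo on fall-through are illustrative, not part of the workflow, which uses `exit -1` on failure):

```shell
# Sketch of the two-stage CI status gate from the workflow diff above.
ci_status() {
  not_ignore="$1"   # paths-filter output: 'true' if non-ignored paths changed
  build_result="$2" # build job result: 'success', 'failure', 'skipped', ...

  # Stage 1: nothing relevant changed, so the skipped build is not a failure.
  if [ "$not_ignore" = 'false' ]; then
    echo "Skip Build!"
    return 0
  fi
  # Stage 2: relevant paths changed, so the build must actually have passed.
  if [ "$build_result" != 'success' ]; then
    echo "Build Failed!"
    return 1
  fi
  echo "Passed!"
}
```

The point of the change is that the final required check can run with `if: always()` and still report the right thing whether the build job ran, was skipped, or failed.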


@@ -33,18 +33,18 @@ jobs:
name: E2E-Path-Filter
runs-on: ubuntu-latest
outputs:
ignore: ${{ steps.filter.outputs.ignore }}
not-ignore: ${{ steps.filter.outputs.not-ignore }}
steps:
- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
id: filter
with:
filters: |
ignore:
- '(docs/**)'
not-ignore:
- '!(docs/**)'
build:
name: E2E-Build
needs: paths-filter
if: ${{ needs.paths-filter.outputs.ignore == 'false' }}
if: ${{ needs.paths-filter.outputs.not-ignore == 'true' }}
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
@@ -155,13 +155,16 @@ jobs:
name: E2E
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [ e2e ]
needs: [ e2e, paths-filter ]
if: always()
steps:
- name: Status
run: |
if [[ ${{ needs.e2e.result }} == 'success' || ${{ needs.paths-filter.outputs.ignore == 'true' }} ]]; then
echo "Passed!"
else
if [[ ${{ needs.paths-filter.outputs.not-ignore }} == 'false' ]]; then
echo "Skip E2E!"
exit 0
fi
if [[ ${{ needs.e2e.result }} != 'success' ]]; then
echo "E2E Failed!"
exit -1
fi


@@ -58,6 +58,7 @@ jobs:
node-version: 16
- name: Compile and Build
run: |
npm install
npm run lint
npm run build:prod
npm install pnpm -g
pnpm install
pnpm run lint
pnpm run build:prod


@@ -40,5 +40,8 @@ jobs:
- name: "Comment in issue"
uses: ./.github/actions/comment-on-issue
with:
message: "Hi:\n* Thank you for your feedback, we have received your issue, Please wait patiently for a reply.\n* In order for us to understand your request as soon as possible, please provide detailed information、version or pictures.\n* If you haven't received a reply for a long time, you can subscribe to the developer's emailMail subscription steps reference https://dolphinscheduler.apache.org/en-us/community/development/subscribe.html ,Then write the issue URL in the email content and send question to dev@dolphinscheduler.apache.org."
message: |
Thank you for your feedback, we have received your issue, Please wait patiently for a reply.
* In order for us to understand your request as soon as possible, please provide detailed information、version or pictures.
* If you haven't received a reply for a long time, you can [join our slack](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw) and send your question to channel `#troubleshooting`
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -40,18 +40,18 @@ jobs:
name: Unit-Test-Path-Filter
runs-on: ubuntu-latest
outputs:
ignore: ${{ steps.filter.outputs.ignore }}
not-ignore: ${{ steps.filter.outputs.not-ignore }}
steps:
- uses: dorny/paths-filter@b2feaf19c27470162a626bd6fa8438ae5b263721
id: filter
with:
filters: |
ignore:
- '(docs/**)'
not-ignore:
- '!(docs/**)'
unit-test:
name: Unit-Test
needs: paths-filter
if: ${{ needs.paths-filter.outputs.ignore == 'false' }}
if: ${{ needs.paths-filter.outputs.not-ignore == 'true' }}
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
@@ -114,13 +114,16 @@ jobs:
name: Unit Test
runs-on: ubuntu-latest
timeout-minutes: 30
needs: [ unit-test ]
needs: [ unit-test, paths-filter ]
if: always()
steps:
- name: Status
run: |
if [[ ${{ needs.unit-test.result }} == 'success' || ${{ needs.paths-filter.outputs.ignore == 'true' }} ]]; then
echo "Passed!"
else
if [[ ${{ needs.paths-filter.outputs.not-ignore }} == 'false' ]]; then
echo "Skip Unit Test!"
exit 0
fi
if [[ ${{ needs.unit-test.result }} != 'success' ]]; then
echo "Unit Test Failed!"
exit -1
fi


@@ -86,7 +86,7 @@ We would like to express our deep gratitude to all the open-source projects used
## Get Help
1. Submit an [issue](https://github.com/apache/dolphinscheduler/issues/new/choose)
1. Subscribe to this mailing list: https://dolphinscheduler.apache.org/en-us/community/development/subscribe.html, then email dev@dolphinscheduler.apache.org
2. [Join our slack](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw) and send your question to channel `#troubleshooting`
## Community


@@ -87,8 +87,8 @@ Dolphin Scheduler uses many excellent open-source projects, such as Google's guava, g
## Get Help
1. Submit an issue
2. First subscribe to the dev mailing list: [subscribe to the mailing list](https://dolphinscheduler.apache.org/zh-cn/community/development/subscribe.html); after subscribing, send mail to dev@dolphinscheduler.apache.org.
1. Submit an [issue](https://github.com/apache/dolphinscheduler/issues/new/choose)
2. [Join our Slack](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw) and ask in the `#troubleshooting` channel
## Community


@@ -16,7 +16,7 @@
# under the License.
#
HUB=ghcr.io/apache/dolphinscheduler
TAG=latest
TAG=3.0.0-alpha
TZ=Asia/Shanghai
SPRING_DATASOURCE_URL=jdbc:postgresql://dolphinscheduler-postgresql:5432/dolphinscheduler
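The `.env` change above pins the compose image tag to the release instead of `latest`. As a quick sanity check, one can copy the fragment into a scratch file and grep it (the `/tmp/ds-compare.env` path here is illustrative, not part of the repository):

```shell
# Write the .env fragment shown above to a scratch file (path is arbitrary).
cat > /tmp/ds-compare.env <<'EOF'
HUB=ghcr.io/apache/dolphinscheduler
TAG=3.0.0-alpha
TZ=Asia/Shanghai
EOF

# Confirm the image tag is pinned to the alpha release rather than 'latest'.
grep '^TAG=' /tmp/ds-compare.env   # prints: TAG=3.0.0-alpha
```

Pinning the tag keeps `docker compose up` reproducible for users who grab the release compose files later, after `latest` has moved on.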


@@ -140,29 +140,6 @@ services:
networks:
- dolphinscheduler
dolphinscheduler-python-gateway:
image: ${HUB}/dolphinscheduler-python:${TAG}
ports:
- "54321:54321"
- "25333:25333"
env_file: .env
healthcheck:
test: [ "CMD", "curl", "http://localhost:54321/actuator/health" ]
interval: 30s
timeout: 5s
retries: 3
depends_on:
dolphinscheduler-schema-initializer:
condition: service_completed_successfully
dolphinscheduler-zookeeper:
condition: service_healthy
volumes:
- dolphinscheduler-logs:/opt/dolphinscheduler/logs
- dolphinscheduler-shared-local:/opt/soft
- dolphinscheduler-resource-local:/dolphinscheduler
networks:
- dolphinscheduler
networks:
dolphinscheduler:
driver: bridge


@@ -118,27 +118,6 @@ services:
mode: replicated
replicas: 1
dolphinscheduler-python-gateway:
image: apache/dolphinscheduler-python-gateway
ports:
- 54321:54321
- 25333:25333
env_file: .env
healthcheck:
test: [ "CMD", "curl", "http://localhost:54321/actuator/health" ]
interval: 30s
timeout: 5s
retries: 3
volumes:
- dolphinscheduler-logs:/opt/dolphinscheduler/logs
- dolphinscheduler-shared-local:/opt/soft
- dolphinscheduler-resource-local:/dolphinscheduler
networks:
- dolphinscheduler
deploy:
mode: replicated
replicas: 1
networks:
dolphinscheduler:
driver: overlay


@@ -39,7 +39,7 @@ version: 2.0.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application.
appVersion: 2.0.4-SNAPSHOT
appVersion: 3.0.0-alpha
dependencies:
- name: postgresql


@@ -44,9 +44,6 @@ Create default docker images' fullname.
{{- define "dolphinscheduler.image.fullname.tools" -}}
{{- .Values.image.registry }}/dolphinscheduler-tools:{{ .Values.image.tag | default .Chart.AppVersion -}}
{{- end -}}
{{- define "dolphinscheduler.image.fullname.python-gateway" -}}
{{- .Values.image.registry }}/dolphinscheduler-python-gateway:{{ .Values.image.tag | default .Chart.AppVersion -}}
{{- end -}}
{{/*
Create a default common labels.


@@ -23,7 +23,7 @@ timezone: "Asia/Shanghai"
image:
registry: "dolphinscheduler.docker.scarf.sh/apache"
tag: "2.0.4-SNAPSHOT"
tag: "3.0.0-alpha"
pullPolicy: "IfNotPresent"
pullSecret: ""


@@ -25,15 +25,15 @@ export default {
children: [
{
title: 'Introduction',
link: '/en-us/docs/dev/user_doc/about/introduction.html',
link: '/en-us/docs/3.0.0/user_doc/about/introduction.html',
},
{
title: 'Hardware Environment',
link: '/en-us/docs/dev/user_doc/about/hardware.html',
link: '/en-us/docs/3.0.0/user_doc/about/hardware.html',
},
{
title: 'Glossary',
link: '/en-us/docs/dev/user_doc/about/glossary.html',
link: '/en-us/docs/3.0.0/user_doc/about/glossary.html',
},
],
},
@@ -42,11 +42,11 @@ export default {
children: [
{
title: 'Quick Start',
link: '/en-us/docs/dev/user_doc/guide/start/quick-start.html',
link: '/en-us/docs/3.0.0/user_doc/guide/start/quick-start.html',
},
{
title: 'Docker Deployment',
link: '/en-us/docs/dev/user_doc/guide/start/docker.html',
link: '/en-us/docs/3.0.0/user_doc/guide/start/docker.html',
},
],
},
@@ -55,19 +55,19 @@ export default {
children: [
{
title: 'Standalone Deployment',
link: '/en-us/docs/dev/user_doc/guide/installation/standalone.html',
link: '/en-us/docs/3.0.0/user_doc/guide/installation/standalone.html',
},
{
title: 'Pseudo Cluster Deployment',
link: '/en-us/docs/dev/user_doc/guide/installation/pseudo-cluster.html',
link: '/en-us/docs/3.0.0/user_doc/guide/installation/pseudo-cluster.html',
},
{
title: 'Cluster Deployment',
link: '/en-us/docs/dev/user_doc/guide/installation/cluster.html',
link: '/en-us/docs/3.0.0/user_doc/guide/installation/cluster.html',
},
{
title: 'Kubernetes Deployment',
link: '/en-us/docs/dev/user_doc/guide/installation/kubernetes.html',
link: '/en-us/docs/3.0.0/user_doc/guide/installation/kubernetes.html',
},
],
},
@@ -76,26 +76,30 @@ export default {
children: [
{
title: 'Workflow Overview',
link: '/en-us/docs/dev/user_doc/guide/homepage.html',
link: '/en-us/docs/3.0.0/user_doc/guide/homepage.html',
},
{
title: 'Project',
children: [
{
title: 'Project List',
link: '/en-us/docs/dev/user_doc/guide/project/project-list.html',
link: '/en-us/docs/3.0.0/user_doc/guide/project/project-list.html',
},
{
title: 'Workflow Definition',
link: '/en-us/docs/dev/user_doc/guide/project/workflow-definition.html',
link: '/en-us/docs/3.0.0/user_doc/guide/project/workflow-definition.html',
},
{
title: 'Workflow Instance',
link: '/en-us/docs/dev/user_doc/guide/project/workflow-instance.html',
link: '/en-us/docs/3.0.0/user_doc/guide/project/workflow-instance.html',
},
{
title: 'Task Instance',
link: '/en-us/docs/dev/user_doc/guide/project/task-instance.html',
link: '/en-us/docs/3.0.0/user_doc/guide/project/task-instance.html',
},
{
title: 'Task Definition',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/task-definition.html',
},
]
},
@@ -104,63 +108,63 @@ export default {
children: [
{
title: 'Shell',
link: '/en-us/docs/dev/user_doc/guide/task/shell.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/shell.html',
},
{
title: 'SubProcess',
link: '/en-us/docs/dev/user_doc/guide/task/sub-process.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/sub-process.html',
},
{
title: 'Dependent',
link: '/en-us/docs/dev/user_doc/guide/task/dependent.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/dependent.html',
},
{
title: 'Stored Procedure',
link: '/en-us/docs/dev/user_doc/guide/task/stored-procedure.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/stored-procedure.html',
},
{
title: 'SQL',
link: '/en-us/docs/dev/user_doc/guide/task/sql.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/sql.html',
},
{
title: 'Spark',
link: '/en-us/docs/dev/user_doc/guide/task/spark.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/spark.html',
},
{
title: 'MapReduce',
link: '/en-us/docs/dev/user_doc/guide/task/map-reduce.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/map-reduce.html',
},
{
title: 'Python',
link: '/en-us/docs/dev/user_doc/guide/task/python.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/python.html',
},
{
title: 'Flink',
link: '/en-us/docs/dev/user_doc/guide/task/flink.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/flink.html',
},
{
title: 'HTTP',
link: '/en-us/docs/dev/user_doc/guide/task/http.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/http.html',
},
{
title: 'DataX',
link: '/en-us/docs/dev/user_doc/guide/task/datax.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/datax.html',
},
{
title: 'Pigeon',
link: '/en-us/docs/dev/user_doc/guide/task/pigeon.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/pigeon.html',
},
{
title: 'Conditions',
link: '/en-us/docs/dev/user_doc/guide/task/conditions.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/conditions.html',
},
{
title: 'Switch',
link: '/en-us/docs/dev/user_doc/guide/task/switch.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/switch.html',
},
{
title: 'Amazon EMR',
link: '/en-us/docs/dev/user_doc/guide/task/emr.html',
link: '/en-us/docs/3.0.0/user_doc/guide/task/emr.html',
},
],
},
@@ -169,23 +173,23 @@ export default {
children: [
{
title: 'Built-in Parameter',
link: '/en-us/docs/dev/user_doc/guide/parameter/built-in.html',
link: '/en-us/docs/3.0.0/user_doc/guide/parameter/built-in.html',
},
{
title: 'Global Parameter',
link: '/en-us/docs/dev/user_doc/guide/parameter/global.html',
link: '/en-us/docs/3.0.0/user_doc/guide/parameter/global.html',
},
{
title: 'Local Parameter',
link: '/en-us/docs/dev/user_doc/guide/parameter/local.html',
link: '/en-us/docs/3.0.0/user_doc/guide/parameter/local.html',
},
{
title: 'Parameter Context',
link: '/en-us/docs/dev/user_doc/guide/parameter/context.html',
link: '/en-us/docs/3.0.0/user_doc/guide/parameter/context.html',
},
{
title: 'Parameter Priority',
link: '/en-us/docs/dev/user_doc/guide/parameter/priority.html',
link: '/en-us/docs/3.0.0/user_doc/guide/parameter/priority.html',
},
],
},
@@ -194,23 +198,23 @@ export default {
children: [
{
title: 'Introduction',
link: '/en-us/docs/dev/user_doc/guide/datasource/introduction.html',
link: '/en-us/docs/3.0.0/user_doc/guide/datasource/introduction.html',
},
{
title: 'MySQL',
link: '/en-us/docs/dev/user_doc/guide/datasource/mysql.html',
link: '/en-us/docs/3.0.0/user_doc/guide/datasource/mysql.html',
},
{
title: 'PostgreSQL',
link: '/en-us/docs/dev/user_doc/guide/datasource/postgresql.html',
link: '/en-us/docs/3.0.0/user_doc/guide/datasource/postgresql.html',
},
{
title: 'HIVE',
link: '/en-us/docs/dev/user_doc/guide/datasource/hive.html',
link: '/en-us/docs/3.0.0/user_doc/guide/datasource/hive.html',
},
{
title: 'Spark',
link: '/en-us/docs/dev/user_doc/guide/datasource/spark.html',
link: '/en-us/docs/3.0.0/user_doc/guide/datasource/spark.html',
},
],
},
@@ -219,53 +223,62 @@ export default {
children: [
{
title: 'Alert Component User Guide ',
link: '/en-us/docs/dev/user_doc/guide/alert/alert_plugin_user_guide.html',
link: '/en-us/docs/3.0.0/user_doc/guide/alert/alert_plugin_user_guide.html',
},
{
title: 'Telegram',
link: '/en-us/docs/dev/user_doc/guide/alert/telegram.html',
link: '/en-us/docs/3.0.0/user_doc/guide/alert/telegram.html',
},
{
title: 'Ding Talk',
link: '/en-us/docs/dev/user_doc/guide/alert/dingtalk.html',
link: '/en-us/docs/3.0.0/user_doc/guide/alert/dingtalk.html',
},
{
title: 'Enterprise Wechat',
link: '/en-us/docs/dev/user_doc/guide/alert/enterprise-wechat.html',
link: '/en-us/docs/3.0.0/user_doc/guide/alert/enterprise-wechat.html',
},
{
title: 'Enterprise Webexteams',
link: '/en-us/docs/dev/user_doc/guide/alert/enterprise-webexteams.html',
link: '/en-us/docs/3.0.0/user_doc/guide/alert/enterprise-webexteams.html',
},
],
},
{
title: 'Resource',
link: '/en-us/docs/dev/user_doc/guide/resource.html',
link: '/en-us/docs/3.0.0/user_doc/guide/resource.html',
},
{
title: 'Monitor',
link: '/en-us/docs/dev/user_doc/guide/monitor.html',
link: '/en-us/docs/3.0.0/user_doc/guide/monitor.html',
},
{
title: 'Security',
link: '/en-us/docs/dev/user_doc/guide/security.html',
link: '/en-us/docs/3.0.0/user_doc/guide/security.html',
},
{
title: 'How-To',
children: [
{
title: 'General Setting',
link: '/en-us/docs/3.0.0/user_doc/guide/howto/general-setting.html',
}
],
},
{
title: 'Open API',
link: '/en-us/docs/dev/user_doc/guide/open-api.html',
link: '/en-us/docs/3.0.0/user_doc/guide/open-api.html',
},
{
title: 'Flink',
link: '/en-us/docs/dev/user_doc/guide/flink-call.html',
link: '/en-us/docs/3.0.0/user_doc/guide/flink-call.html',
},
{
title: 'Upgrade',
link: '/en-us/docs/dev/user_doc/guide/upgrade.html',
link: '/en-us/docs/3.0.0/user_doc/guide/upgrade.html',
},
{
title: 'Expansion and Reduction',
link: '/en-us/docs/dev/user_doc/guide/expansion-reduction.html',
link: '/en-us/docs/3.0.0/user_doc/guide/expansion-reduction.html',
},
],
},
@@ -274,37 +287,27 @@ export default {
children: [
{
title: 'Architecture Design',
link: '/en-us/docs/dev/user_doc/architecture/design.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/design.html',
},
{
title: 'Metadata',
link: '/en-us/docs/dev/user_doc/architecture/metadata.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/metadata.html',
},
{
title: 'Configuration File',
link: '/en-us/docs/dev/user_doc/architecture/configuration.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/configuration.html',
},
{
title: 'Task Structure',
link: '/en-us/docs/dev/user_doc/architecture/task-structure.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/task-structure.html',
},
{
title: 'Load Balance',
link: '/en-us/docs/dev/user_doc/architecture/load-balance.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/load-balance.html',
},
{
title: 'Cache',
link: '/en-us/docs/dev/user_doc/architecture/cache.html',
},
],
},
{
title: 'Observability',
children: [
{
title: 'SkyWalking-Agent',
link: '/en-us/docs/dev/user_doc/guide/installation/skywalking-agent.html',
link: '/en-us/docs/3.0.0/user_doc/architecture/cache.html',
},
],
},
@@ -336,15 +339,15 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/about/introduction.html',
link: '/zh-cn/docs/3.0.0/user_doc/about/introduction.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/about/hardware.html',
link: '/zh-cn/docs/3.0.0/user_doc/about/hardware.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/about/glossary.html',
link: '/zh-cn/docs/3.0.0/user_doc/about/glossary.html',
},
],
},
@@ -353,11 +356,11 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/start/quick-start.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/start/quick-start.html',
},
{
title: 'Docker(Docker)',
link: '/zh-cn/docs/dev/user_doc/guide/start/docker.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/start/docker.html',
},
],
},
@ -366,19 +369,19 @@ export default {
children: [
{
title: '(Standalone)',
link: '/zh-cn/docs/dev/user_doc/guide/installation/standalone.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/installation/standalone.html',
},
{
title: '(Pseudo-Cluster)',
link: '/zh-cn/docs/dev/user_doc/guide/installation/pseudo-cluster.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/installation/pseudo-cluster.html',
},
{
title: '(Cluster)',
link: '/zh-cn/docs/dev/user_doc/guide/installation/cluster.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/installation/cluster.html',
},
{
title: 'Kubernetes(Kubernetes)',
link: '/zh-cn/docs/dev/user_doc/guide/installation/kubernetes.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/installation/kubernetes.html',
},
],
},
@ -387,26 +390,30 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/homepage.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/homepage.html',
},
{
title: '',
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/project/project-list.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/project-list.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/project/workflow-definition.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/workflow-definition.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/project/workflow-instance.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/workflow-instance.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/project/task-instance.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/task-instance.html',
},
{
title: '',
link: '/zh-cn/docs/3.0.0/user_doc/guide/project/task-definition.html',
},
]
},
@ -415,63 +422,63 @@ export default {
children: [
{
title: 'Shell',
link: '/zh-cn/docs/dev/user_doc/guide/task/shell.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/shell.html',
},
{
title: 'SubProcess',
link: '/zh-cn/docs/dev/user_doc/guide/task/sub-process.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/sub-process.html',
},
{
title: 'Dependent',
link: '/zh-cn/docs/dev/user_doc/guide/task/dependent.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/dependent.html',
},
{
title: 'Stored Procedure',
link: '/zh-cn/docs/dev/user_doc/guide/task/stored-procedure.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/stored-procedure.html',
},
{
title: 'SQL',
link: '/zh-cn/docs/dev/user_doc/guide/task/sql.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/sql.html',
},
{
title: 'Spark',
link: '/zh-cn/docs/dev/user_doc/guide/task/spark.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/spark.html',
},
{
title: 'MapReduce',
link: '/zh-cn/docs/dev/user_doc/guide/task/map-reduce.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/map-reduce.html',
},
{
title: 'Python',
link: '/zh-cn/docs/dev/user_doc/guide/task/python.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/python.html',
},
{
title: 'Flink',
link: '/zh-cn/docs/dev/user_doc/guide/task/flink.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/flink.html',
},
{
title: 'HTTP',
link: '/zh-cn/docs/dev/user_doc/guide/task/http.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/http.html',
},
{
title: 'DataX',
link: '/zh-cn/docs/dev/user_doc/guide/task/datax.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/datax.html',
},
{
title: 'Pigeon',
link: '/zh-cn/docs/dev/user_doc/guide/task/pigeon.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/pigeon.html',
},
{
title: 'Conditions',
link: '/zh-cn/docs/dev/user_doc/guide/task/conditions.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/conditions.html',
},
{
title: 'Switch',
link: '/zh-cn/docs/dev/user_doc/guide/task/switch.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/switch.html',
},
{
title: 'Amazon EMR',
link: '/zh-cn/docs/dev/user_doc/guide/task/emr.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/task/emr.html',
},
],
},
@ -480,23 +487,23 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/built-in.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/parameter/built-in.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/global.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/parameter/global.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/local.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/parameter/local.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/context.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/parameter/context.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/parameter/priority.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/parameter/priority.html',
},
],
},
@ -505,23 +512,23 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/datasource/introduction.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/datasource/introduction.html',
},
{
title: 'MySQL',
link: '/zh-cn/docs/dev/user_doc/guide/datasource/mysql.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/datasource/mysql.html',
},
{
title: 'PostgreSQL',
link: '/zh-cn/docs/dev/user_doc/guide/datasource/postgresql.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/datasource/postgresql.html',
},
{
title: 'HIVE',
link: '/zh-cn/docs/dev/user_doc/guide/datasource/hive.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/datasource/hive.html',
},
{
title: 'Spark',
link: '/zh-cn/docs/dev/user_doc/guide/datasource/spark.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/datasource/spark.html',
},
],
},
@ -530,53 +537,62 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/alert/alert_plugin_user_guide.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/alert/alert_plugin_user_guide.html',
},
{
title: 'Telegram',
link: '/zh-cn/docs/dev/user_doc/guide/alert/telegram.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/alert/telegram.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/alert/dingtalk.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/alert/dingtalk.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/alert/enterprise-wechat.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/alert/enterprise-wechat.html',
},
{
title: 'Webexteams',
link: '/zh-cn/docs/dev/user_doc/guide/alert/enterprise-webexteams.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/alert/enterprise-webexteams.html',
},
],
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/resource.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/resource.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/monitor.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/monitor.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/security.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/security.html',
},
{
title: '',
children: [
{
title: '',
link: '/zh-cn/docs/3.0.0/user_doc/guide/howto/general-setting.html',
}
],
},
{
title: 'API',
link: '/zh-cn/docs/dev/user_doc/guide/open-api.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/open-api.html',
},
{
title: 'Flink',
link: '/zh-cn/docs/dev/user_doc/guide/flink-call.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/flink-call.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/guide/upgrade.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/upgrade.html',
},
{
title: '/',
link: '/zh-cn/docs/dev/user_doc/guide/expansion-reduction.html',
link: '/zh-cn/docs/3.0.0/user_doc/guide/expansion-reduction.html',
},
],
},
@ -585,36 +601,27 @@ export default {
children: [
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/metadata.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/metadata.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/design.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/design.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/configuration.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/configuration.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/task-structure.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/task-structure.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/load-balance.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/load-balance.html',
},
{
title: '',
link: '/zh-cn/docs/dev/user_doc/architecture/cache.html',
},
],
},
{
title: '',
children: [
{
title: 'SkyWalking-Agent',
link: '/zh-cn/docs/dev/user_doc/guide/installation/skywalking-agent.html',
link: '/zh-cn/docs/3.0.0/user_doc/architecture/cache.html',
},
],
},

View File

@ -544,17 +544,6 @@ A: 1, edit nginx config file /etc/nginx/conf.d/escheduler.conf
---
## Q : Welcome to subscribe the DolphinScheduler development mailing list
A: In the process of using DolphinScheduler, if you have any questions or ideas, suggestions, you can participate in the DolphinScheduler community building through the Apache mailing list. Sending a subscription email is also very simple, the steps are as follows:
1, Send an email to dev-subscribe@dolphinscheduler.apache.org with your own email address, subject and content.
2, Receive confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@dolphinscheduler.apache.org (if not received, please confirm whether the email is automatically classified as spam, promotion email, subscription email, etc.) . Then reply directly to the email, or click on the link in the email to reply quickly, the subject and content are arbitrary.
3, Receive a welcome email. After completing the above steps, you will receive a welcome email with the subject WELCOME to dev@dolphinscheduler.apache.org, and you have successfully subscribed to the Apache DolphinScheduler mailing list.
---
## Q : Workflow Dependency
A: 1, It is currently judged according to natural days, at the end of last month: the judgment time is the workflow A start_time/scheduler_time between '2019-05-31 00:00:00' and '2019-05-31 23:59:59'. Last month: It is judged that there is an A instance completed every day from the 1st to the end of the month. Last week: There are completed A instances 7 days last week. The first two days: Judging yesterday and the day before yesterday, there must be a completed A instance for two days.
@ -712,6 +701,12 @@ AThe repair can be completed by executing the following SQL in the database:
update t_ds_version set version='2.0.1';
```
## Cannot find python-gateway-server in the distribution package
Since version 3.0.0-alpha, the Python gateway server is integrated into the API server, and the Python gateway service
starts when you start the API server. If you want to disable the Python gateway service, change the API server
configuration file `api-server/conf/application.yaml` and set the attribute `python-gateway.enabled: false`.
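For reference, a minimal sketch of the relevant fragment of `api-server/conf/application.yaml` (the surrounding structure of the file is assumed, not reproduced here):

```yaml
# Disable the Python gateway service that otherwise starts with the API server
python-gateway:
  enabled: false
```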
---
## We will collect more FAQ later

View File

@ -31,9 +31,9 @@ This article describes how to add a new master service or worker service to an e
mkdir -p /opt
cd /opt
# decompress
tar -zxvf apache-dolphinscheduler-1.3.8-bin.tar.gz -C /opt
tar -zxvf apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz -C /opt
cd /opt
mv apache-dolphinscheduler-1.3.8-bin dolphinscheduler
mv apache-dolphinscheduler-3.0.0-alpha-bin dolphinscheduler
```
```markdown

View File

@ -0,0 +1,22 @@
# General Setting
## Language
DolphinScheduler supports two built-in languages: `English` and `Chinese`. To switch languages, click the button labeled
`English` or `Chinese` on the top control bar.
All DolphinScheduler pages change language when you switch the selection.
## Theme
DolphinScheduler supports two built-in themes: `Dark` and `Light`. To change the theme, click the button labeled
`Dark` (or `Light`) on the top control bar, to the left of the [language](#language) button.
## Time Zone
DolphinScheduler supports time zone settings. The default time zone is that of the server running DolphinScheduler. To
switch, click the button to the right of the [language](#language) button, then click `Choose timeZone` and select the
time zone you want. All time-related components adjust their display to the time zone you select.
DolphinScheduler uses UTC for internal communication; the time zone you choose only changes how times are displayed.
When you choose a time zone, the UTC time is simply converted to that zone for display.
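The UTC-to-local conversion described above can be sketched in Python (illustrative only; DolphinScheduler's UI performs this conversion itself):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timestamp stored internally in UTC
utc_time = datetime(2022, 4, 18, 7, 53, 27, tzinfo=timezone.utc)

# Switching the display time zone only re-formats the same instant
shanghai = utc_time.astimezone(ZoneInfo("Asia/Shanghai"))
new_york = utc_time.astimezone(ZoneInfo("America/New_York"))

print(shanghai.isoformat())  # 2022-04-18T15:53:27+08:00
print(new_york.isoformat())  # 2022-04-18T03:53:27-04:00

# The underlying instant is unchanged by the display conversion
assert shanghai == utc_time == new_york
```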

View File

@ -12,16 +12,16 @@ If you are a new hand and want to experience DolphinScheduler functions, we reco
## Install DolphinScheduler
Please download the source code package `apache-dolphinscheduler-1.3.8-src.tar.gz`, download address: [download address](/en-us/download/download.html)
Please download the source code package `apache-dolphinscheduler-3.0.0-alpha-src.tar.gz`, download address: [download address](/en-us/download/download.html)
To publish a release named `dolphinscheduler`, execute the following commands:
```
$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
$ cd apache-dolphinscheduler-1.3.8-src/docker/kubernetes/dolphinscheduler
$ tar -zxvf apache-dolphinscheduler-3.0.0-alpha-src.tar.gz
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/kubernetes/dolphinscheduler
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update .
$ helm install dolphinscheduler . --set image.tag=1.3.8
$ helm install dolphinscheduler . --set image.tag=3.0.0-alpha
```
To publish the release named `dolphinscheduler` to the `test` namespace:
@ -193,7 +193,7 @@ kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test names
2. Create a new `Dockerfile` to add MySQL driver:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@ -236,7 +236,7 @@ externalDatabase:
2. Create a new `Dockerfile` to add MySQL driver:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@ -265,7 +265,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .
2. Create a new `Dockerfile` to add Oracle driver:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
```
@ -288,7 +288,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .
1. Create a new `Dockerfile` to install pip:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY requirements.txt /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends python-pip && \
@ -321,7 +321,7 @@ docker build -t apache/dolphinscheduler:pip .
1. Create a new `Dockerfile` to install Python 3:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*

View File

@ -193,7 +193,9 @@ sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server
```
> **_Note:_**: Please refer to the section of "System Architecture Design" for service usage
> **_Note:_** Please refer to the section "System Architecture Design" for service usage. The Python gateway service is
> started along with the api-server; if you do not want to start the Python gateway service, disable it by setting
> `python-gateway.enabled: false` in the api-server configuration file `api-server/conf/application.yaml`
[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
[zookeeper]: https://zookeeper.apache.org/releases.html

View File

@ -1,74 +0,0 @@
SkyWalking Agent Deployment
=============================
The `dolphinscheduler-skywalking` module provides [SkyWalking](https://skywalking.apache.org/) monitor agent for the DolphinScheduler project.
This document describes how to enable SkyWalking version 8.4+ support with this module (recommend using SkyWalking 8.5.0).
## Installation
The following configuration is used to enable the SkyWalking agent.
### Through Environment Variable Configuration (for Docker Compose)
Modify SkyWalking environment variables in `docker/docker-swarm/config.env.sh`:
```
SKYWALKING_ENABLE=true
SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
SW_GRPC_LOG_SERVER_HOST=127.0.0.1
SW_GRPC_LOG_SERVER_PORT=11800
```
And run:
```shell
$ docker-compose up -d
```
### Through Environment Variable Configuration (for Docker)
```shell
$ docker run -d --name dolphinscheduler \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-e SKYWALKING_ENABLE="true" \
-e SW_AGENT_COLLECTOR_BACKEND_SERVICES="your.skywalking-oap-server.com:11800" \
-e SW_GRPC_LOG_SERVER_HOST="your.skywalking-log-reporter.com" \
-e SW_GRPC_LOG_SERVER_PORT="11800" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 all
```
### Through install_config.conf Configuration (for DolphinScheduler install.sh)
Add the following configurations to `${workDir}/conf/config/install_config.conf`.
```properties
# SkyWalking config
# note: enable SkyWalking tracking plugin
enableSkywalking="true"
# note: configure SkyWalking backend service address
skywalkingServers="your.skywalking-oap-server.com:11800"
# note: configure SkyWalking log reporter host
skywalkingLogReporterHost="your.skywalking-log-reporter.com"
# note: configure SkyWalking log reporter port
skywalkingLogReporterPort="11800"
```
## Usage
### Import Dashboard
#### Import DolphinScheduler Dashboard to SkyWalking Server
Copy the `${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml` file into `${skywalking-oap-server.home}/config/ui-initialized-templates/` directory, and restart SkyWalking oap-server.
#### View DolphinScheduler Dashboard
If you have opened the SkyWalking dashboard with a browser before, you need to clear the browser cache.
![img1](/img/skywalking/import-dashboard-1.jpg)

View File

@ -39,4 +39,8 @@ sh ./bin/dolphinscheduler-daemon.sh start standalone-server
sh ./bin/dolphinscheduler-daemon.sh stop standalone-server
```
> Note: The Python gateway service is started along with the api-server; if you do not want to start the Python gateway
> service, disable it by setting `python-gateway.enabled: false` in the api-server configuration file
> `api-server/conf/application.yaml`
[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html

View File

@ -4,19 +4,19 @@
- Service management is mainly to monitor and display the health status and basic information of each service in the system.
## Monitor Master Server
### Master Server
- Mainly related to master information.
![master](/img/new_ui/dev/monitor/master.png)
## Monitor Worker Server
### Worker Server
- Mainly related to worker information.
![worker](/img/new_ui/dev/monitor/worker.png)
## Monitor DB
### Database
- Mainly the health status of the DB.
@ -24,9 +24,18 @@
## Statistics Management
### Statistics
![statistics](/img/new_ui/dev/monitor/statistics.png)
- Number of commands waiting to be executed: counts the data in the `t_ds_command` table.
- Number of failed commands: counts the data in the `t_ds_error_command` table.
- Number of tasks waiting to run: counts the `task_queue` data in ZooKeeper.
- Number of tasks waiting to be killed: counts the `task_kill` data in ZooKeeper.
### Audit Log
The audit log records who accesses the system, which operations are performed, and when, which strengthens the security
and maintainability of the system.
![audit-log](/img/new_ui/dev/monitor/audit-log.jpg)

View File

@ -0,0 +1,13 @@
# Task Definition
Task definition allows you to modify or operate tasks at the task level rather than editing them inside a workflow
definition. We already have a workflow-level task editor in [workflow definition](workflow-definition.md): click a
specific workflow and then edit its task definitions. It is frustrating to edit a task definition when you do not
remember which workflow it belongs to, so we added a `Task Definition` view under the `Task` menu.
![task-definition](/img/new_ui/dev/project/task-definition.jpg)
In this view, you can create, query, update, and delete task definitions by clicking the related button in the
`operation` column. Best of all, you can query tasks by name with wildcards, which is useful when you only remember the
task name but forget which workflow it belongs to. Queries by task name combined with `Task Type` or `Workflow Name`
are also supported.

View File

@ -39,13 +39,13 @@ Please download the source package apache-dolphinscheduler-x.x.x-src.tar.gz from
> For Windows Docker Desktop user, open **Windows PowerShell**
```
$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
$ cd apache-dolphinscheduler-1.3.8-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
$ docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest
$ tar -zxvf apache-dolphinscheduler-3.0.0-alpha-src.tar.gz
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
$ docker tag apache/dolphinscheduler:3.0.0-alpha apache/dolphinscheduler:latest
$ docker-compose up -d
```
> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm`
> PowerShell should use `cd apache-dolphinscheduler-3.0.0-alpha-src\docker\docker-swarm`
**PostgreSQL** (user `root`, password `root`, database `dolphinscheduler`) and **ZooKeeper** services will be started by default
@ -78,7 +78,7 @@ This method requires the installation of [docker](https://docs.docker.com/engine
We have uploaded the DolphinScheduler images for users to the docker repository. Instead of building the image locally, users can pull the image from the docker repository by running the following command.
```
docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
```
#### 5. Run a DolphinScheduler instance
@ -89,7 +89,7 @@ $ docker run -d --name dolphinscheduler \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 all
apache/dolphinscheduler:3.0.0-alpha all
```
Note: The database user test and password test need to be replaced with the actual PostgreSQL user and password. 192.168.x.x needs to be replaced with the host IP of PostgreSQL and ZooKeeper.
@ -118,7 +118,7 @@ $ docker run -d --name dolphinscheduler-master \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
apache/dolphinscheduler:1.3.8 master-server
apache/dolphinscheduler:3.0.0-alpha master-server
```
* Start a **worker server**, as follows:
@ -128,7 +128,7 @@ $ docker run -d --name dolphinscheduler-worker \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
apache/dolphinscheduler:1.3.8 worker-server
apache/dolphinscheduler:3.0.0-alpha worker-server
```
* Start an **api server**, as follows:
@ -139,7 +139,7 @@ $ docker run -d --name dolphinscheduler-api \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 api-server
apache/dolphinscheduler:3.0.0-alpha api-server
```
* Start an **alert server**, as follows:
@ -148,7 +148,7 @@ apache/dolphinscheduler:1.3.8 api-server
$ docker run -d --name dolphinscheduler-alert \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
apache/dolphinscheduler:1.3.8 alert-server
apache/dolphinscheduler:3.0.0-alpha alert-server
```
**NOTE**: When you run some of the services in dolphinscheduler, you must specify these environment variables: `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD`, `ZOOKEEPER_QUORUM`.
@ -312,14 +312,14 @@ If you don't understand `. /docker/build/hooks/build` `. /docker/build/hooks/bui
#### Build from binary packages (Maven 3.3+ & JDK 1.8+ not required)
Please download the binary package apache-dolphinscheduler-1.3.8-bin.tar.gz from: [download](/zh-cn/download/download.html). Then put apache-dolphinscheduler-1.3.8-bin.tar.gz into the `apache-dolphinscheduler-1.3.8-src/docker/build` directory and execute it in Terminal or PowerShell:
Please download the binary package apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz from: [download](/zh-cn/download/download.html). Then put apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz into the `apache-dolphinscheduler-3.0.0-alpha-src/docker/build` directory and execute it in Terminal or PowerShell:
```
$ cd apache-dolphinscheduler-1.3.8-src/docker/build
$ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/build
$ docker build --build-arg VERSION=3.0.0-alpha -t apache/dolphinscheduler:3.0.0-alpha .
```
> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src/docker/build`
> PowerShell should use `cd apache-dolphinscheduler-3.0.0-alpha-src/docker/build`
#### Building images for multi-platform architectures
@ -374,7 +374,7 @@ done
2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@ -420,7 +420,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@ -449,7 +449,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .
2. Create a new `Dockerfile` to add the Oracle driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
```
@ -472,7 +472,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .
1. Create a new `Dockerfile` for installing pip:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY requirements.txt /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends python-pip && \
@ -506,7 +506,7 @@ docker build -t apache/dolphinscheduler:pip .
1. Create a new `Dockerfile` for installing Python 3:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*

View File

@ -65,4 +65,4 @@ Configure the required content according to the parameter descriptions above.
## Notice
JAVA and Scala only used for identification, there is no difference. If use Python to develop Flink, there is no class of the main function and the rest is the same.
JAVA and Scala are only used for identification; there is no difference. If you use Python to develop a Spark application, there is no main-function class, and the rest is the same.

View File

@ -6,7 +6,7 @@
`sh ./script/stop-all.sh`
## Download the Newest Version Installation Package
## Download the Latest Version Installation Package
- [download](/en-us/download/download.html) the latest version of the installation packages.
- The following upgrade operations need to be performed in the new version's directory.

View File

@ -4,6 +4,10 @@
#### Setup instructions are available for each stable version of Apache DolphinScheduler below:
### Versions: 3.0.0-alpha
#### Links [3.0.0-alpha Document](../3.0.0/user_doc/about/introduction.md)
### Versions: 2.0.5
#### Links [2.0.5 Document](../2.0.5/user_doc/guide/quick-start.md)

View File

@ -523,18 +523,6 @@ A1edit /etc/nginx/conf.d/escheduler.conf
---
## Q: Welcome to subscribe to the DolphinScheduler development mailing list
A: In the process of using DolphinScheduler, if you have any questions, ideas, or suggestions, you can participate in building the DolphinScheduler community through the Apache mailing list.
Sending a subscription email is also very simple; the steps are as follows:
1. Send an email from your own address to dev-subscribe@dolphinscheduler.apache.org; the subject and content are arbitrary.
2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@dolphinscheduler.apache.org (if not received, check whether it was automatically classified as spam, promotion, or subscription mail). Reply directly to that email, or click the link in it to reply quickly; the subject and content are arbitrary.
3. Receive the welcome email. After completing the steps above, you will receive a welcome email with the subject WELCOME to dev@dolphinscheduler.apache.org, and you have successfully subscribed to the Apache DolphinScheduler mailing list.
---
## Q: Workflow dependency
A: 1. Dependencies are currently judged by natural days. End of last month: checks whether workflow A's start_time/scheduler_time is between '2019-05-31 00:00:00' and '2019-05-31 23:59:59'. Last month: checks that a completed A instance exists for every day from the 1st to the end of last month. Last week: completed A instances must exist on all 7 days of last week. The first two days: checks yesterday and the day before yesterday; both days must have a completed A instance.

View File

@ -29,9 +29,9 @@
mkdir -p /opt
cd /opt
# decompress
tar -zxvf apache-dolphinscheduler-1.3.8-bin.tar.gz -C /opt
tar -zxvf apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz -C /opt
cd /opt
mv apache-dolphinscheduler-1.3.8-bin dolphinscheduler
mv apache-dolphinscheduler-3.0.0-alpha-bin dolphinscheduler
```
```markdown

View File

@ -0,0 +1,19 @@
# General Setting
## Language
DolphinScheduler supports two built-in languages: `English` and `Chinese`. You can click the button labeled `English` or `Chinese` on the top control bar to switch languages.
All DolphinScheduler pages change language when you switch from one to the other.
## Theme
DolphinScheduler supports two built-in themes: `Dark` and `Light`. To change the theme, simply click the button labeled `Dark` (or `Light`)
on the top control bar, to the left of the [language](#language) button.
## Time Zone
DolphinScheduler supports time zone settings. The default time zone is that of the server running DolphinScheduler. To switch time zones, click the time zone button to the right of the [language](#language) button,
then click `Choose timeZone` to select one. After switching, all time-related components change accordingly.
DolphinScheduler uses UTC internally and in the main task flow; times shown in the UI are formatted from UTC into the selected time zone,
so switching time zones only changes the formatted display from one zone to another.

View File

@ -13,16 +13,16 @@ Kubernetes部署目的是在Kubernetes集群中部署 DolphinScheduler 服务,
## Install dolphinscheduler
Please download the source package apache-dolphinscheduler-1.3.8-src.tar.gz, download address: [download](/zh-cn/download/download.html)
Please download the source package apache-dolphinscheduler-3.0.0-alpha-src.tar.gz, download address: [download](/zh-cn/download/download.html)
To publish a release named `dolphinscheduler`, run the following commands:
```
$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
$ cd apache-dolphinscheduler-1.3.8-src/docker/kubernetes/dolphinscheduler
$ tar -zxvf apache-dolphinscheduler-3.0.0-alpha-src.tar.gz
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/kubernetes/dolphinscheduler
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm dependency update .
$ helm install dolphinscheduler . --set image.tag=1.3.8
$ helm install dolphinscheduler . --set image.tag=3.0.0-alpha
```
To publish the release named `dolphinscheduler` to the `test` namespace:
@ -194,7 +194,7 @@ kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test names
2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@ -237,7 +237,7 @@ externalDatabase:
2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@@ -266,7 +266,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .

2. Create a new `Dockerfile` to add the Oracle driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
```
@@ -289,7 +289,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .

1. Create a new `Dockerfile` to install pip:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY requirements.txt /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends python-pip && \
@@ -322,7 +322,7 @@ docker build -t apache/dolphinscheduler:pip .

1. Create a new `Dockerfile` to install Python 3:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*


@@ -1,74 +0,0 @@
SkyWalking Agent Deployment
=============================

The dolphinscheduler-skywalking module provides a [Skywalking](https://skywalking.apache.org/) monitoring agent for the DolphinScheduler project.

This document describes how to integrate with SkyWalking 8.4+ (8.5.0 recommended) through this module.

# Installation

The following configuration enables the Skywalking agent.

### Via environment variables (when deploying with Docker Compose)

Modify the SKYWALKING environment variables in the `docker/docker-swarm/config.env.sh` file:
```
SKYWALKING_ENABLE=true
SW_AGENT_COLLECTOR_BACKEND_SERVICES=127.0.0.1:11800
SW_GRPC_LOG_SERVER_HOST=127.0.0.1
SW_GRPC_LOG_SERVER_PORT=11800
```
and run
```shell
$ docker-compose up -d
```
### Via environment variables (when deploying with Docker)
```shell
$ docker run -d --name dolphinscheduler \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-e SKYWALKING_ENABLE="true" \
-e SW_AGENT_COLLECTOR_BACKEND_SERVICES="your.skywalking-oap-server.com:11800" \
-e SW_GRPC_LOG_SERVER_HOST="your.skywalking-log-reporter.com" \
-e SW_GRPC_LOG_SERVER_PORT="11800" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 all
```
### Via install_config.conf (when deploying with DolphinScheduler install.sh)

Add the following configuration to `${workDir}/conf/config/install_config.conf`.
```properties
# skywalking config
# note: enable skywalking tracking plugin
enableSkywalking="true"
# note: configure skywalking backend service address
skywalkingServers="your.skywalking-oap-server.com:11800"
# note: configure skywalking log reporter host
skywalkingLogReporterHost="your.skywalking-log-reporter.com"
# note: configure skywalking log reporter port
skywalkingLogReporterPort="11800"
```
# Usage

### Import dashboards

#### Import dashboards into the Skywalking server

Copy the `${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml` file into the `${skywalking-oap-server.home}/config/ui-initialized-templates/` directory, and restart the Skywalking oap-server.

#### View the dolphinscheduler dashboard

If you have opened Skywalking in your browser before, you need to clear the browser cache.
![img1](/img/skywalking/import-dashboard-1.jpg)


@@ -4,19 +4,19 @@

- Service management mainly monitors and displays the health status and basic information of each service in the system.

### Master monitoring

### Master

- Mainly master-related information.
![master](/img/new_ui/dev/monitor/master.png)
### Worker monitoring

### Worker

- Mainly worker-related information.
![worker](/img/new_ui/dev/monitor/worker.png)
### DB monitoring

### Database

- Mainly the health status of the DB

@@ -24,9 +24,17 @@

## Statistics management
### Statistics
![statistics](/img/new_ui/dev/monitor/statistics.png)
- Commands to be executed: counts the data in the t_ds_command table
- Failed commands: counts the data in the t_ds_error_command table
- Tasks to run: counts the task_queue data in Zookeeper
- Tasks to be killed: counts the task_kill data in Zookeeper
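The counters above are plain row counts over the listed tables. A minimal sketch of the idea, using an in-memory SQLite database as a stand-in for DolphinScheduler's real metadata store (only the table name `t_ds_command` comes from the text; the schema and data here are illustrative):

```python
import sqlite3

# Illustrative stand-in for the scheduler's metadata database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_ds_command (id INTEGER PRIMARY KEY, command_type INTEGER)")
conn.executemany("INSERT INTO t_ds_command (command_type) VALUES (?)", [(0,), (1,), (0,)])

# "Commands to be executed" is simply the row count of t_ds_command.
pending = conn.execute("SELECT count(*) FROM t_ds_command").fetchone()[0]
print(pending)  # 3
```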
### Audit log

Audit log records provide information about who accessed the system and which operations were performed during a given period; they are useful for maintaining security.
![audit-log](/img/new_ui/dev/monitor/audit-log.jpg)


@@ -0,0 +1,9 @@
# Task definition

Task definition allows you to operate on and modify tasks at the task level rather than inside a workflow. Before this, we only had the workflow-level task editor: in [workflow definition](workflow-definition.md) you can click a specific workflow and then edit its task definitions. This is frustrating when you want to edit a specific task definition but cannot remember which workflow it belongs to, so we decided to add a `Task definition` view under the `Task` menu.

![task-definition](/img/new_ui/dev/project/task-definition.jpg)

In this view you can create, query, update, and delete task definitions by clicking the relevant buttons in the `Operation` column. Best of all, you can query all tasks with a wildcard, which is very useful when you only remember a task's name but forget which workflow it belongs to. Queries by task name combined with `Task type` or `Workflow name` are also supported.


@@ -30,7 +30,7 @@

#### 1. Download the source package

Please download the source package apache-dolphinscheduler-1.3.8-src.tar.gz from: [Download](/zh-cn/download/download.html)

Please download the source package apache-dolphinscheduler-3.0.0-alpha-src.tar.gz from: [Download](/zh-cn/download/download.html)

#### 2. Pull the image and start the service

@@ -39,14 +39,14 @@

> For Windows Docker Desktop users, open **Windows PowerShell**
```
$ tar -zxvf apache-dolphinscheduler-1.3.8-src.tar.gz
$ cd apache-dolphinscheduler-1.3.8-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
$ docker tag apache/dolphinscheduler:1.3.8 apache/dolphinscheduler:latest
$ tar -zxvf apache-dolphinscheduler-3.0.0-alpha-src.tar.gz
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
$ docker tag apache/dolphinscheduler:3.0.0-alpha apache/dolphinscheduler:latest
$ docker-compose up -d
```
> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src\docker\docker-swarm`

> PowerShell should use `cd apache-dolphinscheduler-3.0.0-alpha-src\docker\docker-swarm`

The **PostgreSQL** (user `root`, password `root`, database `dolphinscheduler`) and **ZooKeeper** services will start by default

@@ -79,7 +79,7 @@ $ docker-compose up -d

We have uploaded the user-facing DolphinScheduler image to the docker registry. Instead of building the image locally, pull it from the docker registry with:
```
docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
```
#### 5. Run a DolphinScheduler instance

@@ -90,7 +90,7 @@ $ docker run -d --name dolphinscheduler \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 all
apache/dolphinscheduler:3.0.0-alpha all
```
Note: the database user test and password test need to be replaced with an actual PostgreSQL user and password, and 192.168.x.x with the host IP of PostgreSQL and ZooKeeper

@@ -119,7 +119,7 @@ $ docker run -d --name dolphinscheduler-master \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
apache/dolphinscheduler:1.3.8 master-server
apache/dolphinscheduler:3.0.0-alpha master-server
```
* Start a **worker server**, as follows:

@@ -129,7 +129,7 @@ $ docker run -d --name dolphinscheduler-worker \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
apache/dolphinscheduler:1.3.8 worker-server
apache/dolphinscheduler:3.0.0-alpha worker-server
```
* Start an **api server**, as follows:

@@ -140,7 +140,7 @@ $ docker run -d --name dolphinscheduler-api \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
-e ZOOKEEPER_QUORUM="192.168.x.x:2181" \
-p 12345:12345 \
apache/dolphinscheduler:1.3.8 api-server
apache/dolphinscheduler:3.0.0-alpha api-server
```
* Start an **alert server**, as follows:

@@ -149,7 +149,7 @@ apache/dolphinscheduler:1.3.8 api-server
$ docker run -d --name dolphinscheduler-alert \
-e DATABASE_HOST="192.168.x.x" -e DATABASE_PORT="5432" -e DATABASE_DATABASE="dolphinscheduler" \
-e DATABASE_USERNAME="test" -e DATABASE_PASSWORD="test" \
apache/dolphinscheduler:1.3.8 alert-server
apache/dolphinscheduler:3.0.0-alpha alert-server
```
**Note**: when you run only some of the dolphinscheduler services, you must specify these environment variables: `DATABASE_HOST`, `DATABASE_PORT`, `DATABASE_DATABASE`, `DATABASE_USERNAME`, `DATABASE_PASSWORD`, `ZOOKEEPER_QUORUM`.

@@ -313,14 +313,14 @@ C:\dolphinscheduler-src>.\docker\build\hooks\build.bat

#### Build from the binary package (Maven 3.3+ & JDK 1.8+ not required)

Please download the binary package apache-dolphinscheduler-1.3.8-bin.tar.gz from: [Download](/zh-cn/download/download.html). Then put apache-dolphinscheduler-1.3.8-bin.tar.gz into the `apache-dolphinscheduler-1.3.8-src/docker/build` directory and run the following in a Terminal or PowerShell:

Please download the binary package apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz from: [Download](/zh-cn/download/download.html). Then put apache-dolphinscheduler-3.0.0-alpha-bin.tar.gz into the `apache-dolphinscheduler-3.0.0-alpha-src/docker/build` directory and run the following in a Terminal or PowerShell:
```
$ cd apache-dolphinscheduler-1.3.8-src/docker/build
$ docker build --build-arg VERSION=1.3.8 -t apache/dolphinscheduler:1.3.8 .
$ cd apache-dolphinscheduler-3.0.0-alpha-src/docker/build
$ docker build --build-arg VERSION=3.0.0-alpha -t apache/dolphinscheduler:3.0.0-alpha .
```
> PowerShell should use `cd apache-dolphinscheduler-1.3.8-src/docker/build`

> PowerShell should use `cd apache-dolphinscheduler-3.0.0-alpha-src/docker/build`

#### Build a multi-platform image

@@ -375,7 +375,7 @@ done

2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@@ -421,7 +421,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8

2. Create a new `Dockerfile` to add the MySQL driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
```
@@ -450,7 +450,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .

2. Create a new `Dockerfile` to add the Oracle driver package:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY ojdbc8-19.9.0.0.jar /opt/dolphinscheduler/lib
```
@@ -473,7 +473,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .

1. Create a new `Dockerfile` to install pip:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
COPY requirements.txt /tmp
RUN apt-get update && \
apt-get install -y --no-install-recommends python-pip && \
@@ -506,7 +506,7 @@ docker build -t apache/dolphinscheduler:pip .

1. Create a new `Dockerfile` to install Python 3:
```
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:1.3.8
FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:3.0.0-alpha
RUN apt-get update && \
apt-get install -y --no-install-recommends python3 && \
rm -rf /var/lib/apt/lists/*


@@ -3,6 +3,10 @@

# Previous versions:

#### Below are the setup instructions for each stable release of Apache DolphinScheduler.

### Version 3.0.0-alpha

#### Link: [3.0.0-alpha documentation](../3.0.0/user_doc/about/introduction.md)

### Version 2.0.5

#### Link: [2.0.5 documentation](../2.0.5/user_doc/guide/quick-start.md)


@@ -32,16 +32,6 @@ root_dir: Path = Path(__file__).parent
img_dir: Path = root_dir.joinpath("img")
doc_dir: Path = root_dir.joinpath("docs")
expect_img_types: Set = {
"jpg",
"png",
}
def build_pattern() -> re.Pattern:
"""Build current document image regexp pattern."""
return re.compile(f"(/img.*?\\.({'|'.join(expect_img_types)}))")
def get_files_recurse(path: Path) -> Set:
"""Get all files recursively from given :param:`path`."""
@@ -68,14 +58,15 @@ def get_paths_rel_path(paths: Set[Path], rel: Path) -> Set:
return {f"/{path.relative_to(rel)}" for path in paths}
def get_docs_img_path(paths: Set[Path], pattern: re.Pattern) -> Set:
def get_docs_img_path(paths: Set[Path]) -> Set:
"""Get all img syntax from given :param:`paths`."""
res = set()
pattern = re.compile(r"/img[\w./-]*")
for path in paths:
content = path.read_text()
find = pattern.findall(content)
if find:
res |= {item[0] for item in find}
res |= {item for item in find}
return res
@@ -102,16 +93,6 @@ def diff_two_set(first: Set, second: Set) -> Tuple[set, set]:
return first.difference(second), second.difference(first)
def check_diff_img_type() -> Tuple[set, set]:
"""Check images difference type.
:return: Tuple[(actual - expect), (expect - actual)]
"""
img = get_files_recurse(img_dir)
img_suffix = get_paths_uniq_suffix(img)
return diff_two_set(img_suffix, expect_img_types)
def check_diff_img() -> Tuple[set, set]:
"""Check images difference files.
@@ -120,20 +101,12 @@ def check_diff_img() -> Tuple[set, set]:
img = get_files_recurse(img_dir)
docs = get_files_recurse(doc_dir)
img_rel_path = get_paths_rel_path(img, root_dir)
pat = build_pattern()
docs_rel_path = get_docs_img_path(docs, pat)
docs_rel_path = get_docs_img_path(docs)
return diff_two_set(docs_rel_path, img_rel_path)
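The simplified regexp introduced above (`/img[\w./-]*`) can be exercised directly on a typical image reference from these docs:

```python
import re

# The simplified pattern from get_docs_img_path above.
pattern = re.compile(r"/img[\w./-]*")

content = "![master](/img/new_ui/dev/monitor/master.png)"
print(pattern.findall(content))  # ['/img/new_ui/dev/monitor/master.png']
```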
def check() -> None:
"""Runner for `check` sub command."""
img_type_act, img_type_exp = check_diff_img_type()
assert not img_type_act and not img_type_exp, (
f"Images type assert failed: \n"
f"* difference actual types to expect is: {img_type_act if img_type_act else 'None'}\n"
f"* difference expect types to actual is: {img_type_exp if img_type_exp else 'None'}\n"
)
img_docs, img_img = check_diff_img()
assert not img_docs and not img_img, (
f"Images assert failed: \n"


@@ -18,13 +18,11 @@
~ under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-api</artifactId>


@@ -25,11 +25,13 @@ public enum ShowType {
* 1 TEXT;
* 2 attachment;
* 3 TABLE+attachment;
* 4 MARKDOWN;
*/
TABLE(0, "table"),
TEXT(1, "text"),
ATTACHMENT(2, "attachment"),
TABLE_ATTACHMENT(3, "table attachment");
TABLE_ATTACHMENT(3, "table attachment"),
MARKDOWN(4, "markdown");
private final int code;
private final String descp;
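After this change the enum carries five code-to-description pairs. For illustration only, the mapping mirrored in Python (codes and strings taken from the enum constants above):

```python
# Mirror of the ShowType enum after MARKDOWN is added.
SHOW_TYPES = {
    0: "table",
    1: "text",
    2: "attachment",
    3: "table attachment",
    4: "markdown",
}
print(SHOW_TYPES[4])  # markdown
```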


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-dingtalk</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-email</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-feishu</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-http</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-pagerduty</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-script</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-slack</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-telegram</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-webexteams</artifactId>


@@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert-plugins</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-wechat</artifactId>


@@ -56,14 +56,14 @@ public final class WeChatAlertChannelFactory implements AlertChannelFactory {
.build();
InputParam usersParam = InputParam.newBuilder(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_USERS, WeChatAlertParamsConstants.ENTERPRISE_WE_CHAT_USERS)
.setPlaceholder("please input users ")
.setPlaceholder("use `|` to separate userIds and `@all` to everyone ")
.addValidate(Validate.newBuilder()
.setRequired(true)
.setRequired(false)
.build())
.build();
InputParam agentIdParam = InputParam.newBuilder(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_AGENT_ID, WeChatAlertParamsConstants.ENTERPRISE_WE_CHAT_AGENT_ID)
.setPlaceholder("please input agent id ")
.setPlaceholder("please input agent id or chat id ")
.addValidate(Validate.newBuilder()
.setRequired(true)
.build())
@@ -77,9 +77,9 @@ public final class WeChatAlertChannelFactory implements AlertChannelFactory {
.build();
RadioParam showType = RadioParam.newBuilder(AlertConstants.NAME_SHOW_TYPE, AlertConstants.SHOW_TYPE)
.addParamsOptions(new ParamsOptions(ShowType.TABLE.getDescp(), ShowType.TABLE.getDescp(), false))
.addParamsOptions(new ParamsOptions(ShowType.MARKDOWN.getDescp(), ShowType.MARKDOWN.getDescp(), false))
.addParamsOptions(new ParamsOptions(ShowType.TEXT.getDescp(), ShowType.TEXT.getDescp(), false))
.setValue(ShowType.TABLE.getDescp())
.setValue(ShowType.MARKDOWN.getDescp())
.addValidate(Validate.newBuilder().setRequired(true).build())
.build();


@@ -24,9 +24,8 @@ public final class WeChatAlertParamsConstants {
static final String NAME_ENTERPRISE_WE_CHAT_SECRET = "secret";
static final String ENTERPRISE_WE_CHAT_TEAM_SEND_MSG = "$t('teamSendMsg')";
static final String NAME_ENTERPRISE_WE_CHAT_TEAM_SEND_MSG = "teamSendMsg";
static final String ENTERPRISE_WE_CHAT_AGENT_ID = "$t('agentId')";
static final String NAME_ENTERPRISE_WE_CHAT_AGENT_ID = "agentId";
static final String NAME_ENTERPRISE_WE_CHAT_CHAT_ID = "chatId";
static final String ENTERPRISE_WE_CHAT_AGENT_ID = "$t('agentId/chatId')";
static final String NAME_ENTERPRISE_WE_CHAT_AGENT_ID = "agentId/chatId";
static final String ENTERPRISE_WE_CHAT_USERS = "$t('users')";
static final String NAME_ENTERPRISE_WE_CHAT_USERS = "users";


@@ -17,15 +17,10 @@
package org.apache.dolphinscheduler.plugin.alert.wechat;
import static java.util.Objects.requireNonNull;
import static org.apache.dolphinscheduler.plugin.alert.wechat.WeChatAlertConstants.*;
import org.apache.dolphinscheduler.alert.api.AlertConstants;
import org.apache.dolphinscheduler.alert.api.AlertResult;
import org.apache.dolphinscheduler.alert.api.ShowType;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
@@ -34,18 +29,20 @@ import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.util.Objects.requireNonNull;
import static org.apache.dolphinscheduler.plugin.alert.wechat.WeChatAlertConstants.*;
public final class WeChatSender {
private static final Logger logger = LoggerFactory.getLogger(WeChatSender.class);
@@ -57,8 +54,7 @@ public final class WeChatSender {
private static final String CORP_ID_REGEX = "{corpId}";
private static final String SECRET_REGEX = "{secret}";
private static final String TOKEN_REGEX = "{token}";
private final String weChatAgentId;
private final String weChatChatId;
private final String weChatAgentIdChatId;
private final String weChatUsers;
private final String weChatTokenUrlReplace;
private final String weChatToken;
@@ -66,8 +62,7 @@ public final class WeChatSender {
private final String showType;
WeChatSender(Map<String, String> config) {
weChatAgentId = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_AGENT_ID);
weChatChatId = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_CHAT_ID);
weChatAgentIdChatId = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_AGENT_ID);
weChatUsers = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_USERS);
String weChatCorpId = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_CORP_ID);
String weChatSecret = config.get(WeChatAlertParamsConstants.NAME_ENTERPRISE_WE_CHAT_SECRET);
@@ -76,8 +71,8 @@ public final class WeChatSender {
showType = config.get(AlertConstants.NAME_SHOW_TYPE);
requireNonNull(showType, AlertConstants.NAME_SHOW_TYPE + MUST_NOT_NULL);
weChatTokenUrlReplace = weChatTokenUrl
.replace(CORP_ID_REGEX, weChatCorpId)
.replace(SECRET_REGEX, weChatSecret);
.replace(CORP_ID_REGEX, weChatCorpId)
.replace(SECRET_REGEX, weChatSecret);
weChatToken = getToken();
}
@@ -100,42 +95,10 @@ public final class WeChatSender {
}
}
/**
* convert table to markdown style
*
* @param title the title
* @param content the content
* @return markdown table content
*/
private static String markdownTable(String title, String content) {
List<LinkedHashMap> mapItemsList = JSONUtils.toList(content, LinkedHashMap.class);
if (null == mapItemsList || mapItemsList.isEmpty()) {
logger.error("itemsList is null");
throw new RuntimeException("itemsList is null");
}
StringBuilder contents = new StringBuilder(200);
for (LinkedHashMap mapItems : mapItemsList) {
Set<Entry<String, Object>> entries = mapItems.entrySet();
Iterator<Entry<String, Object>> iterator = entries.iterator();
StringBuilder t = new StringBuilder(String.format("`%s`%s", title, WeChatAlertConstants.MARKDOWN_ENTER));
while (iterator.hasNext()) {
Map.Entry<String, Object> entry = iterator.next();
t.append(WeChatAlertConstants.MARKDOWN_QUOTE);
t.append(entry.getKey()).append(":").append(entry.getValue());
t.append(WeChatAlertConstants.MARKDOWN_ENTER);
}
contents.append(t);
}
return contents.toString();
}
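The removed `markdownTable` helper above turned a JSON array of rows into WeChat-markdown quote lines. For reference, a rough Python equivalent of that removed logic (illustrative only; `> ` and `\n` stand in for `MARKDOWN_QUOTE` and `MARKDOWN_ENTER`):

```python
import json

def markdown_table(title: str, content: str) -> str:
    # content is a JSON array of objects, one object per table row.
    rows = json.loads(content)
    if not rows:
        raise RuntimeError("itemsList is null")
    out = []
    for row in rows:
        # As in the removed Java code, the title line is repeated per row.
        lines = [f"`{title}`"]
        lines += [f"> {key}:{value}" for key, value in row.items()]
        out.append("\n".join(lines) + "\n")
    return "".join(out)

print(markdown_table("alert", '[{"status": "failed", "task": "t1"}]'))
```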
/**
* convert text to markdown style
*
* @param title the title
* @param title the title
* @param content the content
* @return markdown text
*/
@@ -242,17 +205,17 @@ public final class WeChatSender {
return alertResult;
}
String enterpriseWeChatPushUrlReplace = "";
Map<String,String> contentMap=new HashMap<>();
contentMap.put(WeChatAlertConstants.WE_CHAT_CONTENT_KEY,data);
String msgJson="";
Map<String, String> contentMap = new HashMap<>();
contentMap.put(WeChatAlertConstants.WE_CHAT_CONTENT_KEY, data);
String msgJson = "";
if (sendType.equals(WeChatType.APP.getDescp())) {
enterpriseWeChatPushUrlReplace = WeChatAlertConstants.WE_CHAT_PUSH_URL.replace(TOKEN_REGEX, weChatToken);
WechatAppMessage wechatAppMessage=new WechatAppMessage(weChatUsers, WE_CHAT_MESSAGE_TYPE_TEXT, Integer.valueOf(weChatAgentId),contentMap, WE_CHAT_MESSAGE_SAFE_PUBLICITY, WE_CHAT_ENABLE_ID_TRANS, WE_CHAT_DUPLICATE_CHECK_INTERVAL_ZERO);
msgJson=JSONUtils.toJsonString(wechatAppMessage);
WechatAppMessage wechatAppMessage = new WechatAppMessage(weChatUsers, showType, Integer.valueOf(weChatAgentIdChatId), contentMap, WE_CHAT_MESSAGE_SAFE_PUBLICITY, WE_CHAT_ENABLE_ID_TRANS, WE_CHAT_DUPLICATE_CHECK_INTERVAL_ZERO);
msgJson = JSONUtils.toJsonString(wechatAppMessage);
} else if (sendType.equals(WeChatType.APPCHAT.getDescp())) {
enterpriseWeChatPushUrlReplace = WeChatAlertConstants.WE_CHAT_APP_CHAT_PUSH_URL.replace(TOKEN_REGEX, weChatToken);
WechatAppChatMessage wechatAppChatMessage=new WechatAppChatMessage(weChatChatId, WE_CHAT_MESSAGE_TYPE_TEXT, contentMap, WE_CHAT_MESSAGE_SAFE_PUBLICITY);
msgJson=JSONUtils.toJsonString(wechatAppChatMessage);
WechatAppChatMessage wechatAppChatMessage = new WechatAppChatMessage(weChatAgentIdChatId, showType, contentMap, WE_CHAT_MESSAGE_SAFE_PUBLICITY);
msgJson = JSONUtils.toJsonString(wechatAppChatMessage);
}
try {
@@ -272,14 +235,7 @@ public final class WeChatSender {
* @return the markdown alert table/text
*/
private String markdownByAlert(String title, String content) {
String result = "";
if (showType.equals(ShowType.TABLE.getDescp())) {
result = markdownTable(title, content);
} else if (showType.equals(ShowType.TEXT.getDescp())) {
result = markdownText(title, content);
}
return result;
return markdownText(title, content);
}
private String getToken() {


@@ -17,6 +17,8 @@
package org.apache.dolphinscheduler.plugin.alert.wechat;
import org.apache.dolphinscheduler.alert.api.ShowType;
import java.util.Map;
public class WechatAppChatMessage {
@@ -24,6 +26,7 @@ public class WechatAppChatMessage {
private String chatid;
private String msgtype;
private Map<String,String> text;
private Map<String,String> markdown;
private Integer safe;
public String getChatid() {
@@ -58,13 +61,25 @@ public class WechatAppChatMessage {
this.safe = safe;
}
public Map<String, String> getMarkdown() {
return markdown;
}
public void setMarkdown(Map<String, String> markdown) {
this.markdown = markdown;
}
public WechatAppChatMessage() {
}
public WechatAppChatMessage(String chatid, String msgtype, Map<String, String> text, Integer safe) {
public WechatAppChatMessage(String chatid, String msgtype, Map<String, String> contentMap, Integer safe) {
this.chatid = chatid;
this.msgtype = msgtype;
this.text = text;
if (msgtype.equals(ShowType.MARKDOWN.getDescp())) {
this.markdown = contentMap;
} else {
this.text = contentMap;
}
this.safe = safe;
}
}


@@ -17,6 +17,8 @@
package org.apache.dolphinscheduler.plugin.alert.wechat;
import org.apache.dolphinscheduler.alert.api.ShowType;
import java.util.Map;
public class WechatAppMessage {
@ -24,7 +26,8 @@ public class WechatAppMessage {
private String touser;
private String msgtype;
private Integer agentid;
private Map<String,String> text;
private Map<String, String> text;
private Map<String, String> markdown;
private Integer safe;
private Integer enable_id_trans;
private Integer enable_duplicate_check;
@ -85,16 +88,28 @@ public class WechatAppMessage {
this.enable_duplicate_check = enable_duplicate_check;
}
public Map<String, String> getMarkdown() {
return markdown;
}
public void setMarkdown(Map<String, String> markdown) {
this.markdown = markdown;
}
public WechatAppMessage() {
}
public WechatAppMessage(String touser, String msgtype, Integer agentid, Map<String, String> text, Integer safe, Integer enable_id_trans, Integer enable_duplicate_check) {
public WechatAppMessage(String touser, String msgtype, Integer agentid, Map<String, String> contentMap, Integer safe, Integer enableIdTrans, Integer enableDuplicateCheck) {
this.touser = touser;
this.msgtype = msgtype;
this.agentid = agentid;
this.text = text;
if (msgtype.equals(ShowType.MARKDOWN.getDescp())) {
this.markdown = contentMap;
} else {
this.text = contentMap;
}
this.safe = safe;
this.enable_id_trans = enable_id_trans;
this.enable_duplicate_check = enable_duplicate_check;
this.enable_id_trans = enableIdTrans;
this.enable_duplicate_check = enableDuplicateCheck;
}
}


@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-alert</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-alert-plugins</artifactId>


@ -16,13 +16,12 @@
~ limitations under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler-alert</artifactId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<artifactId>dolphinscheduler-alert-server</artifactId>
<name>${project.artifactId}</name>


@ -41,8 +41,6 @@ import java.util.Optional;
import java.util.ServiceLoader;
import java.util.Set;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.context.event.ApplicationReadyEvent;
@ -55,23 +53,23 @@ public final class AlertPluginManager {
private final PluginDao pluginDao;
private final Map<Integer, AlertChannel> channelKeyedById = new HashMap<>();
private final PluginParams warningTypeParams = getWarningTypeParams();
public AlertPluginManager(PluginDao pluginDao) {
this.pluginDao = pluginDao;
}
private final Map<Integer, AlertChannel> channelKeyedById = new HashMap<>();
private final PluginParams warningTypeParams = getWarningTypeParams();
public PluginParams getWarningTypeParams() {
return
RadioParam.newBuilder(AlertConstants.NAME_WARNING_TYPE, AlertConstants.WARNING_TYPE)
.addParamsOptions(new ParamsOptions(WarningType.SUCCESS.getDescp(), WarningType.SUCCESS.getDescp(), false))
.addParamsOptions(new ParamsOptions(WarningType.FAILURE.getDescp(), WarningType.FAILURE.getDescp(), false))
.addParamsOptions(new ParamsOptions(WarningType.ALL.getDescp(), WarningType.ALL.getDescp(), false))
.setValue(WarningType.ALL.getDescp())
.addValidate(Validate.newBuilder().setRequired(true).build())
.build();
RadioParam.newBuilder(AlertConstants.NAME_WARNING_TYPE, AlertConstants.WARNING_TYPE)
.addParamsOptions(new ParamsOptions(WarningType.SUCCESS.getDescp(), WarningType.SUCCESS.getDescp(), false))
.addParamsOptions(new ParamsOptions(WarningType.FAILURE.getDescp(), WarningType.FAILURE.getDescp(), false))
.addParamsOptions(new ParamsOptions(WarningType.ALL.getDescp(), WarningType.ALL.getDescp(), false))
.setValue(WarningType.ALL.getDescp())
.addValidate(Validate.newBuilder().setRequired(true).build())
.build();
}
@EventListener


@ -36,10 +36,10 @@ import io.netty.channel.Channel;
public final class AlertRequestProcessor implements NettyRequestProcessor {
private static final Logger logger = LoggerFactory.getLogger(AlertRequestProcessor.class);
private final AlertSender alertSender;
private final AlertSenderService alertSenderService;
public AlertRequestProcessor(AlertSender alertSender) {
this.alertSender = alertSender;
public AlertRequestProcessor(AlertSenderService alertSenderService) {
this.alertSenderService = alertSenderService;
}
@Override
@ -51,7 +51,7 @@ public final class AlertRequestProcessor implements NettyRequestProcessor {
logger.info("Received command : {}", alertSendRequestCommand);
AlertSendResponseCommand alertSendResponseCommand = alertSender.syncHandler(
AlertSendResponseCommand alertSendResponseCommand = alertSenderService.syncHandler(
alertSendRequestCommand.getGroupId(),
alertSendRequestCommand.getTitle(),
alertSendRequestCommand.getContent(),


@ -17,43 +17,65 @@
package org.apache.dolphinscheduler.alert;
import org.apache.commons.collections.CollectionUtils;
import org.apache.dolphinscheduler.alert.api.AlertChannel;
import org.apache.dolphinscheduler.alert.api.AlertConstants;
import org.apache.dolphinscheduler.alert.api.AlertData;
import org.apache.dolphinscheduler.alert.api.AlertInfo;
import org.apache.dolphinscheduler.alert.api.AlertResult;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.AlertStatus;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.entity.Alert;
import org.apache.dolphinscheduler.dao.entity.AlertPluginInstance;
import org.apache.dolphinscheduler.remote.command.alert.AlertSendResponseCommand;
import org.apache.dolphinscheduler.remote.command.alert.AlertSendResponseResult;
import org.apache.commons.collections.CollectionUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
@Component
public final class AlertSender {
private static final Logger logger = LoggerFactory.getLogger(AlertSender.class);
@Service
public final class AlertSenderService extends Thread {
private static final Logger logger = LoggerFactory.getLogger(AlertSenderService.class);
private final AlertDao alertDao;
private final AlertPluginManager alertPluginManager;
public AlertSender(AlertDao alertDao, AlertPluginManager alertPluginManager) {
public AlertSenderService(AlertDao alertDao, AlertPluginManager alertPluginManager) {
this.alertDao = alertDao;
this.alertPluginManager = alertPluginManager;
}
@Override
public synchronized void start() {
super.setName("AlertSenderService");
super.start();
}
@Override
public void run() {
logger.info("alert sender started");
while (Stopper.isRunning()) {
try {
List<Alert> alerts = alertDao.listPendingAlerts();
this.send(alerts);
ThreadUtils.sleep(Constants.SLEEP_TIME_MILLIS * 5L);
} catch (Exception e) {
logger.error("alert sender thread error", e);
}
}
}
public void send(List<Alert> alerts) {
for (Alert alert : alerts) {
//get alert group from alert
@ -66,11 +88,11 @@ public final class AlertSender {
}
AlertData alertData = new AlertData();
alertData.setId(alert.getId())
.setContent(alert.getContent())
.setLog(alert.getLog())
.setTitle(alert.getTitle())
.setTitle(alert.getTitle())
.setWarnType(alert.getWarningType().getCode());
.setContent(alert.getContent())
.setLog(alert.getLog())
.setTitle(alert.getTitle())
.setTitle(alert.getTitle())
.setWarnType(alert.getWarningType().getCode());
for (AlertPluginInstance instance : alertInstanceList) {
@ -81,23 +103,22 @@ public final class AlertSender {
}
}
}
}
/**
* sync send alert handler
*
* @param alertGroupId alertGroupId
* @param title title
* @param content content
* @param title title
* @param content content
* @return AlertSendResponseCommand
*/
public AlertSendResponseCommand syncHandler(int alertGroupId, String title, String content , int warnType) {
public AlertSendResponseCommand syncHandler(int alertGroupId, String title, String content, int warnType) {
List<AlertPluginInstance> alertInstanceList = alertDao.listInstanceByAlertGroupId(alertGroupId);
AlertData alertData = new AlertData();
alertData.setContent(content)
.setTitle(title)
.setWarnType(warnType);
.setTitle(title)
.setWarnType(warnType);
boolean sendResponseStatus = true;
List<AlertSendResponseResult> sendResponseResults = new ArrayList<>();
@ -116,7 +137,7 @@ public final class AlertSender {
AlertResult alertResult = this.alertResultHandler(instance, alertData);
if (alertResult != null) {
AlertSendResponseResult alertSendResponseResult = new AlertSendResponseResult(
Boolean.parseBoolean(String.valueOf(alertResult.getStatus())), alertResult.getMessage());
Boolean.parseBoolean(String.valueOf(alertResult.getStatus())), alertResult.getMessage());
sendResponseStatus = sendResponseStatus && alertSendResponseResult.getStatus();
sendResponseResults.add(alertSendResponseResult);
}
@ -128,7 +149,7 @@ public final class AlertSender {
/**
* alert result handler
*
* @param instance instance
* @param instance instance
* @param alertData alertData
* @return AlertResult
*/
@ -147,7 +168,7 @@ public final class AlertSender {
Map<String, String> paramsMap = JSONUtils.toMap(instance.getPluginInstanceParams());
String instanceWarnType = WarningType.ALL.getDescp();
if(paramsMap != null){
if (paramsMap != null) {
instanceWarnType = paramsMap.getOrDefault(AlertConstants.NAME_WARNING_TYPE, WarningType.ALL.getDescp());
}
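The `run()` loop added above turns the sender into a long-lived thread that repeatedly drains pending alerts and sleeps, exiting when the `Stopper` flag flips. A hedged sketch of that polling pattern (an `AtomicBoolean` and an in-memory queue stand in for `Stopper` and `alertDao.listPendingAlerts()`; the 50 ms interval is illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative stand-in for the AlertSenderService polling thread.
class PollingSenderSketch extends Thread {
    private final AtomicBoolean running = new AtomicBoolean(true);
    private final List<String> pending = new CopyOnWriteArrayList<>();
    private final List<String> sent = new CopyOnWriteArrayList<>();

    void submit(String alert) { pending.add(alert); }
    List<String> sentAlerts() { return sent; }
    void shutdown() { running.set(false); }

    @Override
    public void run() {
        while (running.get()) {
            // Drain whatever is pending, mirroring listPendingAlerts() + send()
            while (!pending.isEmpty()) {
                sent.add(pending.remove(0));
            }
            try {
                Thread.sleep(50); // the real service sleeps SLEEP_TIME_MILLIS * 5
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

Catching `Exception` inside the real loop (as the diff does) keeps one failed batch from killing the whole sender thread.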


@ -17,73 +17,95 @@
package org.apache.dolphinscheduler.alert;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.dao.PluginDao;
import org.apache.dolphinscheduler.dao.entity.Alert;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import java.io.Closeable;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.event.EventListener;
import javax.annotation.PreDestroy;
import java.io.Closeable;
@SpringBootApplication
@ComponentScan("org.apache.dolphinscheduler")
public class AlertServer implements Closeable {
private static final Logger logger = LoggerFactory.getLogger(AlertServer.class);
private final PluginDao pluginDao;
private final AlertDao alertDao;
private final AlertPluginManager alertPluginManager;
private final AlertSender alertSender;
private final AlertSenderService alertSenderService;
private final AlertRequestProcessor alertRequestProcessor;
private final AlertConfig alertConfig;
private NettyRemotingServer nettyRemotingServer;
private NettyRemotingServer server;
@Autowired
private AlertConfig config;
public AlertServer(PluginDao pluginDao, AlertDao alertDao, AlertPluginManager alertPluginManager, AlertSender alertSender, AlertRequestProcessor alertRequestProcessor) {
public AlertServer(PluginDao pluginDao, AlertSenderService alertSenderService, AlertRequestProcessor alertRequestProcessor, AlertConfig alertConfig) {
this.pluginDao = pluginDao;
this.alertDao = alertDao;
this.alertPluginManager = alertPluginManager;
this.alertSender = alertSender;
this.alertSenderService = alertSenderService;
this.alertRequestProcessor = alertRequestProcessor;
this.alertConfig = alertConfig;
}
/**
* alert server startup, not use web service
*
* @param args arguments
*/
public static void main(String[] args) {
SpringApplication.run(AlertServer.class, args);
Thread.currentThread().setName(Constants.THREAD_NAME_ALERT_SERVER);
new SpringApplicationBuilder(AlertServer.class).web(WebApplicationType.NONE).run(args);
}
@EventListener
public void start(ApplicationReadyEvent readyEvent) {
logger.info("Starting Alert server");
public void run(ApplicationReadyEvent readyEvent) {
logger.info("alert server starting...");
checkTable();
startServer();
Executors.newScheduledThreadPool(1)
.scheduleAtFixedRate(new Sender(), 5, 5, TimeUnit.SECONDS);
alertSenderService.start();
}
@Override
@PreDestroy
public void close() {
server.close();
destroy("alert server destroy");
}
/**
* gracefully stop
*
* @param cause stop cause
*/
public void destroy(String cause) {
try {
// execute only once
if (Stopper.isStopped()) {
return;
}
logger.info("alert server is stopping ..., cause : {}", cause);
// set stop signal is true
Stopper.stop();
// thread sleep 3 seconds for thread quietly stop
ThreadUtils.sleep(3000L);
// close
this.nettyRemotingServer.close();
} catch (Exception e) {
logger.error("alert server stop exception ", e);
}
}
private void checkTable() {
@ -95,26 +117,11 @@ public class AlertServer implements Closeable {
private void startServer() {
NettyServerConfig serverConfig = new NettyServerConfig();
serverConfig.setListenPort(config.getPort());
serverConfig.setListenPort(alertConfig.getPort());
server = new NettyRemotingServer(serverConfig);
server.registerProcessor(CommandType.ALERT_SEND_REQUEST, alertRequestProcessor);
server.start();
nettyRemotingServer = new NettyRemotingServer(serverConfig);
nettyRemotingServer.registerProcessor(CommandType.ALERT_SEND_REQUEST, alertRequestProcessor);
nettyRemotingServer.start();
}
final class Sender implements Runnable {
@Override
public void run() {
if (!Stopper.isRunning()) {
return;
}
try {
final List<Alert> alerts = alertDao.listPendingAlerts();
alertSender.send(alerts);
} catch (Exception e) {
logger.error("Failed to send alert", e);
}
}
}
}
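The new `destroy(String cause)` method guards against repeated shutdown with the `Stopper.isStopped()` check before stopping and closing the netty server. That execute-only-once guard can be sketched like so (an `AtomicBoolean` stands in for the static `Stopper` state, and the counter only exists to make the idempotence observable):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative stand-in for AlertServer.destroy()'s run-once guard.
class GracefulStopSketch {
    // static, mirroring Stopper's process-wide stop flag
    private static final AtomicBoolean STOPPED = new AtomicBoolean(false);
    private int closeCount = 0;

    void destroy(String cause) {
        // execute only once: first caller wins, later callers return early
        if (!STOPPED.compareAndSet(false, true)) {
            return;
        }
        // real code: Stopper.stop(); ThreadUtils.sleep(3000L);
        //            nettyRemotingServer.close();
        closeCount++;
    }

    int getCloseCount() { return closeCount; }
}
```

Both `close()` (via `@PreDestroy`) and any explicit caller can invoke `destroy` safely; the netty server is only closed once.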


@ -18,7 +18,9 @@
package org.apache.dolphinscheduler.alert;
import junit.framework.TestCase;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.PluginDao;
import org.apache.dolphinscheduler.dao.entity.Alert;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import org.junit.Assert;
@ -27,9 +29,13 @@ import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.mockito.junit.MockitoJUnitRunner;
import org.powermock.reflect.Whitebox;
import java.util.ArrayList;
import java.util.List;
@RunWith(MockitoJUnitRunner.class)
public class AlertServerTest extends TestCase {
@ -42,19 +48,26 @@ public class AlertServerTest extends TestCase {
@Mock
private AlertConfig alertConfig;
@Mock
private AlertSenderService alertSenderService;
@Test
public void testStart() {
Mockito.when(pluginDao.checkPluginDefineTableExist()).thenReturn(true);
Mockito.when(alertConfig.getPort()).thenReturn(50053);
alertServer.start(null);
Mockito.doNothing().when(alertSenderService).start();
alertServer.run(null);
NettyRemotingServer nettyRemotingServer = Whitebox.getInternalState(alertServer, "server");
NettyRemotingServer nettyRemotingServer = Whitebox.getInternalState(alertServer, "nettyRemotingServer");
NettyServerConfig nettyServerConfig = Whitebox.getInternalState(nettyRemotingServer, "serverConfig");
Assert.assertEquals(50053, nettyServerConfig.getListenPort());
}
}


@ -20,30 +20,35 @@ package org.apache.dolphinscheduler.alert.processor;
import static org.mockito.Mockito.mock;
import org.apache.dolphinscheduler.alert.AlertRequestProcessor;
import org.apache.dolphinscheduler.alert.AlertSender;
import org.apache.dolphinscheduler.alert.AlertSenderService;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.remote.command.Command;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.command.alert.AlertSendRequestCommand;
import org.apache.dolphinscheduler.remote.command.alert.AlertSendResponseCommand;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import io.netty.channel.Channel;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
@RunWith(MockitoJUnitRunner.class)
public class AlertRequestProcessorTest {
@InjectMocks
private AlertRequestProcessor alertRequestProcessor;
@Before
public void before() {
final AlertDao alertDao = mock(AlertDao.class);
alertRequestProcessor = new AlertRequestProcessor(new AlertSender(alertDao, null));
}
@Mock
private AlertSenderService alertSenderService;
@Test
public void testProcess() {
Mockito.when(alertSenderService.syncHandler(1, "title", "content", WarningType.FAILURE.getCode())).thenReturn(new AlertSendResponseCommand());
Channel channel = mock(Channel.class);
AlertSendRequestCommand alertSendRequestCommand = new AlertSendRequestCommand(1, "title", "content", WarningType.FAILURE.getCode());
Command reqCommand = alertSendRequestCommand.convert2Command();


@ -21,7 +21,7 @@ import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.apache.dolphinscheduler.alert.AlertPluginManager;
import org.apache.dolphinscheduler.alert.AlertSender;
import org.apache.dolphinscheduler.alert.AlertSenderService;
import org.apache.dolphinscheduler.alert.api.AlertChannel;
import org.apache.dolphinscheduler.alert.api.AlertResult;
import org.apache.dolphinscheduler.common.enums.WarningType;
@ -39,24 +39,29 @@ import java.util.Optional;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.MockitoAnnotations;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class AlertSenderTest {
private static final Logger logger = LoggerFactory.getLogger(AlertSenderTest.class);
public class AlertSenderServiceTest {
private static final Logger logger = LoggerFactory.getLogger(AlertSenderServiceTest.class);
@Mock
private AlertDao alertDao;
@Mock
private PluginDao pluginDao;
@Mock
private AlertPluginManager alertPluginManager;
private AlertSender alertSender;
@InjectMocks
private AlertSenderService alertSenderService;
@Before
public void before() {
alertDao = mock(AlertDao.class);
pluginDao = mock(PluginDao.class);
alertPluginManager = mock(AlertPluginManager.class);
MockitoAnnotations.initMocks(this);
}
@Test
@ -65,12 +70,11 @@ public class AlertSenderTest {
int alertGroupId = 1;
String title = "alert mail test title";
String content = "alert mail test content";
alertSender = new AlertSender(alertDao, alertPluginManager);
//1.alert instance does not exist
when(alertDao.listInstanceByAlertGroupId(alertGroupId)).thenReturn(null);
AlertSendResponseCommand alertSendResponseCommand = alertSender.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
AlertSendResponseCommand alertSendResponseCommand = alertSenderService.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
Assert.assertFalse(alertSendResponseCommand.getResStatus());
alertSendResponseCommand.getResResults().forEach(result ->
logger.info("alert send response result, status:{}, message:{}", result.getStatus(), result.getMessage()));
@ -89,7 +93,7 @@ public class AlertSenderTest {
PluginDefine pluginDefine = new PluginDefine(pluginName, "1", null);
when(pluginDao.getPluginDefineById(pluginDefineId)).thenReturn(pluginDefine);
alertSendResponseCommand = alertSender.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
alertSendResponseCommand = alertSenderService.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
Assert.assertFalse(alertSendResponseCommand.getResStatus());
alertSendResponseCommand.getResResults().forEach(result ->
logger.info("alert send response result, status:{}, message:{}", result.getStatus(), result.getMessage()));
@ -99,7 +103,7 @@ public class AlertSenderTest {
when(alertChannelMock.process(Mockito.any())).thenReturn(null);
when(alertPluginManager.getAlertChannel(1)).thenReturn(Optional.of(alertChannelMock));
alertSendResponseCommand = alertSender.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
alertSendResponseCommand = alertSenderService.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
Assert.assertFalse(alertSendResponseCommand.getResStatus());
alertSendResponseCommand.getResResults().forEach(result ->
logger.info("alert send response result, status:{}, message:{}", result.getStatus(), result.getMessage()));
@ -111,7 +115,7 @@ public class AlertSenderTest {
when(alertChannelMock.process(Mockito.any())).thenReturn(alertResult);
when(alertPluginManager.getAlertChannel(1)).thenReturn(Optional.of(alertChannelMock));
alertSendResponseCommand = alertSender.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
alertSendResponseCommand = alertSenderService.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
Assert.assertFalse(alertSendResponseCommand.getResStatus());
alertSendResponseCommand.getResResults().forEach(result ->
logger.info("alert send response result, status:{}, message:{}", result.getStatus(), result.getMessage()));
@ -123,7 +127,7 @@ public class AlertSenderTest {
when(alertChannelMock.process(Mockito.any())).thenReturn(alertResult);
when(alertPluginManager.getAlertChannel(1)).thenReturn(Optional.of(alertChannelMock));
alertSendResponseCommand = alertSender.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
alertSendResponseCommand = alertSenderService.syncHandler(alertGroupId, title, content, WarningType.ALL.getCode());
Assert.assertTrue(alertSendResponseCommand.getResStatus());
alertSendResponseCommand.getResResults().forEach(result ->
logger.info("alert send response result, status:{}, message:{}", result.getStatus(), result.getMessage()));
@ -143,7 +147,7 @@ public class AlertSenderTest {
alert.setWarningType(WarningType.FAILURE);
alertList.add(alert);
alertSender = new AlertSender(alertDao, alertPluginManager);
// alertSenderService = new AlertSenderService();
int pluginDefineId = 1;
String pluginInstanceParams = "alert-instance-mail-params";
@ -165,6 +169,7 @@ public class AlertSenderTest {
when(alertChannelMock.process(Mockito.any())).thenReturn(alertResult);
when(alertPluginManager.getAlertChannel(1)).thenReturn(Optional.of(alertChannelMock));
Assert.assertTrue(Boolean.parseBoolean(alertResult.getStatus()));
alertSender.send(alertList);
when(alertDao.listInstanceByAlertGroupId(1)).thenReturn(new ArrayList<>());
alertSenderService.send(alertList);
}
}


@ -18,13 +18,11 @@
~ under the License.
-->
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://maven.apache.org/POM/4.0.0"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<packaging>pom</packaging>


@ -16,13 +16,12 @@
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler</artifactId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<artifactId>dolphinscheduler-api</artifactId>
<name>${project.artifactId}</name>
@ -275,6 +274,12 @@
</exclusions>
<scope>test</scope>
</dependency>
<!-- Python -->
<dependency>
<groupId>net.sf.py4j</groupId>
<artifactId>py4j</artifactId>
</dependency>
</dependencies>
<build>


@ -29,6 +29,6 @@ WORKDIR $DOLPHINSCHEDULER_HOME
ADD ./target/api-server $DOLPHINSCHEDULER_HOME
EXPOSE 12345
EXPOSE 12345 25333
CMD [ "/bin/bash", "./bin/start.sh" ]


@ -15,17 +15,17 @@
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.config;
package org.apache.dolphinscheduler.api.configuration;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
@Component
@EnableConfigurationProperties
@ConfigurationProperties("python-gateway")
public class PythonGatewayConfig {
@ConfigurationProperties(value = "python-gateway", ignoreUnknownFields = false)
public class PythonGatewayConfiguration {
private boolean enabled;
private String gatewayServerAddress;
private int gatewayServerPort;
private String pythonAddress;
@ -33,6 +33,14 @@ public class PythonGatewayConfig {
private int connectTimeout;
private int readTimeout;
public boolean getEnabled() {
return enabled;
}
public void setEnabled(boolean enabled) {
this.enabled = enabled;
}
public String getGatewayServerAddress() {
return gatewayServerAddress;
}
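With the rename to `PythonGatewayConfiguration` and the switch to `@ConfigurationProperties(value = "python-gateway", ignoreUnknownFields = false)`, Spring binds keys under the `python-gateway` prefix to these fields and now rejects unknown keys instead of silently ignoring them. An illustrative `application.yaml` fragment (key names follow Spring's relaxed binding of the fields visible in this diff; all values are examples, not defaults):

```yaml
python-gateway:
  enabled: true
  gateway-server-address: 0.0.0.0
  gateway-server-port: 25333
  python-address: 127.0.0.1
  connect-timeout: 5000
  read-timeout: 5000
```

A misspelled key such as `gateway-server-adress` would now fail startup with a binding error rather than leave the field at its default.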


@ -95,22 +95,22 @@ public class ExecutorController extends BaseController {
*/
@ApiOperation(value = "startProcessInstance", notes = "RUN_PROCESS_INSTANCE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "processDefinitionCode", value = "PROCESS_DEFINITION_CODE", required = true, dataType = "Long", example = "100"),
@ApiImplicitParam(name = "scheduleTime", value = "SCHEDULE_TIME", dataType = "String"),
@ApiImplicitParam(name = "failureStrategy", value = "FAILURE_STRATEGY", required = true, dataType = "FailureStrategy"),
@ApiImplicitParam(name = "startNodeList", value = "START_NODE_LIST", dataType = "String"),
@ApiImplicitParam(name = "taskDependType", value = "TASK_DEPEND_TYPE", dataType = "TaskDependType"),
@ApiImplicitParam(name = "execType", value = "COMMAND_TYPE", dataType = "CommandType"),
@ApiImplicitParam(name = "warningType", value = "WARNING_TYPE", required = true, dataType = "WarningType"),
@ApiImplicitParam(name = "warningGroupId", value = "WARNING_GROUP_ID", dataType = "Int", example = "100"),
@ApiImplicitParam(name = "runMode", value = "RUN_MODE", dataType = "RunMode"),
@ApiImplicitParam(name = "processInstancePriority", value = "PROCESS_INSTANCE_PRIORITY", required = true, dataType = "Priority"),
@ApiImplicitParam(name = "workerGroup", value = "WORKER_GROUP", dataType = "String", example = "default"),
@ApiImplicitParam(name = "environmentCode", value = "ENVIRONMENT_CODE", dataType = "Long", example = "-1"),
@ApiImplicitParam(name = "timeout", value = "TIMEOUT", dataType = "Int", example = "100"),
@ApiImplicitParam(name = "expectedParallelismNumber", value = "EXPECTED_PARALLELISM_NUMBER", dataType = "Int" , example = "8"),
@ApiImplicitParam(name = "dryRun", value = "DRY_RUN", dataType = "Int", example = "0"),
@ApiImplicitParam(name = "complementDependentMode", value = "COMPLEMENT_DEPENDENT_MODE", dataType = "complementDependentMode")
@ApiImplicitParam(name = "processDefinitionCode", value = "PROCESS_DEFINITION_CODE", required = true, dataType = "Long", example = "100"),
@ApiImplicitParam(name = "scheduleTime", value = "SCHEDULE_TIME", required = true, dataType = "String", example = "2022-04-06 00:00:00,2022-04-06 00:00:00"),
@ApiImplicitParam(name = "failureStrategy", value = "FAILURE_STRATEGY", required = true, dataType = "FailureStrategy"),
@ApiImplicitParam(name = "startNodeList", value = "START_NODE_LIST", dataType = "String"),
@ApiImplicitParam(name = "taskDependType", value = "TASK_DEPEND_TYPE", dataType = "TaskDependType"),
@ApiImplicitParam(name = "execType", value = "COMMAND_TYPE", dataType = "CommandType"),
@ApiImplicitParam(name = "warningType", value = "WARNING_TYPE", required = true, dataType = "WarningType"),
@ApiImplicitParam(name = "warningGroupId", value = "WARNING_GROUP_ID", dataType = "Int", example = "100"),
@ApiImplicitParam(name = "runMode", value = "RUN_MODE", dataType = "RunMode"),
@ApiImplicitParam(name = "processInstancePriority", value = "PROCESS_INSTANCE_PRIORITY", required = true, dataType = "Priority"),
@ApiImplicitParam(name = "workerGroup", value = "WORKER_GROUP", dataType = "String", example = "default"),
@ApiImplicitParam(name = "environmentCode", value = "ENVIRONMENT_CODE", dataType = "Long", example = "-1"),
@ApiImplicitParam(name = "timeout", value = "TIMEOUT", dataType = "Int", example = "100"),
@ApiImplicitParam(name = "expectedParallelismNumber", value = "EXPECTED_PARALLELISM_NUMBER", dataType = "Int" , example = "8"),
@ApiImplicitParam(name = "dryRun", value = "DRY_RUN", dataType = "Int", example = "0"),
@ApiImplicitParam(name = "complementDependentMode", value = "COMPLEMENT_DEPENDENT_MODE", dataType = "complementDependentMode")
})
@PostMapping(value = "start-process-instance")
@ResponseStatus(HttpStatus.OK)
@ -119,7 +119,7 @@ public class ExecutorController extends BaseController {
public Result startProcessInstance(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@ApiParam(name = "projectCode", value = "PROJECT_CODE", required = true) @PathVariable long projectCode,
@RequestParam(value = "processDefinitionCode") long processDefinitionCode,
@RequestParam(value = "scheduleTime", required = false) String scheduleTime,
@RequestParam(value = "scheduleTime") String scheduleTime,
@RequestParam(value = "failureStrategy") FailureStrategy failureStrategy,
@RequestParam(value = "startNodeList", required = false) String startNodeList,
@RequestParam(value = "taskDependType", required = false) TaskDependType taskDependType,
@ -159,7 +159,7 @@ public class ExecutorController extends BaseController {
* batch execute process instance
* If any processDefinitionCode cannot be found, its failure information is returned and its status is set to
* failed. The definitions that are found start normally and are not stopped.
*
*
* @param loginUser login user
* @param projectCode project code
* @param processDefinitionCodes process definition codes
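The batch-start contract described in the javadoc above can be sketched as follows. This is an illustrative, self-contained example of the semantics (a missing definition fails only its own entry while the rest of the batch proceeds), not the controller's actual implementation; `BatchStartSketch`, `knownDefinitions`, and the result strings are hypothetical names.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class BatchStartSketch {
    /** Codes we pretend exist; a stand-in for the real process-definition lookup. */
    private final Set<Long> knownDefinitions;

    public BatchStartSketch(Long... known) {
        this.knownDefinitions = new TreeSet<>(Arrays.asList(known));
    }

    /** Returns a per-code result; a failure does not abort the rest of the batch. */
    public Map<Long, String> batchStart(String processDefinitionCodes) {
        Map<Long, String> results = new LinkedHashMap<>();
        for (String raw : processDefinitionCodes.split(",")) {
            long code = Long.parseLong(raw.trim());
            if (knownDefinitions.contains(code)) {
                // in the real service this would enqueue a start command
                results.put(code, "SUCCESS");
            } else {
                // failure information is recorded, then the loop keeps going
                results.put(code, "PROCESS_DEFINE_NOT_EXIST");
            }
        }
        return results;
    }
}
```

For example, with known definitions 1 and 3, the input "1,2,3" fails only entry 2; entries 1 and 3 still start.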
@ -180,7 +180,7 @@ public class ExecutorController extends BaseController {
@ApiOperation(value = "batchStartProcessInstance", notes = "BATCH_RUN_PROCESS_INSTANCE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "processDefinitionCodes", value = "PROCESS_DEFINITION_CODES", required = true, dataType = "String", example = "1,2,3"),
@ApiImplicitParam(name = "scheduleTime", value = "SCHEDULE_TIME", required = true, dataType = "String"),
@ApiImplicitParam(name = "scheduleTime", value = "SCHEDULE_TIME", required = true, dataType = "String", example = "2022-04-06 00:00:00,2022-04-06 00:00:00"),
@ApiImplicitParam(name = "failureStrategy", value = "FAILURE_STRATEGY", required = true, dataType = "FailureStrategy"),
@ApiImplicitParam(name = "startNodeList", value = "START_NODE_LIST", dataType = "String"),
@ApiImplicitParam(name = "taskDependType", value = "TASK_DEPEND_TYPE", dataType = "TaskDependType"),
@ -201,24 +201,24 @@ public class ExecutorController extends BaseController {
@ApiException(START_PROCESS_INSTANCE_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result batchStartProcessInstance(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@ApiParam(name = "projectCode", value = "PROJECT_CODE", required = true) @PathVariable long projectCode,
@RequestParam(value = "processDefinitionCodes") String processDefinitionCodes,
@RequestParam(value = "scheduleTime", required = false) String scheduleTime,
@RequestParam(value = "failureStrategy") FailureStrategy failureStrategy,
@RequestParam(value = "startNodeList", required = false) String startNodeList,
@RequestParam(value = "taskDependType", required = false) TaskDependType taskDependType,
@RequestParam(value = "execType", required = false) CommandType execType,
@RequestParam(value = "warningType") WarningType warningType,
@RequestParam(value = "warningGroupId", required = false) int warningGroupId,
@RequestParam(value = "runMode", required = false) RunMode runMode,
@RequestParam(value = "processInstancePriority", required = false) Priority processInstancePriority,
@RequestParam(value = "workerGroup", required = false, defaultValue = "default") String workerGroup,
@RequestParam(value = "environmentCode", required = false, defaultValue = "-1") Long environmentCode,
@RequestParam(value = "timeout", required = false) Integer timeout,
@RequestParam(value = "startParams", required = false) String startParams,
@RequestParam(value = "expectedParallelismNumber", required = false) Integer expectedParallelismNumber,
@RequestParam(value = "dryRun", defaultValue = "0", required = false) int dryRun,
@RequestParam(value = "complementDependentMode", required = false) ComplementDependentMode complementDependentMode) {
@ApiParam(name = "projectCode", value = "PROJECT_CODE", required = true) @PathVariable long projectCode,
@RequestParam(value = "processDefinitionCodes") String processDefinitionCodes,
@RequestParam(value = "scheduleTime") String scheduleTime,
@RequestParam(value = "failureStrategy") FailureStrategy failureStrategy,
@RequestParam(value = "startNodeList", required = false) String startNodeList,
@RequestParam(value = "taskDependType", required = false) TaskDependType taskDependType,
@RequestParam(value = "execType", required = false) CommandType execType,
@RequestParam(value = "warningType") WarningType warningType,
@RequestParam(value = "warningGroupId", required = false) int warningGroupId,
@RequestParam(value = "runMode", required = false) RunMode runMode,
@RequestParam(value = "processInstancePriority", required = false) Priority processInstancePriority,
@RequestParam(value = "workerGroup", required = false, defaultValue = "default") String workerGroup,
@RequestParam(value = "environmentCode", required = false, defaultValue = "-1") Long environmentCode,
@RequestParam(value = "timeout", required = false) Integer timeout,
@RequestParam(value = "startParams", required = false) String startParams,
@RequestParam(value = "expectedParallelismNumber", required = false) Integer expectedParallelismNumber,
@RequestParam(value = "dryRun", defaultValue = "0", required = false) int dryRun,
@RequestParam(value = "complementDependentMode", required = false) ComplementDependentMode complementDependentMode) {
if (timeout == null) {
timeout = Constants.MAX_TASK_TIMEOUT;
@ -269,8 +269,8 @@ public class ExecutorController extends BaseController {
*/
@ApiOperation(value = "execute", notes = "EXECUTE_ACTION_TO_PROCESS_INSTANCE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "processInstanceId", value = "PROCESS_INSTANCE_ID", required = true, dataType = "Int", example = "100"),
@ApiImplicitParam(name = "executeType", value = "EXECUTE_TYPE", required = true, dataType = "ExecuteType")
@ApiImplicitParam(name = "processInstanceId", value = "PROCESS_INSTANCE_ID", required = true, dataType = "Int", example = "100"),
@ApiImplicitParam(name = "executeType", value = "EXECUTE_TYPE", required = true, dataType = "ExecuteType")
})
@PostMapping(value = "/execute")
@ResponseStatus(HttpStatus.OK)
@ -293,7 +293,7 @@ public class ExecutorController extends BaseController {
*/
@ApiOperation(value = "startCheckProcessDefinition", notes = "START_CHECK_PROCESS_DEFINITION_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "processDefinitionCode", value = "PROCESS_DEFINITION_CODE", required = true, dataType = "Long", example = "100")
@ApiImplicitParam(name = "processDefinitionCode", value = "PROCESS_DEFINITION_CODE", required = true, dataType = "Long", example = "100")
})
@PostMapping(value = "/start-check")
@ResponseStatus(HttpStatus.OK)


@ -15,7 +15,7 @@
* limitations under the License.
*/
package org.apache.dolphinscheduler.server;
package org.apache.dolphinscheduler.api.python;
import org.apache.dolphinscheduler.api.dto.resources.ResourceComponent;
import org.apache.dolphinscheduler.api.enums.Status;
@ -56,7 +56,7 @@ import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectUserMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.server.config.PythonGatewayConfig;
import org.apache.dolphinscheduler.api.configuration.PythonGatewayConfiguration;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.commons.collections.CollectionUtils;
@ -75,17 +75,13 @@ import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.stereotype.Component;
import py4j.GatewayServer;
@SpringBootApplication
@ComponentScan(value = "org.apache.dolphinscheduler")
public class PythonGatewayServer extends SpringBootServletInitializer {
private static final Logger logger = LoggerFactory.getLogger(PythonGatewayServer.class);
@Component
public class PythonGateway {
private static final Logger logger = LoggerFactory.getLogger(PythonGateway.class);
private static final WarningType DEFAULT_WARNING_TYPE = WarningType.NONE;
private static final int DEFAULT_WARNING_GROUP_ID = 0;
@ -141,7 +137,7 @@ public class PythonGatewayServer extends SpringBootServletInitializer {
private DataSourceMapper dataSourceMapper;
@Autowired
private PythonGatewayConfig pythonGatewayConfig;
private PythonGatewayConfiguration pythonGatewayConfiguration;
@Autowired
private ProjectUserMapper projectUserMapper;
@ -546,30 +542,32 @@ public class PythonGatewayServer extends SpringBootServletInitializer {
}
@PostConstruct
public void run() {
GatewayServer server;
try {
InetAddress gatewayHost = InetAddress.getByName(pythonGatewayConfig.getGatewayServerAddress());
InetAddress pythonHost = InetAddress.getByName(pythonGatewayConfig.getPythonAddress());
server = new GatewayServer(
this,
pythonGatewayConfig.getGatewayServerPort(),
pythonGatewayConfig.getPythonPort(),
gatewayHost,
pythonHost,
pythonGatewayConfig.getConnectTimeout(),
pythonGatewayConfig.getReadTimeout(),
null
);
GatewayServer.turnLoggingOn();
logger.info("PythonGatewayServer started on: " + gatewayHost.toString());
server.start();
} catch (UnknownHostException e) {
logger.error("exception occurred while constructing PythonGatewayServer().", e);
public void init() {
if (pythonGatewayConfiguration.getEnabled()) {
this.start();
}
}
public static void main(String[] args) {
SpringApplication.run(PythonGatewayServer.class, args);
private void start() {
GatewayServer server;
try {
InetAddress gatewayHost = InetAddress.getByName(pythonGatewayConfiguration.getGatewayServerAddress());
InetAddress pythonHost = InetAddress.getByName(pythonGatewayConfiguration.getPythonAddress());
server = new GatewayServer(
this,
pythonGatewayConfiguration.getGatewayServerPort(),
pythonGatewayConfiguration.getPythonPort(),
gatewayHost,
pythonHost,
pythonGatewayConfiguration.getConnectTimeout(),
pythonGatewayConfiguration.getReadTimeout(),
null
);
GatewayServer.turnLoggingOn();
logger.info("PythonGatewayService started on: " + gatewayHost.toString());
server.start();
} catch (UnknownHostException e) {
logger.error("exception occurred while constructing PythonGatewayService().", e);
}
}
}


@ -619,7 +619,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
* @param runMode
* @return
*/
private int createComplementCommandList(Date start, Date end, RunMode runMode, Command command,
protected int createComplementCommandList(Date start, Date end, RunMode runMode, Command command,
Integer expectedParallelismNumber, ComplementDependentMode complementDependentMode) {
int createCount = 0;
int dependentProcessDefinitionCreateCount = 0;
@ -713,7 +713,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
/**
* create complement dependent command
*/
private int createComplementDependentCommand(List<Schedule> schedules, Command command) {
protected int createComplementDependentCommand(List<Schedule> schedules, Command command) {
int dependentProcessDefinitionCreateCount = 0;
Command dependentCommand;


@ -214,7 +214,6 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processId);
} else {
processInstance.setWarningGroupId(processDefinition.getWarningGroupId());
processInstance.setLocations(processDefinition.getLocations());
processInstance.setDagData(processService.genDagData(processDefinition));
result.put(DATA_LIST, processInstance);


@ -108,6 +108,26 @@ audit:
metrics:
enabled: true
python-gateway:
# Whether to enable the Python gateway server or not. The default value is true.
enabled: true
# The address the Python gateway server binds to. Set it to `0.0.0.0` if your Python API runs on a different host
# from the Python gateway server. It can also be a specific address such as `127.0.0.1` or `localhost`
gateway-server-address: 0.0.0.0
# The port the Python gateway server listens on. Defines which port the Python API side uses to connect to the
# Python gateway server.
gateway-server-port: 25333
# The address of Python callback client.
python-address: 127.0.0.1
# The port of Python callback client.
python-port: 25334
# Close the socket server connection if no further request is accepted within x milliseconds. A value of 0 means
# infinite: the socket server never closes the connection even if no requests are accepted
connect-timeout: 0
# Close each active connection on the socket server if the Python program is inactive for x milliseconds. A value
# of 0 means infinite: the socket server never closes active connections
read-timeout: 0
# Override by profile
---


@ -219,7 +219,7 @@ QUERY_ALL_DEFINITION_LIST_NOTES=query all definition list
PAGE_NO=page no
PROCESS_INSTANCE_ID=process instance id
PROCESS_INSTANCE_JSON=process instance info(json format)
SCHEDULE_TIME=schedule time
SCHEDULE_TIME=schedule time, empty string indicates the current day
SYNC_DEFINE=update the information of the process instance to the process definition
RECOVERY_PROCESS_INSTANCE_FLAG=whether to recovery process instance
PREVIEW_SCHEDULE_NOTES=preview schedule
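The `scheduleTime` contract documented above (a "start,end" pair of `yyyy-MM-dd HH:mm:ss` datetimes, with an empty string meaning the current day, as in the example `2022-04-06 00:00:00,2022-04-06 00:00:00`) can be sketched like this. `ScheduleTimeSketch` and its method are illustrative names, not DolphinScheduler's actual parser.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ScheduleTimeSketch {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    /** Returns {start, end}; an empty or null input falls back to the given day. */
    public static LocalDateTime[] parse(String scheduleTime, LocalDate today) {
        if (scheduleTime == null || scheduleTime.isEmpty()) {
            // empty string indicates the current day
            return new LocalDateTime[]{today.atStartOfDay(), today.atTime(23, 59, 59)};
        }
        // otherwise a comma-separated "start,end" range is expected
        String[] parts = scheduleTime.split(",");
        return new LocalDateTime[]{
                LocalDateTime.parse(parts[0].trim(), FMT),
                LocalDateTime.parse(parts[1].trim(), FMT)};
    }
}
```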


@ -206,7 +206,7 @@ PROCESS_INSTANCE_ID=流程实例ID
PROCESS_INSTANCE_IDS=流程实例ID集合
PROCESS_INSTANCE_JSON=流程实例信息(json格式)
PREVIEW_SCHEDULE_NOTES=定时调度预览
SCHEDULE_TIME=定时时间
SCHEDULE_TIME=定时时间,空字符串表示当前天
SYNC_DEFINE=更新流程实例的信息是否同步到流程定义
RECOVERY_PROCESS_INSTANCE_FLAG=是否恢复流程实例
SEARCH_VAL=搜索值


@ -202,9 +202,10 @@ public class ExecutorControllerTest extends AbstractControllerTest {
paramsMap.add("processDefinitionCode", String.valueOf(processDefinitionCode));
paramsMap.add("failureStrategy", String.valueOf(failureStrategy));
paramsMap.add("warningType", String.valueOf(warningType));
paramsMap.add("scheduleTime", scheduleTime);
when(executorService.execProcessInstance(any(User.class), eq(projectCode), eq(processDefinitionCode),
eq(null), eq(null), eq(failureStrategy), eq(null), eq(null), eq(warningType),
eq(scheduleTime), eq(null), eq(failureStrategy), eq(null), eq(null), eq(warningType),
eq(0), eq(null), eq(null), eq("default"), eq(-1L),
eq(Constants.MAX_TASK_TIMEOUT), eq(null), eq(null), eq(0),
eq(complementDependentMode))).thenReturn(executeServiceResult);


@ -21,7 +21,7 @@
<parent>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler</artifactId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<artifactId>dolphinscheduler-common</artifactId>
<name>dolphinscheduler-common</name>


@ -327,6 +327,7 @@ public final class Constants {
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
public static final String THREAD_NAME_ALERT_SERVER = "Alert-Server";
/**
* command parameter keys


@ -16,13 +16,12 @@
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.apache.dolphinscheduler</groupId>
<artifactId>dolphinscheduler</artifactId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<artifactId>dolphinscheduler-dao</artifactId>
<name>${project.artifactId}</name>


@ -17,19 +17,18 @@
package org.apache.dolphinscheduler.dao.entity;
import org.apache.dolphinscheduler.common.enums.UdfType;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.commons.lang.StringUtils;
import java.io.IOException;
import java.util.Date;
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.KeyDeserializer;
import org.apache.commons.lang.StringUtils;
import org.apache.dolphinscheduler.common.enums.UdfType;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import java.io.IOException;
import java.util.Date;
/**
* udf function
@ -39,13 +38,23 @@ public class UdfFunc {
/**
* id
*/
@TableId(value="id", type=IdType.AUTO)
@TableId(value = "id", type = IdType.AUTO)
private int id;
/**
* user id
*/
private int userId;
public String getResourceType() {
return resourceType;
}
public void setResourceType(String resourceType) {
this.resourceType = "UDF";
}
@TableField(exist = false)
private String resourceType = "UDF";
/**
* udf function name
*/


@ -966,7 +966,7 @@ CREATE TABLE t_ds_version
-- Records of t_ds_version
-- ----------------------------
INSERT INTO t_ds_version
VALUES ('1', '1.4.0');
VALUES ('1', '3.0.0');
-- ----------------------------


@ -956,7 +956,7 @@ CREATE TABLE `t_ds_version` (
-- ----------------------------
-- Records of t_ds_version
-- ----------------------------
INSERT INTO `t_ds_version` VALUES ('1', '2.0.2');
INSERT INTO `t_ds_version` VALUES ('1', '3.0.0');
-- ----------------------------


@ -965,7 +965,7 @@ INSERT INTO t_ds_queue(queue_name, queue, create_time, update_time)
VALUES ('default', 'default', '2018-11-29 10:22:33', '2018-11-29 10:22:33');
-- Records of t_ds_queue,default queue name : default
INSERT INTO t_ds_version(version) VALUES ('1.4.0');
INSERT INTO t_ds_version(version) VALUES ('3.0.0');
--
-- Table structure for table t_ds_plugin_define


@ -1 +1 @@
2.0.4
3.0.0


@ -1,38 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
-- uc_dolphin_T_t_ds_resources_R_full_name
drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_resources_R_full_name;
delimiter d//
CREATE PROCEDURE uc_dolphin_T_t_ds_resources_R_full_name()
BEGIN
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_NAME='t_ds_resources'
AND TABLE_SCHEMA=(SELECT DATABASE())
AND COLUMN_NAME ='full_name')
THEN
ALTER TABLE t_ds_resources MODIFY COLUMN `full_name` varchar(128);
END IF;
END;
d//
delimiter ;
CALL uc_dolphin_T_t_ds_resources_R_full_name;
DROP PROCEDURE uc_dolphin_T_t_ds_resources_R_full_name;


@ -1,16 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/


@ -1,44 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
delimiter d//
CREATE OR REPLACE FUNCTION public.dolphin_update_metadata(
)
RETURNS character varying
LANGUAGE 'plpgsql'
COST 100
VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
v_schema varchar;
BEGIN
---get schema name
v_schema =current_schema();
--- alter column
EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_resources ALTER COLUMN full_name Type varchar(128)';
return 'Success!';
exception when others then
---Raise EXCEPTION '(%)',SQLERRM;
return SQLERRM;
END;
$BODY$;
select dolphin_update_metadata();
d//


@ -1,17 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/


@ -15,6 +15,28 @@
* limitations under the License.
*/
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
-- uc_dolphin_T_t_ds_resources_R_full_name
drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_resources_R_full_name;
delimiter d//
CREATE PROCEDURE uc_dolphin_T_t_ds_resources_R_full_name()
BEGIN
IF EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_NAME='t_ds_resources'
AND TABLE_SCHEMA=(SELECT DATABASE())
AND COLUMN_NAME ='full_name')
THEN
ALTER TABLE t_ds_resources MODIFY COLUMN `full_name` varchar(128);
END IF;
END;
d//
delimiter ;
CALL uc_dolphin_T_t_ds_resources_R_full_name;
DROP PROCEDURE uc_dolphin_T_t_ds_resources_R_full_name;
ALTER TABLE `t_ds_task_instance` ADD INDEX `idx_code_version` (`task_code`, `task_definition_version`) USING BTREE;
ALTER TABLE `t_ds_task_instance` MODIFY COLUMN `task_params` longtext COMMENT 'job custom parameters' AFTER `app_link`;
ALTER TABLE `t_ds_process_task_relation` ADD KEY `idx_code` (`project_code`, `process_definition_code`) USING BTREE;
@ -210,4 +232,4 @@ CREATE TABLE `t_ds_k8s_namespace` (
`update_time` datetime DEFAULT NULL COMMENT 'update time',
PRIMARY KEY (`id`),
UNIQUE KEY `k8s_namespace_unique` (`namespace`,`k8s`)
) ENGINE= INNODB AUTO_INCREMENT= 1 DEFAULT CHARSET= utf8;
) ENGINE= INNODB AUTO_INCREMENT= 1 DEFAULT CHARSET= utf8;


@ -15,6 +15,7 @@
* limitations under the License.
*/
INSERT INTO `t_ds_dq_comparison_type`
(`id`, `type`, `execute_sql`, `output_table`, `name`, `create_time`, `update_time`, `is_inner_source`)
VALUES(1, 'FixValue', NULL, NULL, NULL, '2021-06-30 00:00:00.000', '2021-06-30 00:00:00.000', false);


@ -16,6 +16,31 @@
*/
delimiter d//
CREATE OR REPLACE FUNCTION public.dolphin_update_metadata(
)
RETURNS character varying
LANGUAGE 'plpgsql'
COST 100
VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
v_schema varchar;
BEGIN
---get schema name
v_schema =current_schema();
--- alter column
EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_resources ALTER COLUMN full_name Type varchar(128)';
return 'Success!';
exception when others then
---Raise EXCEPTION '(%)',SQLERRM;
return SQLERRM;
END;
$BODY$;
select dolphin_update_metadata();
CREATE OR REPLACE FUNCTION public.dolphin_update_metadata(
)
RETURNS character varying
@ -203,4 +228,4 @@ $BODY$;
select dolphin_update_metadata();
d//
d//


@ -14,6 +14,7 @@
* See the License for the specific language governing permissions and
* limitations under the License.
*/
delimiter d//
CREATE OR REPLACE FUNCTION public.dolphin_insert_dq_initial_data(
)


@ -18,10 +18,11 @@
package org.apache.dolphinscheduler.dao.entity;
import org.apache.dolphinscheduler.dao.entity.UdfFunc.UdfFuncDeserializer;
import java.io.IOException;
import org.junit.Assert;
import org.junit.Test;
import java.io.IOException;
public class UdfFuncTest {
/**
@ -35,9 +36,9 @@ public class UdfFuncTest {
udfFunc.setResourceId(2);
udfFunc.setClassName("org.apache.dolphinscheduler.test.mrUpdate");
Assert.assertEquals("{\"id\":0,\"userId\":0,\"funcName\":null,\"className\":\"org.apache.dolphinscheduler.test.mrUpdate\",\"argTypes\":null,\"database\":null,"
+ "\"description\":null,\"resourceId\":2,\"resourceName\":\"dolphin_resource_update\",\"type\":null,\"createTime\":null,\"updateTime\":null}"
, udfFunc.toString());
Assert.assertEquals("{\"id\":0,\"userId\":0,\"resourceType\":\"UDF\",\"funcName\":null,\"className\":\"org.apache.dolphinscheduler.test.mrUpdate\",\"argTypes\":null,\"database\":null,"
+ "\"description\":null,\"resourceId\":2,\"resourceName\":\"dolphin_resource_update\",\"type\":null,\"createTime\":null,\"updateTime\":null}"
, udfFunc.toString());
}
/**


@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>dolphinscheduler-data-quality</artifactId>


@ -15,13 +15,11 @@
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-datasource-plugin</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>


@ -16,13 +16,11 @@
~ limitations under the License.
-->
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>dolphinscheduler-datasource-plugin</artifactId>
<groupId>org.apache.dolphinscheduler</groupId>
<version>2.0.4-SNAPSHOT</version>
<version>3.0.1-alpha-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>

Some files were not shown because too many files have changed in this diff Show More