Compare commits


1 Commit

Author SHA1 Message Date
Jiajie Zhong 00ce7ce606 [common] Using protected in CommonUtils constructor 2022-04-19 13:39:17 +08:00
287 changed files with 1124 additions and 3958 deletions

View File

@ -28,10 +28,6 @@ metadata:
labels:
app.kubernetes.io/name: {{ include "dolphinscheduler.fullname" . }}
{{- include "dolphinscheduler.common.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
rules:
- host: {{ .Values.ingress.host }}

View File

@ -403,7 +403,6 @@ ingress:
enabled: false
host: "dolphinscheduler.org"
path: "/dolphinscheduler"
annotations: {}
tls:
enabled: false
secretName: "dolphinscheduler-tls"
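For reference, a minimal sketch of overriding these ingress values at install time with Helm; the chart path and release name below are assumptions, not taken from this page:

```bash
# Hypothetical helm invocation; adjust the chart path and release name to your checkout.
helm upgrade --install dolphinscheduler ./dolphinscheduler \
  --set ingress.enabled=true \
  --set ingress.host=dolphinscheduler.org \
  --set ingress.path=/dolphinscheduler
```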

View File

@ -243,10 +243,6 @@ export default {
},
],
},
{
title: 'Data Quality',
link: '/en-us/docs/dev/user_doc/guide/data-quality.html',
},
{
title: 'Resource',
link: '/en-us/docs/dev/user_doc/guide/resource.html',
@ -561,10 +557,6 @@ export default {
},
],
},
{
title: 'Data Quality',
link: '/zh-cn/docs/dev/user_doc/guide/data-quality.html',
},
{
title: 'Resource',
link: '/zh-cn/docs/dev/user_doc/guide/resource.html',

View File

@ -397,41 +397,21 @@ apiServers="ds1"
### dolphinscheduler_env.sh [load environment variables configs]
When using shell to commit tasks, DolphinScheduler will export environment variables from `bin/env/dolphinscheduler_env.sh`. The
main configuration includes `JAVA_HOME`, the meta database, the registry center, and task configuration.
When using shell to commit tasks, DolphinScheduler will load the environment variables inside `dolphinscheduler_env.sh` into the host.
The task types involved are: Shell, Python, Spark, Flink, DataX, etc.
```bash
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/opt/soft/java}
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/soft/datax/bin/datax.py
# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-postgresql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_DRIVER_CLASS_NAME
export SPRING_DATASOURCE_URL
export SPRING_DATASOURCE_USERNAME
export SPRING_DATASOURCE_PASSWORD
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
# DolphinScheduler server related configuration
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-UTC}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}
# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-localhost:2181}
# Tasks related configurations, need to change the configuration if you use the related tasks.
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}
export SPARK_HOME1=${SPARK_HOME1:-/opt/soft/spark1}
export SPARK_HOME2=${SPARK_HOME2:-/opt/soft/spark2}
export PYTHON_HOME=${PYTHON_HOME:-/opt/soft/python}
export HIVE_HOME=${HIVE_HOME:-/opt/soft/hive}
export FLINK_HOME=${FLINK_HOME:-/opt/soft/flink}
export DATAX_HOME=${DATAX_HOME:-/opt/soft/datax}
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
### Services logback configs

View File

@ -22,7 +22,7 @@ If you don't care about its internal design, but simply want to know how to deve
* dolphinscheduler-alert-plugins
This module contains the plug-ins we currently provide; dozens of plug-ins are now supported, such as Email, DingTalk, and Script.
This module contains the plug-ins we currently provide, such as Email, DingTalk, and Script.
#### Alert SPI Main class information.
@ -59,8 +59,6 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design]
* DingTalk
Alert for DingTalk group chat bots
Related parameter configuration can refer to the DingTalk robot document.
* EnterpriseWeChat
@ -75,27 +73,3 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design]
* SMS
SMS alerts
* FeiShu
FeiShu alert notification
* Slack
Slack alert notification
* PagerDuty
PagerDuty alert notification
* WebexTeams
WebexTeams alert notification
Related parameter configuration can refer to the WebexTeams document.
* Telegram
Telegram alert notification
Related parameter configuration can refer to the Telegram document.
* Http
We have implemented an HTTP script for alerting, and most alert plug-in calls end up as HTTP requests; if we do not support your alert plug-in yet, you can use HTTP to implement your alert logic. You are also welcome to contribute your common plug-ins to the community :)

View File

@ -218,7 +218,7 @@ A: 1, in **the process definition list**, click the **Start** button.
## Q : Python task setting Python version
A: 1, **for versions after 1.0.3**, only need to modify `PYTHON_HOME` in `bin/env/dolphinscheduler_env.sh`
A: 1, **for versions after 1.0.3**, only need to modify `PYTHON_HOME` in `conf/env/.dolphinscheduler_env.sh`
```
export PYTHON_HOME=/bin/python

View File

@ -2,11 +2,7 @@
## How to Create Alert Plugins and Alert Groups
In version 2.0.0, users need to create alert instances, and when defining an alert instance they need to choose an alarm policy; there are three options: send if the task succeeds, send on failure, and send on both success and failure. When the workflow or task is executed and an alarm is triggered, calling the alert instance's send method performs a logical judgment that matches the alert instance with the task status: if it matches, that alert instance's sending logic executes; if not, it is filtered out. After creating alert instances, associate them with alert groups; an alert group can use multiple alert instances.
The alarm module supports the following scenarios:
<img src="/img/alert/alert_scenarios_en.png">
The steps to use are as follows:
In version 2.0.0, users need to create alert instances and then associate them with alert groups. An alert group can use multiple alert instances, and notifications are sent through them one by one.
First, go to the Security Center page. Select Alarm Group Management, click Alarm Instance Management on the left and create an alarm instance. Select the corresponding alarm plug-in and fill in the relevant alarm parameters.
@ -15,4 +11,4 @@ Then select Alarm Group Management, create an alarm group, and choose the corres
<img src="/img/alert/alert_step_1.png">
<img src="/img/alert/alert_step_2.png">
<img src="/img/alert/alert_step_3.png">
<img src="/img/alert/alert_step_4.png">
<img src="/img/alert/alert_step_4.png">

View File

@ -1,310 +0,0 @@
# Overview
## Introduction
The data quality task is used to check the data accuracy during the integration and processing of data. Data quality tasks in this release include single-table checking, single-table custom SQL checking, multi-table accuracy, and two-table value comparisons. The running environment of the data quality task is Spark 2.4.0; other versions have not been verified, and users can verify them by themselves.
- The execution flow of the data quality task is as follows:
> The user defines the task in the interface, and the user input values are stored in `TaskParam`.
When running a task, `Master` will parse `TaskParam`, encapsulate the parameters required by `DataQualityTask`, and send them to `Worker`.
`Worker` runs the data quality task; after the data quality task finishes running, it writes the statistical results to the specified storage engine. Currently the data quality task results are stored in the `t_ds_dq_execute_result` table of `dolphinscheduler`.
`Worker` sends the task result to `Master`. After `Master` receives `TaskResponse`, it will judge whether the task type is `DataQualityTask`; if so, it will read the corresponding result from `t_ds_dq_execute_result` according to `taskInstanceId`, and then the result is judged according to the check method, operator, and threshold configured by the user. If the result is a failure, the corresponding operation, alarm or interruption, will be performed according to the failure policy configured by the user.
Add the configuration in `<server-name>/conf/common.properties`:
```properties
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
```
Please fill in `data-quality.jar.name` according to the actual package name.
If you package `data-quality` separately, remember to keep the package name consistent with `data-quality.jar.name`.
If upgrading from an old version, you need to execute the `sql` update script to initialize the database before running.
If you want to use a `MySQL` data source, you need to comment out the `scope` of `MySQL` in `pom.xml`.
Currently only `MySQL`, `PostgreSQL`, and `HIVE` data sources have been tested; other data sources have not been tested yet.
`Spark` needs to be configured to read the `Hive` metadata; `Spark` does not use `jdbc` to read `Hive`.
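One common way to let `Spark` read the `Hive` metastore is to put `hive-site.xml` on Spark's classpath; a sketch, assuming the `HIVE_HOME` and `SPARK_HOME2` layout used elsewhere on this page:

```bash
# Copy the Hive metastore configuration into Spark's conf directory (paths are assumptions).
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME2/conf/"
```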
## Detail
- CheckMethod: [CheckFormula][Operator][Threshold], if the result is true, it indicates that the data does not meet expectations, and the failure strategy is executed.
- CheckFormula
- Expected-Actual
- Actual-Expected
- (Actual/Expected)x100%
- (Expected-Actual)/Expected x100%
- Operator: =, >, >=, <, <=, !=
- ExpectedValue
- FixValue
- DailyAvg
- WeeklyAvg
- MonthlyAvg
- Last7DayAvg
- Last30DayAvg
- SrcTableTotalRows
- TargetTableTotalRows
- Example
  - CheckFormula: Expected-Actual
  - Operator: >
  - Threshold: 0
  - ExpectedValue: FixValue=9
Assuming the actual value is 10, the operator is >, and the expected value is 9, the result 10 - 9 > 0 is true, which means the number of rows where the column is empty has exceeded the threshold, and the task is judged to fail.
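A minimal shell sketch of the comparison above; it only illustrates the check formula and is not actual DolphinScheduler code:

```bash
actual=10; expected=9; threshold=0   # values from the example above
# The comparison from the example: 10 - 9 > 0 is true, so the check fails.
if [ $((actual - expected)) -gt "$threshold" ]; then
  echo "check is true: execute the configured failure strategy (alert or block)"
fi
```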
# Guide
## NullCheck
### Introduction
The goal of the null value check is to check the number of empty rows in the specified column. The number of empty rows can be compared with the total number of rows or a specified threshold. If it is greater than a certain threshold, it will be judged as failure.
- Calculate the SQL statement that the specified column is empty as follows:
```sql
SELECT COUNT(*) AS miss FROM ${src_table} WHERE (${src_field} is null or ${src_field} = '') AND (${src_filter})
```
- The SQL to calculate the total number of rows in the table is as follows:
```sql
SELECT COUNT(*) AS total FROM ${src_table} WHERE (${src_filter})
```
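For illustration, the two statements rendered with hypothetical values substituted in (`src_table=users`, `src_field=email`, `src_filter=1=1`; the table and column names are placeholders, not from this page):

```bash
# Hypothetical rendering of the null-check SQL against a MySQL source.
mysql -u dolphinscheduler -p -e "
SELECT COUNT(*) AS miss  FROM users WHERE (email IS NULL OR email = '') AND (1=1);
SELECT COUNT(*) AS total FROM users WHERE (1=1);"
```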
### UI Guide
![dataquality_null_check](/img/tasks/demo/null_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Timeliness Check
### Introduction
The timeliness check is used to check whether the data is processed within the expected time. The start time and end time can be specified to define the time range. If the amount of data within the time range does not reach the set threshold, the check task will be judged as fail
### UI Guide
![dataquality_timeliness_check](/img/tasks/demo/timeliness_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Start time: the start time of a time range
- End time: the end time of a time range
- Time format: set the corresponding time format
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Field Length Check
### Introduction
The goal of field length verification is to check whether the length of the selected field meets the expectations. If there is data that does not meet the requirements, and the number of rows exceeds the threshold, the task will be judged to fail
### UI Guide
![dataquality_length_check](/img/tasks/demo/field_length_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Logical operators: =, >, >=, <, <=, !=
- Field length limit: as the title says
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Uniqueness Check
### Introduction
The goal of the uniqueness check is to check whether the field is duplicated. It is generally used to check whether the primary key is duplicated. If there is duplication and the threshold is reached, the check task will be judged to be failed.
### UI Guide
![dataquality_uniqueness_check](/img/tasks/demo/uniqueness_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Regular Expression Check
### Introduction
The goal of regular expression verification is to check whether the format of the value of a field meets the requirements, such as time format, email format, ID card format, etc. If there is data that does not meet the format and exceeds the threshold, the task will be judged as failed.
### UI Guide
![dataquality_regex_check](/img/tasks/demo/regexp_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Regular expression: as the title says
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Enumeration Check
### Introduction
The goal of enumeration value verification is to check whether the value of a field is within the range of enumeration values. If there is data that is not in the range of enumeration values and exceeds the threshold, the task will be judged to fail
### UI Guide
![dataquality_enum_check](/img/tasks/demo/enumeration_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src table filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- List of enumeration values: separated by commas
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Table Count Check
### Introduction
The goal of table row number verification is to check whether the number of rows in the table reaches the expected value. If the number of rows does not meet the standard, the task will be judged as failed.
### UI Guide
![dataquality_count_check](/img/tasks/demo/table_count_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the validation data is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Src table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Custom SQL Check
### Introduction
### UI Guide
![dataquality_custom_sql_check](/img/tasks/demo/custom_sql_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Actual value name: the alias in the SQL for the calculated statistical value, such as max_num
- Actual value calculation SQL: the SQL used to output the actual value.
  - Note: The SQL must be statistical SQL, such as counting the number of rows or calculating the maximum or minimum value.
  - `select max(a) as max_num from ${src_table}`; the table name must be filled in as `${src_table}`.
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Check method:
- Check operators: =, >, >=, <, <=, !=
- Threshold: The value used in the formula for comparison
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type from the drop-down menu
## Accuracy check of multi-table
### Introduction
Accuracy checks are performed by comparing the differences of data records for selected fields between two tables; an example is as follows:
- table test1
| c1 | c2 |
| :---: | :---: |
| a | 1 |
| b | 2 |
- table test2
| c21 | c22 |
| :---: | :---: |
| a | 1 |
| b | 3 |
If you compare the data in c1 and c21, the tables test1 and test2 are exactly the same. If you compare c2 and c22, the data in table test1 and table test2 are inconsistent.
### UI Guide
![dataquality_multi_table_accuracy_check](/img/tasks/demo/multi_table_accuracy_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Src filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Target data type: choose MySQL, PostgreSQL, etc.
- Target data source: the corresponding data source under the target data type
- Target data table: drop-down to select the table where the data to be verified is located
- Target filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Check column:
  - Fill in the source data column, operator, and target data column respectively
- Verification method: select the desired verification method
- Operators: =, >, >=, <, <=, !=
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
- Expected value type: select the desired type in the drop-down menu; only SrcTableTotalRows, TargetTableTotalRows, and fixed value are suitable for selection here
## Comparison of the values checked by the two tables
### Introduction
Two-table value comparison allows users to customize different SQL statistics for two tables and compare the corresponding values. For example, for source table A, calculate the total amount of a certain column as sum1; for the target table, calculate the total amount of a certain column as sum2; then compare sum1 and sum2 to determine the check result.
### UI Guide
![dataquality_multi_table_comparison_check](/img/tasks/demo/multi_table_comparison_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: the table where the data to be verified is located
- Actual value name: the alias in the actual value calculation SQL, such as max_age1
- Actual value calculation SQL: the SQL used to output the actual value.
  - Note: The SQL must be statistical SQL, such as counting the number of rows or calculating the maximum or minimum value.
  - `select max(age) as max_age1 from ${src_table}`; the table name must be filled in as `${src_table}`.
- Target data type: choose MySQL, PostgreSQL, etc.
- Target data source: the corresponding data source under the target data type
- Target data table: the table where the data to be verified is located
- Expected value name: the alias in the expected value calculation SQL, such as max_age2
- Expected value calculation SQL: the SQL used to output the expected value.
  - Note: The SQL must be statistical SQL, such as counting the number of rows or calculating the maximum or minimum value.
  - `select max(age) as max_age2 from ${target_table}`; the table name must be filled in as `${target_table}`.
- Verification method: select the desired verification method
- Operators: =, >, >=, <, <=, !=
- Failure strategy
- Alert: The data quality task failed, the DolphinScheduler task result is successful, and an alert is sent
- Blocking: The data quality task fails, the DolphinScheduler task result is failed, and an alarm is sent
## Task Result View
![dataquality_result](/img/tasks/demo/result.png)
## Rule View
### Rule List
![dataquality_rule_list](/img/tasks/demo/rule_list.png)
### Rule Details
![dataquality_rule_detail](/img/tasks/demo/rule_detail.png)

View File

@ -73,10 +73,10 @@ sed -i 's/Defaults requirett/#Defaults requirett/g' /etc/sudoers
datasource.properties: database connection information
zookeeper.properties: information for connecting zk
common.properties: Configuration information about the resource store (if hadoop is set up, please check if the core-site.xml and hdfs-site.xml configuration files exist).
dolphinscheduler_env.sh: environment Variables
env/dolphinscheduler_env.sh: environment Variables
````
- Modify the `dolphinscheduler_env.sh` environment variables in the `bin/env/dolphinscheduler_env.sh` file according to the machine configuration (the following example assumes all the software used is installed under `/opt/soft`)
- Modify the `dolphinscheduler_env.sh` environment variables in the `conf/env` directory according to the machine configuration (the following example assumes all the software used is installed under `/opt/soft`)
```shell
export HADOOP_HOME=/opt/soft/hadoop

View File

@ -6,7 +6,7 @@ If you are a new hand and want to experience DolphinScheduler functions, we reco
## Deployment Steps
Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and deployment steps are the same as pseudo-cluster deployment. The difference is that pseudo-cluster deployment is for one machine, while cluster deployment (Cluster) is for multiple machines. And steps of "Modify Configuration" are quite different between pseudo-cluster deployment and cluster deployment.
Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and deployment steps are the same as pseudo-cluster deployment. The difference is that [pseudo-cluster deployment](pseudo-cluster.md) is for one machine, while cluster deployment (Cluster) is for multiple machines. And steps of "Modify Configuration" are quite different between pseudo-cluster deployment and cluster deployment.
### Prerequisites and DolphinScheduler Startup Environment Preparations
@ -32,8 +32,8 @@ apiServers="ds5"
## Start and Login DolphinScheduler
Same as [pseudo-cluster](pseudo-cluster.md)
Same as [pseudo-cluster.md](pseudo-cluster.md)
## Start and Stop Server
Same as [pseudo-cluster](pseudo-cluster.md)
Same as [pseudo-cluster.md](pseudo-cluster.md)

View File

@ -87,13 +87,7 @@ sh script/create-dolphinscheduler.sh
## Modify Configuration
After completing the preparation of the basic environment, you need to modify the configuration file according to the
environment you used. The configuration files are both in directory `bin/env` and named `install_env.sh` and `dolphinscheduler_env.sh`.
### Modify `install_env.sh`
File `install_env.sh` describes which machines DolphinScheduler will be installed on and which services will be installed on
each machine. You can find this file in the path `bin/env/install_env.sh`; the details of the configuration are as below.
After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is in the path of `conf/config/install_config.conf`. Generally, you just need to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server** part to complete the deployment, the following describes the parameters that must be modified:
```shell
# ---------------------------------------------------------
@ -111,73 +105,51 @@ installPath="~/dolphinscheduler"
# Deploy user, use the user you create in section **Configure machine SSH password-free login**
deployUser="dolphinscheduler"
```
### Modify `dolphinscheduler_env.sh`
# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# The path of JAVA_HOME, which JDK install path in section **Preparation**
javaHome="/your/java/home/here"
File `dolphinscheduler_env.sh` describes the database configuration of DolphinScheduler, which is in the path `bin/env/dolphinscheduler_env.sh`,
and some tasks which need external dependencies or libraries such as `JAVA_HOME` and `SPARK_HOME`. You can ignore the
task external dependencies if you do not use those tasks, but you have to change `JAVA_HOME`, registry center, and database
related configurations based on your environment.
# ---------------------------------------------------------
# Database
# ---------------------------------------------------------
# Database type, username, password, IP, port, metadata. For now `dbtype` supports `mysql` and `postgresql`
dbtype="mysql"
dbhost="localhost:3306"
# Need to modify if you are not using `dolphinscheduler/dolphinscheduler` as your username and password
username="dolphinscheduler"
password="dolphinscheduler"
dbname="dolphinscheduler"
```sh
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/custom/path}
# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-postgresql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
export SPRING_DATASOURCE_URL="jdbc:postgresql://127.0.0.1:5432/dolphinscheduler"
export SPRING_DATASOURCE_USERNAME="username"
export SPRING_DATASOURCE_PASSWORD="password"
# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-localhost:2181}
# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registration center address, the address of ZooKeeper service
registryServers="localhost:2181"
```
## Initialize the Database
DolphinScheduler metadata is stored in the relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the lib directory of DolphinScheduler, which is `tools/libs/`. Let's take MySQL as an example of how to initialize the database:
For MySQL 5.6 / 5.7:
DolphinScheduler metadata is stored in the relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example of how to initialize the database:
```shell
mysql -uroot -p
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
# Replace {user} and {password} with your username and password
# Change {user} and {password} by requests
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
mysql> flush privileges;
```
For mysql 8:
```shell
mysql -uroot -p
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
# Replace {user} and {password} with your username and password
mysql> CREATE USER '{user}'@'%' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%';
mysql> CREATE USER '{user}'@'localhost' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost';
mysql> FLUSH PRIVILEGES;
```
Change the username and password in `tools/conf/application.yaml` to the {user} and {password} you set in the previous step.
Then, modify `tools/bin/dolphinscheduler_env.sh` to set mysql as the default database: `export DATABASE=${DATABASE:-mysql}`.
After the above steps are done, you will have created a new database for DolphinScheduler; then run the shell scripts to init the database:
```shell
sh tools/bin/create-schema.sh
sh script/create-dolphinscheduler.sh
```
## Start DolphinScheduler
@ -185,7 +157,7 @@ sh tools/bin/create-schema.sh
Use **deployment user** you created above, running the following command to complete the deployment, and the server log will be stored in the logs folder.
```shell
sh ./bin/install.sh
sh install.sh
```
> **_Note:_** For the first time deployment, five occurrences of `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear in the terminal,
@ -221,12 +193,7 @@ sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server
```
> **_Note 1:_** Each server has a `dolphinscheduler_env.sh` file in path `<server-name>/conf/dolphinscheduler_env.sh`, which
> serves micro-service needs. It means that you can start each server with environment variables different from those in
> `bin/env/dolphinscheduler_env.sh` by using the command `<server-name>/bin/start.sh`. But `bin/env/dolphinscheduler_env.sh` will overwrite
> `<server-name>/conf/dolphinscheduler_env.sh` if you start a server with the command `/bin/dolphinscheduler-daemon.sh start <server-name>`.
> **_Note 2:_** Please refer to the section of "System Architecture Design" for service usage. The Python gateway service is
> **_Note:_** Please refer to the section of "System Architecture Design" for service usage. The Python gateway service is
> started along with the api-server, and if you do not want to start the Python gateway service, disable it by changing
> the yaml config `python-gateway.enabled: false` in the api-server's configuration path `api-server/conf/application.yaml`
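A sketch of the two start modes described in Note 1, using api-server as the example service:

```bash
# Start api-server with its own <server-name>/conf/dolphinscheduler_env.sh
api-server/bin/start.sh

# Start via the daemon script; this overwrites the per-server env file with
# bin/env/dolphinscheduler_env.sh before starting the service.
./bin/dolphinscheduler-daemon.sh start api-server
```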

View File

@ -40,12 +40,12 @@ Please download the source package apache-dolphinscheduler-x.x.x-src.tar.gz from
```
$ tar -zxvf apache-dolphinscheduler-<version>-src.tar.gz
$ cd apache-dolphinscheduler-<version>-src/deploy/docker
$ cd apache-dolphinscheduler-<version>-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:<version>
$ docker tag apache/dolphinscheduler:<version> apache/dolphinscheduler:latest
$ docker-compose up -d
```
> PowerShell should use `cd apache-dolphinscheduler-<version>-src\deploy\docker`
> PowerShell should use `cd apache-dolphinscheduler-<version>-src\docker\docker-swarm`
**PostgreSQL** (user `root`, password `root`, database `dolphinscheduler`) and **ZooKeeper** services will be started by default
@ -225,15 +225,15 @@ Lists all running containers:
```
docker ps
docker ps --format "{{.Names}}" # Show container name only
docker ps --format "{{.Names}}" # Print container names only
```
View the logs of the container named docker-swarm_dolphinscheduler-api_1:
```
docker logs docker-swarm_dolphinscheduler-api_1
docker logs -f docker-swarm_dolphinscheduler-api_1 # Follow the latest logs
docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # Follow the latest ten lines of logs
docker logs -f docker-swarm_dolphinscheduler-api_1 # Follow the log output
docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # Show the last 10 lines of logs
```
### How to scale master and worker with docker-compose?
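The answer is cut off in this hunk; one common Compose approach (the service names below are assumptions based on the container names above) is the `--scale` flag:

```bash
# Scale workers to 3 and masters to 2; adjust service names to your docker-compose.yml.
docker-compose up -d --scale dolphinscheduler-worker=3 --scale dolphinscheduler-master=2
```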
@ -977,7 +977,7 @@ Configure the mail service port for `alert-server`, default value `empty`.
Configure the mail sender for `alert-server`, default value `empty`.
**`MAIL_USER`**
**`MAIL_USER=`**
Configure the user name of the mail service for `alert-server`, default value `empty`.

View File

@ -40,7 +40,7 @@ This example demonstrates how to import data from Hive into MySQL.
### Configure the DataX environment in DolphinScheduler
If you are using the DataX task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `bin/env/dolphinscheduler_env.sh`.
If you are using the DataX task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![datax_task01](/img/tasks/demo/datax_task01.png)
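For reference, the DataX-related entries of `dolphinscheduler_env.sh` as shown earlier on this page (the `/opt/soft` paths are the documented defaults, not requirements):

```bash
export DATAX_HOME=${DATAX_HOME:-/opt/soft/datax}
export PATH=$DATAX_HOME/bin:$PATH
```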

View File

@ -46,13 +46,13 @@ This is a common introductory case in the big data ecosystem, which often apply
#### Configure the flink environment in DolphinScheduler
If you are using the flink task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `bin/env/dolphinscheduler_env.sh`.
If you are using the flink task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![demo-flink-simple](/img/tasks/demo/flink_task01.png)
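For reference, the Flink-related entries of `dolphinscheduler_env.sh` as shown earlier on this page (the `/opt/soft` path is the documented default):

```bash
export FLINK_HOME=${FLINK_HOME:-/opt/soft/flink}
export PATH=$FLINK_HOME/bin:$PATH
```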
#### Upload the Main Package
When using the Flink task node, you need to upload the jar package to the Resource Center for the execution, refer to the [resource center](../resource.md).
When using the Flink task node, you need to upload the jar package to the Resource Centre for the execution, refer to the [resource center](../resource.md).
After finishing the Resource Centre configuration, upload the required target files directly by dragging and dropping.

View File

@ -54,7 +54,7 @@ This example is a common introductory type of MapReduce application, which used
#### Configure the MapReduce Environment in DolphinScheduler
If you are using the MapReduce task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `bin/env/dolphinscheduler_env.sh`.
If you are using the MapReduce task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![mr_configure](/img/tasks/demo/mr_task01.png)
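For reference, the Hadoop-related entries of `dolphinscheduler_env.sh` as shown earlier on this page (the `/opt/soft` paths are the documented defaults):

```bash
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}
export PATH=$HADOOP_HOME/bin:$PATH
```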

View File

@ -45,7 +45,7 @@ This is a common introductory case in the big data ecosystem, which often apply
#### Configure the Spark Environment in DolphinScheduler
If you are using the Spark task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `bin/env/dolphinscheduler_env.sh`.
If you are using the Spark task type in a production environment, it is necessary to configure the required environment first. The following is the configuration file: `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![spark_configure](/img/tasks/demo/spark_task01.png)
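For reference, the Spark-related entries of `dolphinscheduler_env.sh` as shown earlier on this page (the `/opt/soft` paths are the documented defaults):

```bash
export SPARK_HOME2=${SPARK_HOME2:-/opt/soft/spark2}
export PATH=$SPARK_HOME2/bin:$PATH
```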

View File

@ -13,17 +13,24 @@
## Database Upgrade
- Change `username` and `password` in `./tools/conf/application.yaml` to yours.
- Modify the following properties in `conf/datasource.properties`.
- If using MySQL as the database to run DolphinScheduler, please configure it in `./tools/bin/dolphinscheduler_env.sh`, and add the MySQL connector jar into the lib dir `./tools/lib`; here we download `mysql-connector-java-8.0.16.jar` and then correctly configure the database connection information. You can download the MySQL connector jar from [here](https://downloads.MySQL.com/archives/c-j/). Otherwise, PostgreSQL is the default database.
- If using MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL related configurations and add the MySQL connector jar into the lib dir; here we download `mysql-connector-java-8.0.16.jar` and then correctly configure the database connection information. You can download the MySQL connector jar from [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use PostgreSQL as the database, you just need to comment out the MySQL related configurations and correctly configure the database connection information.
```shell
export DATABASE=${DATABASE:-mysql}
```properties
# postgre
#spring.datasource.driver-class-name=org.postgresql.Driver
#spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
# mysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
spring.datasource.username=xxx
spring.datasource.password=xxx
```
- Execute database upgrade script:
`sh ./tools/bin/upgrade-schema.sh`
`sh ./script/upgrade-dolphinscheduler.sh`
## Backend Service Upgrade

View File

@ -380,42 +380,21 @@ apiServers="ds1"
```
## 11. dolphinscheduler_env.sh [environment variable configuration]
When tasks are committed in a shell-like way, the environment variables in this configuration file are loaded into the host. It covers `JAVA_HOME`, the meta database, the registry center, and task
type configuration; the task types mainly include: Shell, Python, Spark, Flink, DataX, etc.
When tasks are committed in a shell-like way, the environment variables in this configuration file are loaded into the host.
The task types involved are: Shell, Python, Spark, Flink, DataX, etc.
```bash
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/opt/soft/java}
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
export SPARK_HOME1=/opt/soft/spark1
export SPARK_HOME2=/opt/soft/spark2
export PYTHON_HOME=/opt/soft/python
export JAVA_HOME=/opt/soft/java
export HIVE_HOME=/opt/soft/hive
export FLINK_HOME=/opt/soft/flink
export DATAX_HOME=/opt/soft/datax/bin/datax.py
# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-postgresql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_DRIVER_CLASS_NAME
export SPRING_DATASOURCE_URL
export SPRING_DATASOURCE_USERNAME
export SPRING_DATASOURCE_PASSWORD
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH:$FLINK_HOME/bin:$DATAX_HOME:$PATH
# DolphinScheduler server related configuration
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-UTC}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}
# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-localhost:2181}
# Tasks related configurations, need to change the configuration if you use the related tasks.
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}
export SPARK_HOME1=${SPARK_HOME1:-/opt/soft/spark1}
export SPARK_HOME2=${SPARK_HOME2:-/opt/soft/spark2}
export PYTHON_HOME=${PYTHON_HOME:-/opt/soft/python}
export HIVE_HOME=${HIVE_HOME:-/opt/soft/hive}
export FLINK_HOME=${FLINK_HOME:-/opt/soft/flink}
export DATAX_HOME=${DATAX_HOME:-/opt/soft/datax}
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
## 12. Logback configuration files of each service

View File

@ -22,7 +22,7 @@ DolphinScheduler is undergoing a change to a microkernel + plug-in architecture, where all
* dolphinscheduler-alert-plugins
This module contains the plug-ins we currently provide; dozens of plug-ins are now supported, such as Email, DingTalk, and Script.
This module contains the plug-ins we currently provide, such as Email, DingTalk, and Script.
#### Alert SPI main class information:
@ -61,7 +61,6 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design](https://github.com/ap
Alert for DingTalk group chat bots
Related parameter configuration can refer to the DingTalk robot documentation.
* EnterpriseWeChat
EnterpriseWeChat alert notification
@ -70,24 +69,3 @@ The specific design of alert_spi can be seen in the issue: [Alert Plugin Design](https://github.com/ap
* Script
We implemented Shell script alerting: the relevant alert parameters are passed through to the script, and you can implement your own alert logic in the Shell. This is a good approach if you need to interface with an internal alerting application.
* FeiShu
FeiShu alert notification
* Slack
Slack alert notification
* PagerDuty
PagerDuty alert notification
* WebexTeams
WebexTeams alert notification
Related parameter configuration can refer to the WebexTeams documentation.
* Telegram
Telegram alert notification
Related parameter configuration can refer to the Telegram documentation.
* Http
We implemented an HTTP alert: most alert plug-in calls end up as HTTP requests, so if we do not support your usual plug-in yet, you can use HTTP to meet your alerting needs. Contributions of your commonly used plug-ins to the community are also welcome.

View File

@ -203,7 +203,7 @@ A: 1, in **the process definition list**, click the **Start** button
## Q: Python task setting Python version
A: Only need to modify `PYTHON_HOME` in `bin/env/dolphinscheduler_env.sh`
A: Only need to modify `PYTHON_HOME` in `conf/env/dolphinscheduler_env.sh`
```
export PYTHON_HOME=/bin/python

View File

@ -1,10 +1,6 @@
## How to Create Alert Plugins and Alert Groups
In version 2.0.0, users need to create alert instances, and when creating an alert instance they need to choose an alarm policy; there are three options: send on success, send on failure, and send on both success and failure. When a workflow or task finishes and an alarm is triggered, calling the alert instance's send method performs a logical judgment that matches the alert instance with the task status: if it matches, that alert instance's sending logic executes; otherwise it is filtered out. After creating alert instances, associate them with an alert group; one alert group can use multiple alert instances.
The alarm module supports the following scenarios:
<img src="/img/alert/alert_scenarios_zh.png">
The steps to use are as follows:
In version 2.0.0, users need to create alert instances and then associate them with alert groups. An alert group can use multiple alert instances, and notifications are sent through them one by one.
First, go to the Security Center, select Alarm Group Management, click Alarm Instance Management on the left, and create an alarm instance; then select the corresponding alarm plug-in and fill in the relevant alarm parameters.
@ -13,4 +9,4 @@
<img src="/img/alert/alert_step_1.png">
<img src="/img/alert/alert_step_2.png">
<img src="/img/alert/alert_step_3.png">
<img src="/img/alert/alert_step_4.png">
<img src="/img/alert/alert_step_4.png">

View File

@ -1,313 +0,0 @@
# Overview
## Task Type Introduction
The data quality task is used to check the data accuracy during the integration and processing of data. Data quality tasks in this release include single-table checking, single-table custom SQL checking, multi-table accuracy, and two-table value comparisons. The running environment of the data quality task is Spark 2.4.0; other versions have not been verified, and users can verify them by themselves.
- The execution flow of the data quality task is as follows:
> The user defines the task in the interface, and the user input values are stored in `TaskParam`.
When running a task, `Master` parses `TaskParam`, encapsulates the parameters required by `DataQualityTask`, and sends them to `Worker`.
`Worker` runs the data quality task; after it finishes, the task writes the statistical results to the specified storage engine. Currently the data quality task results are stored in the `t_ds_dq_execute_result` table of `dolphinscheduler`.
`Worker` sends the task result to `Master`. After `Master` receives `TaskResponse`, it judges whether the task type is `DataQualityTask`; if so, it reads the corresponding result from `t_ds_dq_execute_result` according to `taskInstanceId`, and then judges the result according to the check method, operator, and threshold configured by the user. If the result is a failure, the corresponding operation, alarm or interruption, is performed according to the failure strategy configured by the user.
## Notes
Add the configuration in `<server-name>/conf/common.properties`:
```properties
data-quality.jar.name=dolphinscheduler-data-quality-dev-SNAPSHOT.jar
```
Please fill in `data-quality.jar.name` according to the actual package name.
If you package `data-quality` separately, remember to keep the package name consistent with `data-quality.jar.name`.
If upgrading from an old version, execute the `sql` update script to initialize the database before running.
If you want to use a `MySQL` data source, comment out the `scope` of `MySQL` in `pom.xml`.
Currently only `MySQL`, `PostgreSQL`, and `HIVE` data sources have been tested; other data sources have not been tested yet.
`Spark` needs to be configured to read the `Hive` metadata; `Spark` does not use `jdbc` to read `Hive`.
## Check Logic Details
- Check formula: [CheckMethod][Operator][Threshold]; if the result is true, it indicates that the data does not meet expectations, and the failure strategy is executed.
- Check methods:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Operators: =, >, >=, <, <=, !=
- Expected value types
  - Fixed value
  - Daily average
  - Weekly average
  - Monthly average
  - Average of the last 7 days
  - Average of the last 30 days
  - Total rows of the source table
  - Total rows of the target table
- Example
  - Check method: [Expected-Actual]
  - Operator: >
  - Threshold: 0
  - Expected value type: fixed value = 9
Assuming the actual value is 10, the operator is >, and the expected value is 9, the result 10 - 9 > 0 is true, which means the number of rows where the column is empty has exceeded the threshold, and the task is judged to fail.
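A minimal shell sketch of the comparison above; it only illustrates the check formula and is not actual DolphinScheduler code:

```bash
actual=10; expected=9; threshold=0   # values from the example above
if [ $((actual - expected)) -gt "$threshold" ]; then   # 10 - 9 > 0
  echo "check is true: execute the configured failure strategy (alert or block)"
fi
```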
# Task Operation Guide
## Single-Table Check: Null Value Check
### Check Introduction
The goal of the null value check is to check the number of rows where the specified column is empty. The number of empty rows can be compared with the total number of rows or with a specified threshold; if it is greater than the threshold, the check is judged to fail.
- The SQL statement that counts rows where the specified column is empty is as follows:
```sql
SELECT COUNT(*) AS miss FROM ${src_table} WHERE (${src_field} is null or ${src_field} = '') AND (${src_filter})
```
- The SQL that counts the total number of rows in the table is as follows:
```sql
SELECT COUNT(*) AS total FROM ${src_table} WHERE (${src_filter})
```
### UI Guide
![dataquality_null_check](/img/tasks/demo/null_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Timeliness Check
### Check Introduction
The timeliness check is used to check whether data is processed within the expected time. A start time and an end time can be specified to define a time range; if the amount of data within that range does not reach the configured threshold, the check task is judged to fail.
### UI Guide
![dataquality_timeliness_check](/img/tasks/demo/timeliness_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Start time: the start of the time range
- End time: the end of the time range
- Time format: set the corresponding time format
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Field Length Check
### Check Introduction
The goal of the field length check is to check whether the length of the selected field meets expectations. If data that does not meet the requirement exists and the number of such rows exceeds the threshold, the task is judged to fail.
### UI Guide
![dataquality_length_check](/img/tasks/demo/field_length_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Logical operators: =, >, >=, <, <=, !=
- Field length limit: as the title says
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Uniqueness Check
### Check Introduction
The goal of the uniqueness check is to check whether field values are duplicated; it is generally used to check whether the primary key is duplicated. If duplicates exist and reach the threshold, the check task is judged to fail.
### UI Guide
![dataquality_uniqueness_check](/img/tasks/demo/uniqueness_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Regular Expression Check
### Check Introduction
The goal of the regular expression check is to check whether the format of a field's value meets the requirement, such as a time format, email format, ID card format, etc. If data that does not match the format exists and exceeds the threshold, the task is judged to fail.
### UI Guide
![dataquality_regex_check](/img/tasks/demo/regexp_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Regular expression: as the title says
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Enumeration Check
### Check Introduction
The goal of the enumeration check is to check whether a field's value is within the range of enumeration values. If data outside the enumeration range exists and exceeds the threshold, the task is judged to fail.
### UI Guide
![dataquality_enum_check](/img/tasks/demo/enumeration_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source table filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- List of enumeration values: separated by commas
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Table Row Count Check
### Check Introduction
The goal of the table row count check is to check whether the number of rows in the table reaches the expected value; if not, the task is judged to fail.
### UI Guide
![dataquality_count_check](/img/tasks/demo/table_count_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Source table check column: drop-down to select the check column name
- Check method:
  - [Expected-Actual]
  - [Actual-Expected]
  - [Actual/Expected]x100%
  - [(Expected-Actual)/Expected]x100%
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Single-Table Check: Custom SQL Check
### Check Introduction
### UI Guide
![dataquality_custom_sql_check](/img/tasks/demo/custom_sql_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Actual value name: the alias for the calculated statistical value in the SQL, such as max_num
- Actual value calculation SQL: the SQL used to output the actual value.
  - Note: the SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.
  - `select max(a) as max_num from ${src_table}`; the table name must be filled in as `${src_table}`.
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Check method:
- Check operators: =, >, >=, <, <=, !=
- Threshold: the value used for comparison in the formula
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu
## Multi-Table Check: Accuracy Check
### Check Introduction
The accuracy check compares the differences of data records for selected fields between two tables; an example is as follows:
- Table test1
| c1 | c2 |
| :---: | :---: |
| a | 1 |
| b | 2 |
- Table test2
| c21 | c22 |
| :---: | :---: |
| a | 1 |
| b | 3 |
If you compare the data in c1 and c21, tables test1 and test2 are exactly the same. If you compare c2 and c22, the data in table test1 and table test2 are inconsistent.
### UI Guide
![dataquality_multi_table_accuracy_check](/img/tasks/demo/multi_table_accuracy_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: drop-down to select the table where the data to be verified is located
- Source filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Target data type: select MySQL, PostgreSQL, etc.
- Target data source: the corresponding data source under the target data type
- Target data table: drop-down to select the table where the data to be verified is located
- Target filter conditions: as the title says; also used when counting the total number of rows in the table; optional
- Check columns:
  - Fill in the source data column, operator, and target data column respectively
- Check method: select the desired check method
- Operators: =, >, >=, <, <=, !=
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
- Expected value type: select the desired type from the drop-down menu; only SrcTableTotalRows, TargetTableTotalRows, and fixed value are suitable here
## Two-Table Check: Value Comparison
### Check Introduction
Two-table value comparison allows users to define different statistical SQL for two tables and compare the resulting values. For example, calculate the total amount sum1 of a column in source table A and the total amount sum2 of a column in the target table, then compare sum1 with sum2 to determine the check result.
### UI Guide
![dataquality_multi_table_comparison_check](/img/tasks/demo/multi_table_comparison_check.png)
- Source data type: select MySQL, PostgreSQL, etc.
- Source data source: the corresponding data source under the source data type
- Source data table: the table where the data to be verified is located
- Actual value name: the alias in the actual value calculation SQL, such as max_age1
- Actual value calculation SQL: the SQL used to output the actual value.
  - Note: the SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.
  - `select max(age) as max_age1 from ${src_table}`; the table name must be filled in as `${src_table}`.
- Target data type: select MySQL, PostgreSQL, etc.
- Target data source: the corresponding data source under the target data type
- Target data table: the table where the data to be verified is located
- Expected value name: the alias in the expected value calculation SQL, such as max_age2
- Expected value calculation SQL: the SQL used to output the expected value.
  - Note: the SQL must be statistical SQL, such as counting the number of rows or calculating a maximum or minimum value.
  - `select max(age) as max_age2 from ${target_table}`; the table name must be filled in as `${target_table}`.
- Check method: select the desired check method
- Operators: =, >, >=, <, <=, !=
- Failure strategy
  - Alert: the data quality task fails, the DolphinScheduler task result is success, and an alert is sent
  - Block: the data quality task fails, the DolphinScheduler task result is failure, and an alert is sent
## Task Result View
![dataquality_result](/img/tasks/demo/result.png)
## Rule View
### Rule List
![dataquality_rule_list](/img/tasks/demo/rule_list.png)
### Rule Details
![dataquality_rule_detail](/img/tasks/demo/rule_detail.png)

View File

@ -71,7 +71,7 @@ sed -i 's/Defaults requirett/#Defaults requirett/g' /etc/sudoers
datasource.properties: database connection information
zookeeper.properties: information for connecting to zk
common.properties: configuration about resource storage (if hadoop is set up, check whether the core-site.xml and hdfs-site.xml configuration files exist)
dolphinscheduler_env.sh: environment variables
env/dolphinscheduler_env.sh: environment variables
````
- Modify the `dolphinscheduler_env.sh` environment variables in the `conf/env` directory according to the machine configuration (the following example assumes all the software used is installed under `/opt/soft`)

View File

@ -6,7 +6,7 @@
## Deployment Steps
Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the required steps are roughly the same as pseudo-cluster deployment. The difference is that pseudo-cluster deployment targets a single machine, while cluster deployment (Cluster) targets multiple machines; the "modify related configuration" steps also differ greatly between the two.
Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the required steps are roughly the same as [pseudo-cluster deployment](pseudo-cluster.md). The difference is that [pseudo-cluster deployment](pseudo-cluster.md) targets a single machine, while cluster deployment (Cluster) targets multiple machines; the "modify related configuration" steps also differ greatly between the two.
### Prerequisites && Preparing the DolphinScheduler Startup Environment
@ -14,7 +14,7 @@
### Modify Related Configuration
This step differs greatly from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the resources required for installation to each machine via `scp`; therefore we only need to modify the configuration on the machine that runs the `install.sh` script. The configuration file is under the path `conf/config/install_config.conf`; here we only need to modify the **INSTALL MACHINE** section and keep **DolphinScheduler ENV, Database, Registry Server** consistent with the pseudo-cluster deployment. The parameters that must be modified are described below:
This step differs greatly from [pseudo-cluster deployment](pseudo-cluster.md), because the deployment script transfers the resources required for installation to each machine via `scp`; therefore we only need to modify the configuration on the machine that runs the `install.sh` script. The configuration file is under the path `conf/config/install_config.conf`; here we only need to modify the **INSTALL MACHINE** section and keep **DolphinScheduler ENV, Database, Registry Server** consistent with [pseudo-cluster deployment](pseudo-cluster.md). The parameters that must be modified are described below:
```shell
# ---------------------------------------------------------

View File

@ -87,58 +87,52 @@ sh script/create-dolphinscheduler.sh
## Modify Related Configuration
After completing the preparation of the basic environment, you need to modify the configuration files according to your machine environment. The configuration files are both in directory `bin/env`, named `install_env.sh` and `dolphinscheduler_env.sh`.
### Modify the `install_env.sh` File
File `install_env.sh` describes which machines DolphinScheduler will be installed on and which services will be installed on each machine. You can find this file at `bin/env/install_env.sh`; the configuration details are as follows.
After completing the preparation of the basic environment and before running the deployment command, you still need to modify the configuration file according to your environment. The configuration file is under the path `conf/config/install_config.conf`; a typical deployment only needs to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server** sections. The parameters that must be modified are described below:
```shell
# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# Due to the master, worker, and API server being deployed on a single node, the IP of the server is the machine IP or localhost
# Because master, worker, and API server are deployed on a single node, the server IPs are the machine IP or localhost
ips="localhost"
masters="localhost"
workers="localhost:default"
alertServer="localhost"
apiServers="localhost"
# DolphinScheduler installation path, it will auto-create if not exists
# DolphinScheduler installation path; it will be created if it does not exist
installPath="~/dolphinscheduler"
# Deploy user, use the user you create in section **Configure machine SSH password-free login**
# Deploy user; fill in the user created in the **Configure machine SSH password-free login** section
deployUser="dolphinscheduler"
```
### Modify the `dolphinscheduler_env.sh` File
# ---------------------------------------------------------
# DolphinScheduler ENV
# ---------------------------------------------------------
# The path of JAVA_HOME, which is where JAVA_HOME of the JDK installed in **Prerequisites** is located
javaHome="/your/java/home/here"
File `dolphinscheduler_env.sh` describes DolphinScheduler's database configuration, the external dependency paths or library files of some task types, and the registry center; `JAVA_HOME` and
`SPARK_HOME` are both defined here, and its path is `bin/env/dolphinscheduler_env.sh`. If you do not use certain task types, you can ignore their external dependencies,
but you must change `JAVA_HOME`, the registry center, and the database related configuration according to your environment.
# ---------------------------------------------------------
# Database
# ---------------------------------------------------------
# Database type, username, password, IP, port, and metadata db; dbtype currently supports mysql and postgresql
dbtype="mysql"
dbhost="localhost:3306"
# Need to modify if you are not using dolphinscheduler/dolphinscheduler as your username and password
username="dolphinscheduler"
password="dolphinscheduler"
dbname="dolphinscheduler"
```sh
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/custom/path}
# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-postgresql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_DRIVER_CLASS_NAME=org.postgresql.Driver
export SPRING_DATASOURCE_URL="jdbc:postgresql://127.0.0.1:5432/dolphinscheduler"
export SPRING_DATASOURCE_USERNAME="username"
export SPRING_DATASOURCE_PASSWORD="password"
# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-localhost:2181}
# ---------------------------------------------------------
# Registry Server
# ---------------------------------------------------------
# Registry center address: the address of the ZooKeeper service
registryServers="localhost:2181"
```
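If you use MySQL instead of the default PostgreSQL, the same variables change accordingly. A sketch, assuming mysql-connector-java 8.x and a local MySQL instance:

```shell
export DATABASE=${DATABASE:-mysql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_DRIVER_CLASS_NAME=com.mysql.cj.jdbc.Driver
export SPRING_DATASOURCE_URL="jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8"
export SPRING_DATASOURCE_USERNAME="dolphinscheduler"
export SPRING_DATASOURCE_PASSWORD="dolphinscheduler"
```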
## Initialize the Database
DolphinScheduler metadata is stored in a relational database; PostgreSQL and MySQL are currently supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it into DolphinScheduler's lib directory (`tools/libs/`). The following uses MySQL as an example to show how to initialize the database
For MySQL 5.6 / 5.7:
DolphinScheduler metadata is stored in a relational database; PostgreSQL and MySQL are currently supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it into DolphinScheduler's lib directory. The following uses MySQL as an example to show how to initialize the database
```shell
mysql -uroot -p
@ -152,29 +146,10 @@ mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTI
mysql> flush privileges;
```
For MySQL 8:
```shell
mysql -uroot -p
mysql> CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
# Change {user} and {password} to the username and password you want
mysql> CREATE USER '{user}'@'%' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%';
mysql> CREATE USER '{user}'@'localhost' IDENTIFIED BY '{password}';
mysql> GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost';
mysql> FLUSH PRIVILEGES;
```
Change the username and password in `tools/conf/application.yaml` to the username {user} and password {password} you set in the previous step,
then modify `tools/bin/dolphinscheduler_env.sh` to make MySQL the default database type: `export DATABASE=${DATABASE:-mysql}`.
After completing the steps above, you have created a new database for DolphinScheduler, and you can now initialize it with a quick shell script
```shell
sh tools/bin/create-schema.sh
sh script/create-dolphinscheduler.sh
```
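Putting the MySQL-specific steps together, a condensed sketch of the whole sequence (jar name and paths follow the notes above; adjust them to your download):

```shell
# 1. Put the JDBC driver where the tools expect it
cp mysql-connector-java-8.0.16.jar tools/libs/
# 2. Override the default database type for this session
#    (or edit tools/bin/dolphinscheduler_env.sh as described above)
export DATABASE=mysql
# 3. Initialize the schema
sh tools/bin/create-schema.sh
```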
## Start DolphinScheduler
@ -182,7 +157,7 @@ sh tools/bin/create-schema.sh
Run the following command as the **deployment user** created above to complete the deployment; the post-deployment run logs are stored in the logs folder
```shell
sh ./bin/install.sh
sh install.sh
```
> **_Note:_** On the first deployment, five messages like `sh: bin/dolphinscheduler-daemon.sh: No such file or directory` may appear; this is non-critical information and can be ignored
@ -217,13 +192,7 @@ sh ./bin/dolphinscheduler-daemon.sh start alert-server
sh ./bin/dolphinscheduler-daemon.sh stop alert-server
```
> **_Note 1:_** Each service has its own `dolphinscheduler_env.sh` file at `<server-name>/conf/dolphinscheduler_env.sh`, which is convenient for
> microservice needs: you can start each service with different environment variables by configuring `<server-name>/conf/dolphinscheduler_env.sh` for the corresponding service and starting it with the `<server-name>/bin/start.sh`
> command. However, if you start a server with the command `/bin/dolphinscheduler-daemon.sh start <server-name>`, it will overwrite `<server-name>/conf/dolphinscheduler_env.sh` with the file `bin/env/dolphinscheduler_env.sh`
> and then start the service; this is intended to reduce the cost of modifying configuration for users.
> **_Note 2:_** For the purpose of each service, see the "System Architecture Design" section. The Python gateway service starts together with api-server by default; if you do not want to start the Python gateway service,
> disable it by setting `python-gateway.enabled: false` in the api-server configuration file `api-server/conf/application.yaml`.
> **_Note:_** For the purpose of each service, see the "System Architecture Design" section
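For example, to start a single service with its own environment rather than the global one (a sketch; `api-server` stands in for any service name):

```shell
# Edit the service-local environment first; note that starting through
# bin/dolphinscheduler-daemon.sh would overwrite this file with bin/env/dolphinscheduler_env.sh
vi api-server/conf/dolphinscheduler_env.sh
sh api-server/bin/start.sh
```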
[jdk]: https://www.oracle.com/technetwork/java/javase/downloads/index.html
[zookeeper]: https://zookeeper.apache.org/releases.html

View File

@ -40,13 +40,13 @@
```
$ tar -zxvf apache-dolphinscheduler-<version>-src.tar.gz
$ cd apache-dolphinscheduler-<version>-src/deploy/docker
$ cd apache-dolphinscheduler-<version>-src/docker/docker-swarm
$ docker pull dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:<version>
$ docker tag apache/dolphinscheduler:<version> apache/dolphinscheduler:latest
$ docker-compose up -d
```
> In PowerShell, use `cd apache-dolphinscheduler-<version>-src\deploy\docker`
> In PowerShell, use `cd apache-dolphinscheduler-<version>-src\docker\docker-swarm`
The **PostgreSQL** (user `root`, password `root`, database `dolphinscheduler`) and **ZooKeeper** services will start by default
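Once `docker-compose up -d` returns, you can confirm the stack is healthy before logging in (a sketch; the exact service names depend on the compose file and are assumptions here):

```shell
docker-compose ps                             # all services should be Up
docker-compose logs -f dolphinscheduler-api   # hypothetical service name
```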

View File

@ -40,7 +40,7 @@ The DataX task type is used to execute DataX programs. For DataX nodes, the worker
### Configure the DataX Environment in DolphinScheduler
If the DataX task type is used in a production environment, the required environment must be configured first. The configuration file is `bin/env/dolphinscheduler_env.sh`.
If the DataX task type is used in a production environment, the required environment must be configured first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![datax_task01](/img/tasks/demo/datax_task01.png)
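The relevant lines in that file look roughly like the following (the install path is an example only):

```shell
export DATAX_HOME=${DATAX_HOME:-/opt/soft/datax}
export PATH=$DATAX_HOME/bin:$PATH
```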

View File

@ -46,7 +46,7 @@ The Flink task type is used to execute Flink programs. For Flink nodes, the worker
#### Configure the Flink Environment in DolphinScheduler
If the Flink task type is used in a production environment, the required environment must be configured first. The configuration file is `bin/env/dolphinscheduler_env.sh`.
If the Flink task type is used in a production environment, the required environment must be configured first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![flink-configure](/img/tasks/demo/flink_task01.png)
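A sketch of the corresponding entry (the install path is an example only):

```shell
export FLINK_HOME=${FLINK_HOME:-/opt/soft/flink}
export PATH=$FLINK_HOME/bin:$PATH
```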

View File

@ -54,7 +54,7 @@ The MapReduce (MR) task type is used to execute MapReduce programs. For MapReduce
#### Configure the MapReduce Environment in DolphinScheduler
If the MapReduce task type is used in a production environment, the required environment must be configured first. The configuration file is `bin/env/dolphinscheduler_env.sh`.
If the MapReduce task type is used in a production environment, the required environment must be configured first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![mr_configure](/img/tasks/demo/mr_task01.png)
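A sketch of the corresponding entries (example paths; MapReduce also needs the Hadoop configuration directory):

```shell
export HADOOP_HOME=${HADOOP_HOME:-/opt/soft/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/opt/soft/hadoop/etc/hadoop}
export PATH=$HADOOP_HOME/bin:$PATH
```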

View File

@ -46,7 +46,7 @@ The Spark task type is used to execute Spark programs. For Spark nodes, the worker
#### Configure the Spark Environment in DolphinScheduler
If the Spark task type is used in a production environment, the required environment must be configured first. The configuration file is `bin/env/dolphinscheduler_env.sh`.
If the Spark task type is used in a production environment, the required environment must be configured first. The configuration file is `/dolphinscheduler/conf/env/dolphinscheduler_env.sh`.
![spark_configure](/img/tasks/demo/spark_task01.png)
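A sketch of the corresponding entry (example path; `SPARK_HOME2` follows the naming this file uses for a Spark 2.x install):

```shell
export SPARK_HOME2=${SPARK_HOME2:-/opt/soft/spark2}
export PATH=$SPARK_HOME2/bin:$PATH
```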

View File

@ -13,17 +13,24 @@
- All of the following upgrade operations must be performed in the new version's directory
## 4. Database Upgrade
- Change the username and password in `./tools/conf/application.yaml` to the database username and password you set
- Modify the following properties in conf/datasource.properties
- If you choose MySQL, modify the following configuration in `./tools/bin/dolphinscheduler_env.sh`; you also need to manually add the [mysql-connector-java driver jar](https://downloads.MySQL.com/archives/c-j/) to the lib directory (`./tools/lib`). The jar downloaded here is mysql-connector-java-8.0.16.jar
- If you choose MySQL, comment out the PostgreSQL-related configuration (and vice versa); you also need to manually add the [mysql-connector-java driver jar](https://downloads.MySQL.com/archives/c-j/) to the lib directory. The jar downloaded here is mysql-connector-java-8.0.16.jar; then configure the database connection information correctly
```shell
export DATABASE=${DATABASE:-mysql}
```
```properties
# postgre
#spring.datasource.driver-class-name=org.postgresql.Driver
#spring.datasource.url=jdbc:postgresql://localhost:5432/dolphinscheduler
# mysql
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://xxx:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true (change the IP; for a local machine, localhost is fine)
spring.datasource.username=xxx (change to the {user} value above)
spring.datasource.password=xxx (change to the {password} value above)
```
- Run the database upgrade script
`sh ./tools/bin/upgrade-schema.sh`
`sh ./script/upgrade-dolphinscheduler.sh`
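A condensed sketch of the MySQL upgrade path described above (jar name and directories as assumed earlier):

```shell
# Add the JDBC driver, switch the default database type, then upgrade the schema
cp mysql-connector-java-8.0.16.jar ./tools/lib/
export DATABASE=mysql
sh ./tools/bin/upgrade-schema.sh
```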
## 5. Service Upgrade

Binary file not shown.

Before

Width:  |  Height:  |  Size: 33 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 36 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 155 KiB

After

Width:  |  Height:  |  Size: 34 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 234 KiB

After

Width:  |  Height:  |  Size: 43 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 141 KiB

After

Width:  |  Height:  |  Size: 77 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 18 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 62 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 72 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 54 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 20 KiB

View File

@ -19,14 +19,6 @@
package org.apache.dolphinscheduler.alert.api;
/**
* alert channel for sending alerts
*/
public interface AlertChannel {
/**
* process and send alert
* @param info alert info
* @return process alarm result
*/
AlertResult process(AlertInfo info);
}

View File

@ -23,21 +23,9 @@ import org.apache.dolphinscheduler.spi.params.base.PluginParams;
import java.util.List;
/**
* alert channel factory
*/
public interface AlertChannelFactory {
/**
* Returns the name of the alert channel
* @return the name of the alert channel
*/
String name();
/**
* Create an alert channel
*
* @return alert channel
*/
AlertChannel create();
/**

View File

@ -19,11 +19,6 @@
package org.apache.dolphinscheduler.alert.api;
import java.util.Objects;
/**
* alert data
*/
public class AlertData {
private int id;
private String title;
@ -90,7 +85,6 @@ public class AlertData {
this.warnType = warnType;
}
@Override
public boolean equals(final Object o) {
if (o == this) {
return true;
@ -99,7 +93,7 @@ public class AlertData {
return false;
}
final AlertData other = (AlertData) o;
if (!other.canEqual(this)) {
if (!other.canEqual((Object) this)) {
return false;
}
if (this.getId() != other.getId()) {
@ -108,41 +102,42 @@ public class AlertData {
if (this.getWarnType() != other.getWarnType()) {
return false;
}
final Object thisTitle = this.getTitle();
final Object otherTitle = other.getTitle();
if (!Objects.equals(thisTitle, otherTitle)) {
final Object this$title = this.getTitle();
final Object other$title = other.getTitle();
if (this$title == null ? other$title != null : !this$title.equals(other$title)) {
return false;
}
final Object thisContent = this.getContent();
final Object otherContent = other.getContent();
if (!Objects.equals(thisContent, otherContent)) {
final Object this$content = this.getContent();
final Object other$content = other.getContent();
if (this$content == null ? other$content != null : !this$content.equals(other$content)) {
return false;
}
final Object thisLog = this.getLog();
final Object otherLog = other.getLog();
return Objects.equals(thisLog, otherLog);
final Object this$log = this.getLog();
final Object other$log = other.getLog();
if (this$log == null ? other$log != null : !this$log.equals(other$log)) {
return false;
}
return true;
}
protected boolean canEqual(final Object other) {
return other instanceof AlertData;
}
@Override
public int hashCode() {
final int prime = 59;
final int PRIME = 59;
int result = 1;
result = result * prime + this.getId();
result = result * prime + this.getWarnType();
final Object title = this.getTitle();
result = result * prime + (title == null ? 43 : title.hashCode());
final Object content = this.getContent();
result = result * prime + (content == null ? 43 : content.hashCode());
final Object log = this.getLog();
result = result * prime + (log == null ? 43 : log.hashCode());
result = result * PRIME + this.getId();
result = result * PRIME + this.getWarnType();
final Object $title = this.getTitle();
result = result * PRIME + ($title == null ? 43 : $title.hashCode());
final Object $content = this.getContent();
result = result * PRIME + ($content == null ? 43 : $content.hashCode());
final Object $log = this.getLog();
result = result * PRIME + ($log == null ? 43 : $log.hashCode());
return result;
}
@Override
public String toString() {
return "AlertData(id=" + this.getId() + ", title=" + this.getTitle() + ", content=" + this.getContent() + ", log=" + this.getLog() + ", warnType=" + this.getWarnType() + ")";
}
@ -186,7 +181,6 @@ public class AlertData {
return new AlertData(id, title, content, log, warnType);
}
@Override
public String toString() {
return "AlertData.AlertDataBuilder(id=" + this.id + ", title=" + this.title + ", content=" + this.content + ", log=" + this.log + ", warnType=" + this.warnType + ")";
}

View File

@ -20,11 +20,7 @@
package org.apache.dolphinscheduler.alert.api;
import java.util.Map;
import java.util.Objects;
/**
* The alarm information includes the parameters of the alert channel and the alarm data
*/
public class AlertInfo {
private Map<String, String> alertParams;
private AlertData alertData;
@ -59,7 +55,6 @@ public class AlertInfo {
return this;
}
@Override
public boolean equals(final Object o) {
if (o == this) {
return true;
@ -71,32 +66,33 @@ public class AlertInfo {
if (!other.canEqual((Object) this)) {
return false;
}
final Object thisAlertParams = this.getAlertParams();
final Object otherAlertParams = other.getAlertParams();
if (!Objects.equals(thisAlertParams, otherAlertParams)) {
final Object this$alertParams = this.getAlertParams();
final Object other$alertParams = other.getAlertParams();
if (this$alertParams == null ? other$alertParams != null : !this$alertParams.equals(other$alertParams)) {
return false;
}
final Object thisAlertData = this.getAlertData();
final Object otherAlertData = other.getAlertData();
return Objects.equals(thisAlertData, otherAlertData);
final Object this$alertData = this.getAlertData();
final Object other$alertData = other.getAlertData();
if (this$alertData == null ? other$alertData != null : !this$alertData.equals(other$alertData)) {
return false;
}
return true;
}
protected boolean canEqual(final Object other) {
return other instanceof AlertInfo;
}
@Override
public int hashCode() {
final int prime = 59;
final int PRIME = 59;
int result = 1;
final Object alertParams = this.getAlertParams();
result = result * prime + (alertParams == null ? 43 : alertParams.hashCode());
final Object alertData = this.getAlertData();
result = result * prime + (alertData == null ? 43 : alertData.hashCode());
final Object $alertParams = this.getAlertParams();
result = result * PRIME + ($alertParams == null ? 43 : $alertParams.hashCode());
final Object $alertData = this.getAlertData();
result = result * PRIME + ($alertData == null ? 43 : $alertData.hashCode());
return result;
}
@Override
public String toString() {
return "AlertInfo(alertParams=" + this.getAlertParams() + ", alertData=" + this.getAlertData() + ")";
}
@ -122,7 +118,6 @@ public class AlertInfo {
return new AlertInfo(alertParams, alertData);
}
@Override
public String toString() {
return "AlertInfo.AlertInfoBuilder(alertParams=" + this.alertParams + ", alertData=" + this.alertData + ")";
}

View File

@ -19,11 +19,6 @@
package org.apache.dolphinscheduler.alert.api;
import java.util.Objects;
/**
* alert result
*/
public class AlertResult {
private String status;
private String message;
@ -58,7 +53,6 @@ public class AlertResult {
return this;
}
@Override
public boolean equals(final Object o) {
if (o == this) {
return true;
@ -67,35 +61,36 @@ public class AlertResult {
return false;
}
final AlertResult other = (AlertResult) o;
if (!other.canEqual(this)) {
if (!other.canEqual((Object) this)) {
return false;
}
final Object thisStatus = this.getStatus();
final Object otherStatus = other.getStatus();
if (!Objects.equals(thisStatus, otherStatus)) {
final Object this$status = this.getStatus();
final Object other$status = other.getStatus();
if (this$status == null ? other$status != null : !this$status.equals(other$status)) {
return false;
}
final Object thisMessage = this.getMessage();
final Object otherMessage = other.getMessage();
return Objects.equals(thisMessage, otherMessage);
final Object this$message = this.getMessage();
final Object other$message = other.getMessage();
if (this$message == null ? other$message != null : !this$message.equals(other$message)) {
return false;
}
return true;
}
protected boolean canEqual(final Object other) {
return other instanceof AlertResult;
}
@Override
public int hashCode() {
final int prime = 59;
final int PRIME = 59;
int result = 1;
final Object s = this.getStatus();
result = result * prime + (s == null ? 43 : s.hashCode());
final Object message = this.getMessage();
result = result * prime + (message == null ? 43 : message.hashCode());
final Object $status = this.getStatus();
result = result * PRIME + ($status == null ? 43 : $status.hashCode());
final Object $message = this.getMessage();
result = result * PRIME + ($message == null ? 43 : $message.hashCode());
return result;
}
@Override
public String toString() {
return "AlertResult(status=" + this.getStatus() + ", message=" + this.getMessage() + ")";
}
@ -121,7 +116,6 @@ public class AlertResult {
return new AlertResult(status, message);
}
@Override
public String toString() {
return "AlertResult.AlertResultBuilder(status=" + this.status + ", message=" + this.message + ")";
}

View File

@ -41,7 +41,7 @@
</fileSet>
<fileSet>
<directory>${basedir}/../../script/env</directory>
<outputDirectory>conf</outputDirectory>
<outputDirectory>bin</outputDirectory>
<includes>
<include>dolphinscheduler_env.sh</include>
</includes>

View File

@ -19,7 +19,7 @@
BIN_DIR=$(dirname $0)
DOLPHINSCHEDULER_HOME=${DOLPHINSCHEDULER_HOME:-$(cd $BIN_DIR/..; pwd)}
source "$DOLPHINSCHEDULER_HOME/conf/dolphinscheduler_env.sh"
source "$BIN_DIR/dolphinscheduler_env.sh"
JAVA_OPTS=${JAVA_OPTS:-"-server -Duser.timezone=${SPRING_JACKSON_TIME_ZONE} -Xms1g -Xmx1g -Xmn512m -XX:+PrintGCDetails -Xloggc:gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dump.hprof"}

View File

@ -41,7 +41,7 @@
</fileSet>
<fileSet>
<directory>${basedir}/../script/env</directory>
<outputDirectory>conf</outputDirectory>
<outputDirectory>bin</outputDirectory>
<includes>
<include>dolphinscheduler_env.sh</include>
</includes>

View File

@ -19,7 +19,7 @@
BIN_DIR=$(dirname $0)
DOLPHINSCHEDULER_HOME=${DOLPHINSCHEDULER_HOME:-$(cd $BIN_DIR/..; pwd)}
source "$DOLPHINSCHEDULER_HOME/conf/dolphinscheduler_env.sh"
source "$BIN_DIR/dolphinscheduler_env.sh"
JAVA_OPTS=${JAVA_OPTS:-"-server -Duser.timezone=${SPRING_JACKSON_TIME_ZONE} -Xms1g -Xmx1g -Xmn512m -XX:+PrintGCDetails -Xloggc:gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dump.hprof"}

View File

@ -17,31 +17,18 @@
package org.apache.dolphinscheduler.api;
import org.apache.dolphinscheduler.service.task.TaskPluginManager;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.event.EventListener;
@ServletComponentScan
@SpringBootApplication
@ComponentScan("org.apache.dolphinscheduler")
public class ApiApplicationServer {
@Autowired
private TaskPluginManager taskPluginManager;
public static void main(String[] args) {
SpringApplication.run(ApiApplicationServer.class);
}
@EventListener
public void run(ApplicationReadyEvent readyEvent) {
// install task plugin
taskPluginManager.installPlugin();
}
}

View File

@ -19,23 +19,18 @@ package org.apache.dolphinscheduler.api.controller;
import static org.apache.dolphinscheduler.api.enums.Status.CREATE_K8S_NAMESPACE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.DELETE_K8S_NAMESPACE_BY_ID_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_AUTHORIZED_NAMESPACE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_CAN_USE_K8S_CLUSTER_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_K8S_NAMESPACE_LIST_PAGING_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_UNAUTHORIZED_NAMESPACE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.UPDATE_K8S_NAMESPACE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.VERIFY_K8S_NAMESPACE_ERROR;
import org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation;
import org.apache.dolphinscheduler.api.exceptions.ApiException;
import org.apache.dolphinscheduler.api.service.K8sNamespaceService;
import org.apache.dolphinscheduler.api.service.K8sNameSpaceService;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.dao.entity.K8sNamespace;
import org.apache.dolphinscheduler.dao.entity.User;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
@ -65,7 +60,7 @@ import springfox.documentation.annotations.ApiIgnore;
public class K8sNamespaceController extends BaseController {
@Autowired
private K8sNamespaceService k8sNamespaceService;
private K8sNameSpaceService k8sNameSpaceService;
/**
* query namespace list paging
@ -97,7 +92,7 @@ public class K8sNamespaceController extends BaseController {
return result;
}
searchVal = ParameterUtils.handleEscapes(searchVal);
result = k8sNamespaceService.queryListPaging(loginUser, searchVal, pageNo, pageSize);
result = k8sNameSpaceService.queryListPaging(loginUser, searchVal, pageNo, pageSize);
return result;
}
@ -107,6 +102,8 @@ public class K8sNamespaceController extends BaseController {
*
* @param loginUser
* @param namespace k8s namespace
* @param owner owner
* @param tag which type of job can use this namespace; may be empty, meaning all types are available
* @param k8s k8s name
* @param limitsCpu max cpu
* @param limitsMemory max memory
@ -115,6 +112,8 @@ public class K8sNamespaceController extends BaseController {
@ApiOperation(value = "createK8sNamespace", notes = "CREATE_NAMESPACE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "namespace", value = "NAMESPACE", required = true, dataType = "String"),
@ApiImplicitParam(name = "owner", value = "OWNER", required = false, dataType = "String"),
@ApiImplicitParam(name = "tag", value = "TAG", required = false, dataType = "String"),
@ApiImplicitParam(name = "k8s", value = "K8S", required = true, dataType = "String"),
@ApiImplicitParam(name = "limits_cpu", value = "LIMITS_CPU", required = false, dataType = "Double"),
@ApiImplicitParam(name = "limits_memory", value = "LIMITS_MEMORY", required = false, dataType = "Integer")
@ -126,10 +125,12 @@ public class K8sNamespaceController extends BaseController {
public Result createNamespace(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam(value = "namespace") String namespace,
@RequestParam(value = "k8s") String k8s,
@RequestParam(value = "owner", required = false) String owner,
@RequestParam(value = "tag", required = false) String tag,
@RequestParam(value = "limitsCpu", required = false) Double limitsCpu,
@RequestParam(value = "limitsMemory", required = false) Integer limitsMemory
) {
Map<String, Object> result = k8sNamespaceService.createK8sNamespace(loginUser, namespace, k8s, limitsCpu, limitsMemory);
Map<String, Object> result = k8sNameSpaceService.createK8sNamespace(loginUser, namespace, k8s, owner, tag, limitsCpu, limitsMemory);
return returnDataList(result);
}
@ -137,7 +138,8 @@ public class K8sNamespaceController extends BaseController {
* update a namespace; the namespace and k8s fields cannot be updated because they may already exist on k8s; delete and create a new one instead
*
* @param loginUser
* @param userName owner
* @param owner owner
* @param tag which type of job can use this namespace, such as flink (meaning only Flink jobs can use it); may be empty, meaning all types are available
* @param limitsCpu max cpu
* @param limitsMemory max memory
* @return
@ -145,7 +147,8 @@ public class K8sNamespaceController extends BaseController {
@ApiOperation(value = "updateK8sNamespace", notes = "UPDATE_NAMESPACE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "id", value = "K8S_NAMESPACE_ID", required = true, dataType = "Int", example = "100"),
@ApiImplicitParam(name = "userName", value = "OWNER", required = false, dataType = "String"),
@ApiImplicitParam(name = "owner", value = "OWNER", required = false, dataType = "String"),
@ApiImplicitParam(name = "tag", value = "TAG", required = false, dataType = "String"),
@ApiImplicitParam(name = "limitsCpu", value = "LIMITS_CPU", required = false, dataType = "Double"),
@ApiImplicitParam(name = "limitsMemory", value = "LIMITS_MEMORY", required = false, dataType = "Integer")})
@PutMapping(value = "/{id}")
@ -154,11 +157,11 @@ public class K8sNamespaceController extends BaseController {
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result updateNamespace(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@PathVariable(value = "id") int id,
@RequestParam(value = "userName", required = false) String userName,
@RequestParam(value = "owner", required = false) String owner,
@RequestParam(value = "tag", required = false) String tag,
@RequestParam(value = "limitsCpu", required = false) Double limitsCpu,
@RequestParam(value = "limitsMemory", required = false) Integer limitsMemory) {
Map<String, Object> result = k8sNamespaceService.updateK8sNamespace(loginUser, id, userName, limitsCpu, limitsMemory);
Map<String, Object> result = k8sNameSpaceService.updateK8sNamespace(loginUser, id, owner, tag, limitsCpu, limitsMemory);
return returnDataList(result);
}
@ -184,7 +187,7 @@ public class K8sNamespaceController extends BaseController {
@RequestParam(value = "k8s") String k8s
) {
return k8sNamespaceService.verifyNamespaceK8s(namespace, k8s);
return k8sNameSpaceService.verifyNamespaceK8s(namespace, k8s);
}
@ -205,65 +208,7 @@ public class K8sNamespaceController extends BaseController {
@AccessLogAnnotation
public Result delNamespaceById(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam(value = "id") int id) {
Map<String, Object> result = k8sNamespaceService.deleteNamespaceById(loginUser, id);
Map<String, Object> result = k8sNameSpaceService.deleteNamespaceById(loginUser, id);
return returnDataList(result);
}
/**
* query unauthorized namespace
*
* @param loginUser login user
* @param userId user id
* @return the namespaces which user have not permission to see
*/
@ApiOperation(value = "queryUnauthorizedNamespace", notes = "QUERY_UNAUTHORIZED_NAMESPACE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "userId", value = "USER_ID", dataType = "Int", example = "100")
})
@GetMapping(value = "/unauth-namespace")
@ResponseStatus(HttpStatus.OK)
@ApiException(QUERY_UNAUTHORIZED_NAMESPACE_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryUnauthorizedNamespace(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam("userId") Integer userId) {
Map<String, Object> result = k8sNamespaceService.queryUnauthorizedNamespace(loginUser, userId);
return returnDataList(result);
}
/**
* query unauthorized namespace
*
* @param loginUser login user
* @param userId user id
* @return namespaces which the user have permission to see
*/
@ApiOperation(value = "queryAuthorizedNamespace", notes = "QUERY_AUTHORIZED_NAMESPACE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "userId", value = "USER_ID", dataType = "Int", example = "100")
})
@GetMapping(value = "/authed-namespace")
@ResponseStatus(HttpStatus.OK)
@ApiException(QUERY_AUTHORIZED_NAMESPACE_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryAuthorizedNamespace(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam("userId") Integer userId) {
Map<String, Object> result = k8sNamespaceService.queryAuthorizedNamespace(loginUser, userId);
return returnDataList(result);
}
/**
* query namespace available
*
* @param loginUser login user
* @return namespace list
*/
@ApiOperation(value = "queryAvailableNamespaceList", notes = "QUERY_AVAILABLE_NAMESPACE_LIST_NOTES")
@GetMapping(value = "/available-list")
@ResponseStatus(HttpStatus.OK)
@ApiException(QUERY_CAN_USE_K8S_CLUSTER_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryAvailableNamespaceList(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser) {
List<K8sNamespace> result = k8sNamespaceService.queryNamespaceAvailable(loginUser);
return success(result);
}
}

View File

@ -17,6 +17,24 @@
package org.apache.dolphinscheduler.api.controller;
import static org.apache.dolphinscheduler.api.enums.Status.BATCH_COPY_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.BATCH_DELETE_PROCESS_DEFINE_BY_CODES_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.BATCH_MOVE_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.CREATE_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.DELETE_PROCESS_DEFINE_BY_CODE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.DELETE_PROCESS_DEFINITION_VERSION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.ENCAPSULATION_TREEVIEW_STRUCTURE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GET_TASKS_LIST_BY_PROCESS_DEFINITION_ID_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.IMPORT_PROCESS_DEFINE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_DETAIL_OF_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_PROCESS_DEFINITION_LIST;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_PROCESS_DEFINITION_LIST_PAGING_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_PROCESS_DEFINITION_VERSIONS_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.RELEASE_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.SWITCH_PROCESS_DEFINITION_VERSION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.UPDATE_PROCESS_DEFINITION_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.VERIFY_PROCESS_DEFINITION_NAME_UNIQUE_ERROR;
import org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.exceptions.ApiException;
@ -31,7 +49,6 @@ import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.commons.lang.StringUtils;
import java.text.MessageFormat;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
@ -63,8 +80,6 @@ import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import springfox.documentation.annotations.ApiIgnore;
import static org.apache.dolphinscheduler.api.enums.Status.*;
/**
* process definition controller
*/
@ -551,49 +566,6 @@ public class ProcessDefinitionController extends BaseController {
return returnDataList(result);
}
/**
* get process definition list map by project code
*
* @param loginUser login user
* @param projectCode project code
* @return process definition list data
*/
@ApiOperation(value = "getProcessListByProjectCode", notes = "GET_PROCESS_LIST_BY_PROCESS_CODE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "projectCode", value = "PROJECT_CODE", required = true, type = "Long", example = "100")
})
@GetMapping(value = "/query-process-definition-list")
@ResponseStatus(HttpStatus.OK)
@ApiException(GET_TASKS_LIST_BY_PROCESS_DEFINITION_ID_ERROR)
public Result getProcessListByProjectCodes(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@ApiParam(name = "projectCode", value = "PROJECT_CODE", required = true) @PathVariable long projectCode
) {
Map<String, Object> result = processDefinitionService.queryProcessDefinitionListByProjectCode(projectCode);
return returnDataList(result);
}
/**
* get task definition list by process definition code
*
* @param loginUser login user
* @param projectCode project code
* @return process definition list data
*/
@ApiOperation(value = "getTaskListByProcessDefinitionCode", notes = "GET_TASK_LIST_BY_PROCESS_CODE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "projectCode", value = "PROJECT_CODE", required = true, type = "Long", example = "100"),
@ApiImplicitParam(name = "processDefinitionCode", value = "PROCESS_DEFINITION_CODE", required = true, dataType = "Long", example = "100"),
})
@GetMapping(value = "/query-task-definition-list")
@ResponseStatus(HttpStatus.OK)
@ApiException(GET_TASKS_LIST_BY_PROCESS_DEFINITION_ID_ERROR)
public Result getTaskListByProcessDefinitionCode(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@ApiParam(name = "projectCode", value = "PROJECT_CODE", required = true) @PathVariable long projectCode,
@RequestParam(value = "processDefinitionCode") Long processDefinitionCode) {
Map<String, Object> result = processDefinitionService.queryTaskDefinitionListByProcessDefinitionCode(projectCode, processDefinitionCode);
return returnDataList(result);
}
/**
* delete process definition by code
*
@ -645,17 +617,17 @@ public class ProcessDefinitionController extends BaseController {
try {
Map<String, Object> deleteResult = processDefinitionService.deleteProcessDefinitionByCode(loginUser, projectCode, code);
if (!Status.SUCCESS.equals(deleteResult.get(Constants.STATUS))) {
deleteFailedCodeList.add((String) deleteResult.get(Constants.MSG));
deleteFailedCodeList.add(strProcessDefinitionCode);
logger.error((String) deleteResult.get(Constants.MSG));
}
} catch (Exception e) {
deleteFailedCodeList.add(MessageFormat.format(Status.DELETE_PROCESS_DEFINE_BY_CODES_ERROR.getMsg(), strProcessDefinitionCode));
deleteFailedCodeList.add(strProcessDefinitionCode);
}
}
}
if (!deleteFailedCodeList.isEmpty()) {
putMsg(result, BATCH_DELETE_PROCESS_DEFINE_BY_CODES_ERROR, String.join("\n", deleteFailedCodeList));
putMsg(result, BATCH_DELETE_PROCESS_DEFINE_BY_CODES_ERROR, String.join(",", deleteFailedCodeList));
} else {
putMsg(result, Status.SUCCESS);
}

View File

@ -42,8 +42,10 @@ import org.apache.dolphinscheduler.plugin.task.api.enums.ExecutionStatus;
import org.apache.commons.lang.StringUtils;
import java.io.IOException;
import java.text.MessageFormat;
import java.util.*;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@ -401,16 +403,16 @@ public class ProcessInstanceController extends BaseController {
try {
Map<String, Object> deleteResult = processInstanceService.deleteProcessInstanceById(loginUser, projectCode, processInstanceId);
if (!Status.SUCCESS.equals(deleteResult.get(Constants.STATUS))) {
deleteFailedIdList.add((String) deleteResult.get(Constants.MSG));
deleteFailedIdList.add(strProcessInstanceId);
logger.error((String) deleteResult.get(Constants.MSG));
}
} catch (Exception e) {
deleteFailedIdList.add(MessageFormat.format(Status.PROCESS_INSTANCE_ERROR.getMsg(), strProcessInstanceId));
deleteFailedIdList.add(strProcessInstanceId);
}
}
}
if (!deleteFailedIdList.isEmpty()) {
putMsg(result, Status.BATCH_DELETE_PROCESS_INSTANCE_BY_IDS_ERROR, String.join("\n", deleteFailedIdList));
putMsg(result, Status.BATCH_DELETE_PROCESS_INSTANCE_BY_IDS_ERROR, String.join(",", deleteFailedIdList));
} else {
putMsg(result, Status.SUCCESS);
}

View File

@ -287,7 +287,7 @@ public class ProjectController extends BaseController {
@ApiException(LOGIN_USER_QUERY_PROJECT_LIST_PAGING_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryAllProjectList(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser) {
Map<String, Object> result = projectService.queryAllProjectList(loginUser);
Map<String, Object> result = projectService.queryAllProjectList();
return returnDataList(result);
}
}

View File

@ -22,7 +22,6 @@ import static org.apache.dolphinscheduler.api.enums.Status.CREATE_USER_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.DELETE_USER_BY_ID_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GET_USER_INFO_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GRANT_DATASOURCE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GRANT_K8S_NAMESPACE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GRANT_PROJECT_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GRANT_RESOURCE_ERROR;
import static org.apache.dolphinscheduler.api.enums.Status.GRANT_UDF_FUNCTION_ERROR;
@ -335,31 +334,6 @@ public class UsersController extends BaseController {
}
/**
* grant namespace
*
* @param loginUser login user
* @param userId user id
* @param namespaceIds namespace id array
* @return grant result code
*/
@ApiOperation(value = "grantNamespace", notes = "GRANT_NAMESPACE_NOTES")
@ApiImplicitParams({
@ApiImplicitParam(name = "userId", value = "USER_ID", required = true, dataType = "Int", example = "100"),
@ApiImplicitParam(name = "namespaceIds", value = "NAMESPACE_IDS", required = true, type = "String")
})
@PostMapping(value = "/grant-namespace")
@ResponseStatus(HttpStatus.OK)
@ApiException(GRANT_K8S_NAMESPACE_ERROR)
@AccessLogAnnotation
public Result grantNamespace(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam(value = "userId") int userId,
@RequestParam(value = "namespaceIds") String namespaceIds) {
Map<String, Object> result = usersService.grantNamespaces(loginUser, userId, namespaceIds);
return returnDataList(result);
}
/**
* grant datasource
*

View File

@ -254,7 +254,6 @@ public enum Status {
COUNT_PROCESS_DEFINITION_USER_ERROR(50013, "count process definition user error", "查询各用户流程定义数错误"),
START_PROCESS_INSTANCE_ERROR(50014, "start process instance error", "运行工作流实例错误"),
BATCH_START_PROCESS_INSTANCE_ERROR(50014, "batch start process instance error: {0}", "批量运行工作流实例错误: {0}"),
PROCESS_INSTANCE_ERROR(50014, "process instance delete error: {0}", "工作流实例删除[{0}]错误"),
EXECUTE_PROCESS_INSTANCE_ERROR(50015, "execute process instance error", "操作工作流实例错误"),
CHECK_PROCESS_DEFINITION_ERROR(50016, "check process definition error", "工作流定义错误"),
QUERY_RECIPIENTS_AND_COPYERS_BY_PROCESS_DEFINITION_ERROR(50017, "query recipients and copyers by process definition error", "查询收件人和抄送人错误"),
@ -268,7 +267,6 @@ public enum Status {
DELETE_SCHEDULE_CRON_BY_ID_ERROR(50024, "delete schedule by id error", "删除调度配置错误"),
BATCH_DELETE_PROCESS_DEFINE_ERROR(50025, "batch delete process definition error", "批量删除工作流定义错误"),
BATCH_DELETE_PROCESS_DEFINE_BY_CODES_ERROR(50026, "batch delete process definition by codes {0} error", "批量删除工作流定义[{0}]错误"),
DELETE_PROCESS_DEFINE_BY_CODES_ERROR(50026, "delete process definition by codes {0} error", "删除工作流定义[{0}]错误"),
TENANT_NOT_SUITABLE(50027, "there is not any tenant suitable, please choose a tenant available.", "没有合适的租户,请选择可用的租户"),
EXPORT_PROCESS_DEFINE_BY_ID_ERROR(50028, "export process definition by id error", "导出工作流定义错误"),
BATCH_EXPORT_PROCESS_DEFINE_BY_IDS_ERROR(50028, "batch export process definition by ids error", "批量导出工作流定义错误"),
@ -396,11 +394,7 @@ public enum Status {
VERIFY_K8S_NAMESPACE_ERROR(1300007, "verify k8s and namespace error", "验证k8s命名空间信息错误"),
DELETE_K8S_NAMESPACE_BY_ID_ERROR(1300008, "delete k8s namespace by id error", "删除命名空间错误"),
VERIFY_PARAMETER_NAME_FAILED(1300009, "The file name verify failed", "文件命名校验失败"),
STORE_OPERATE_CREATE_ERROR(1300010, "create the resource failed", "存储操作失败"),
GRANT_K8S_NAMESPACE_ERROR(1300011, "grant namespace error", "授权资源错误"),
QUERY_UNAUTHORIZED_NAMESPACE_ERROR(1300012, "query unauthorized namespace error", "查询未授权命名空间错误"),
QUERY_AUTHORIZED_NAMESPACE_ERROR(1300013, "query authorized namespace error", "查询授权命名空间错误"),
QUERY_CAN_USE_K8S_CLUSTER_ERROR(1300014, "login user query can used k8s cluster list error", "查询可用k8s集群错误");
STORE_OPERATE_CREATE_ERROR(1300010, "create the resource failed", "存储操作失败");
private final int code;
private final String enMsg;

View File

@ -17,12 +17,8 @@
package org.apache.dolphinscheduler.api.service;
import org.apache.dolphinscheduler.dao.entity.ExecuteStatusCount;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.ibatis.annotations.Param;
import java.util.Date;
import java.util.List;
import java.util.Map;
/**
@ -33,10 +29,10 @@ public interface DataAnalysisService {
/**
* statistical task instance status data
*
* @param loginUser login user
* @param loginUser login user
* @param projectCode project code
* @param startDate start date
* @param endDate end date
* @param startDate start date
* @param endDate end date
* @return task state count data
*/
Map<String, Object> countTaskStateByProject(User loginUser, long projectCode, String startDate, String endDate);
@ -44,20 +40,20 @@ public interface DataAnalysisService {
/**
* statistical process instance status data
*
* @param loginUser login user
* @param loginUser login user
* @param projectCode project code
* @param startDate start date
* @param endDate end date
* @param startDate start date
* @param endDate end date
* @return process instance state count data
*/
Map<String, Object> countProcessInstanceStateByProject(User loginUser, long projectCode, String startDate, String endDate);
/**
* count the number of process definitions created by a certain user
* <p>
*
* We only need projects which users have permission to see to determine whether the definition belongs to the user or not.
*
* @param loginUser login user
* @param loginUser login user
* @param projectCode project code
* @return definition count data
*/
@ -79,17 +75,4 @@ public interface DataAnalysisService {
*/
Map<String, Object> countQueueState(User loginUser);
/**
* Statistics task instance group by given project codes list
* <p>
* We only need project codes to determine whether the task instance belongs to the user or not.
*
* @param startTime Statistics start time
* @param endTime Statistics end time
* @param projectCodes Project codes list to filter
* @return List of ExecuteStatusCount
*/
List<ExecuteStatusCount> countTaskInstanceAllStatesByProjectCodes(@Param("startTime") Date startTime,
@Param("endTime") Date endTime,
@Param("projectCodes") Long[] projectCodes);
}

View File

@ -18,16 +18,14 @@
package org.apache.dolphinscheduler.api.service;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.dao.entity.K8sNamespace;
import org.apache.dolphinscheduler.dao.entity.User;
import java.util.List;
import java.util.Map;
/**
* k8s namespace service
*/
public interface K8sNamespaceService {
public interface K8sNameSpaceService {
/**
* query namespace list paging
*
@ -46,23 +44,26 @@ public interface K8sNamespaceService {
* @param loginUser login user
* @param namespace namespace
* @param k8s k8s cluster, not null
* @param owner owner, may be null
* @param tag may be null; if set, the namespace is used for only one job type, such as flink or spark
* @param limitsCpu CPU limit; null means no limit
* @param limitsMemory memory limit; null means no limit
* @return
*/
Map<String, Object> createK8sNamespace(User loginUser, String namespace, String k8s, Double limitsCpu, Integer limitsMemory);
Map<String, Object> createK8sNamespace(User loginUser, String namespace, String k8s, String owner, String tag, Double limitsCpu, Integer limitsMemory);
/**
* update K8s Namespace tag and resource limit
*
* @param loginUser login user
* @param userName owner
* @param owner owner
* @param tag which type of job can use this namespace, such as flink (meaning only Flink jobs can use it); may be empty, meaning all types are available
* @param limitsCpu max cpu
* @param limitsMemory max memory
* @return
*/
Map<String, Object> updateK8sNamespace(User loginUser, int id, String userName, Double limitsCpu, Integer limitsMemory);
Map<String, Object> updateK8sNamespace(User loginUser, int id, String owner, String tag, Double limitsCpu, Integer limitsMemory);
/**
* verify namespace and k8s
@ -81,30 +82,4 @@ public interface K8sNamespaceService {
* @return
*/
Map<String, Object> deleteNamespaceById(User loginUser, int id);
/**
* query unauthorized namespace
*
* @param loginUser login user
* @param userId user id
* @return the namespaces which user have not permission to see
*/
Map<String, Object> queryUnauthorizedNamespace(User loginUser, Integer userId);
/**
* query unauthorized namespace
*
* @param loginUser login user
* @param userId user id
* @return namespaces which the user have permission to see
*/
Map<String, Object> queryAuthorizedNamespace(User loginUser, Integer userId);
/**
* query namespace can use
*
* @param loginUser login user
* @return namespace list
*/
List<K8sNamespace> queryNamespaceAvailable(User loginUser);
}

View File

@ -296,23 +296,6 @@ public interface ProcessDefinitionService {
*/
Map<String, Object> queryAllProcessDefinitionByProjectCode(User loginUser, long projectCode);
/**
* query process definition list by project code
*
* @param projectCode project code
* @return process definitions in the project
*/
Map<String, Object> queryProcessDefinitionListByProjectCode(long projectCode);
/**
* query task definition list by process definition code
*
* @param projectCode project code
* @param processDefinitionCode process definition code
* @return task definitions in the process definition
*/
Map<String, Object> queryTaskDefinitionListByProcessDefinitionCode(long projectCode, Long processDefinitionCode);
/**
* Encapsulates the TreeView structure
*

View File

@ -138,10 +138,10 @@ public interface ProjectService {
/**
* query all projects that have one or more process definitions.
* @param loginUser
*
* @return project list
*/
Map<String, Object> queryAllProjectList(User loginUser);
Map<String, Object> queryAllProjectList();
/**
* query authorized and user create project list by user id

View File

@ -194,17 +194,6 @@ public interface UsersService {
Map<String, Object> grantUDFFunction(User loginUser, int userId, String udfIds);
/**
* grant namespace
*
* @param loginUser login user
* @param userId user id
* @param namespaceIds namespace id array
* @return grant result code
*/
Map<String, Object> grantNamespaces(User loginUser, int userId, String namespaceIds);
/**
* grant datasource
*

View File

@ -20,7 +20,6 @@ package org.apache.dolphinscheduler.api.service.impl;
import org.apache.dolphinscheduler.api.dto.CommandStateCount;
import org.apache.dolphinscheduler.api.dto.DefineUserDto;
import org.apache.dolphinscheduler.api.dto.TaskCountDto;
import org.apache.dolphinscheduler.api.dto.TaskStateCount;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.DataAnalysisService;
import org.apache.dolphinscheduler.api.service.ProjectService;
@ -53,7 +52,6 @@ import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
@ -103,11 +101,11 @@ public class DataAnalysisServiceImpl extends BaseServiceImpl implements DataAnal
public Map<String, Object> countTaskStateByProject(User loginUser, long projectCode, String startDate, String endDate) {
return countStateByProject(
loginUser,
projectCode,
startDate,
endDate,
this::countTaskInstanceAllStatesByProjectCodes);
loginUser,
projectCode,
startDate,
endDate,
(start, end, projectCodes) -> this.taskInstanceMapper.countTaskInstanceStateByProjectCodes(start, end, projectCodes));
}
/**
@ -121,15 +119,15 @@ public class DataAnalysisServiceImpl extends BaseServiceImpl implements DataAnal
*/
@Override
public Map<String, Object> countProcessInstanceStateByProject(User loginUser, long projectCode, String startDate, String endDate) {
Map<String, Object> result = this.countStateByProject(
Map<String, Object> result = this.countStateByProject(
loginUser,
projectCode,
startDate,
endDate,
(start, end, projectCodes) -> this.processInstanceMapper.countInstanceStateByProjectCodes(start, end, projectCodes));
(start, end, projectCodes) -> this.processInstanceMapper.countInstanceStateByProjectCodes(start, end, projectCodes));
// process state count needs to remove state of forced success
if (result.containsKey(Constants.STATUS) && result.get(Constants.STATUS).equals(Status.SUCCESS)) {
((TaskCountDto) result.get(Constants.DATA_LIST)).removeStateFromCountList(ExecutionStatus.FORCED_SUCCESS);
((TaskCountDto)result.get(Constants.DATA_LIST)).removeStateFromCountList(ExecutionStatus.FORCED_SUCCESS);
}
return result;
}
@ -165,9 +163,9 @@ public class DataAnalysisServiceImpl extends BaseServiceImpl implements DataAnal
}
}
Long[] projectCodeArray = projectCode == 0 ? getProjectCodesArrays(loginUser)
: new Long[]{projectCode};
List<ExecuteStatusCount> processInstanceStateCounts = new ArrayList<>();
Long[] projectCodeArray = projectCode == 0 ? getProjectCodesArrays(loginUser)
: new Long[] {projectCode};
if (projectCodeArray.length != 0 || loginUser.getUserType() == UserType.ADMIN_USER) {
processInstanceStateCounts = instanceStateCounter.apply(start, end, projectCodeArray);
@ -205,7 +203,7 @@ public class DataAnalysisServiceImpl extends BaseServiceImpl implements DataAnal
List<DefinitionGroupByUser> defineGroupByUsers = new ArrayList<>();
Long[] projectCodeArray = projectCode == 0 ? getProjectCodesArrays(loginUser)
: new Long[]{projectCode};
: new Long[] {projectCode};
if (projectCodeArray.length != 0 || loginUser.getUserType() == UserType.ADMIN_USER) {
defineGroupByUsers = processDefinitionMapper.countDefinitionByProjectCodes(projectCodeArray);
}
@ -290,29 +288,4 @@ public class DataAnalysisServiceImpl extends BaseServiceImpl implements DataAnal
return result;
}
@Override
public List<ExecuteStatusCount> countTaskInstanceAllStatesByProjectCodes(Date startTime, Date endTime, Long[] projectCodes) {
Optional<List<ExecuteStatusCount>> startTimeStates = Optional.ofNullable(this.taskInstanceMapper.countTaskInstanceStateByProjectCodes(startTime, endTime, projectCodes));
List<ExecutionStatus> allState = Arrays.stream(ExecutionStatus.values()).collect(Collectors.toList());
List<ExecutionStatus> needRecountState;
if (startTimeStates.isPresent() && startTimeStates.get().size() != 0) {
List<ExecutionStatus> instanceState = startTimeStates.get().stream().map(ExecuteStatusCount::getExecutionStatus).collect(Collectors.toList());
// states with a count of 0 need to be recounted by submit time
needRecountState = allState.stream().filter(ele -> !instanceState.contains(ele)).collect(Collectors.toList());
if (needRecountState.size() == 0) {
return startTimeStates.get();
}
} else {
needRecountState = allState;
}
// use submit time to recount states whose count is 0
// if this code causes any issues, change it to recount only the specified states (0, 8, 9, 17) rather than relying on a state count of 0
List<ExecuteStatusCount> recounts = this.taskInstanceMapper
.countTaskInstanceStateByProjectCodesAndStatesBySubmitTime(startTime, endTime, projectCodes, needRecountState);
startTimeStates.orElseGet(ArrayList::new).addAll(recounts);
return startTimeStates.orElse(null);
}
}

View File

@ -215,10 +215,10 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
Map<String, Object> result = new HashMap<>();
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
// check process definition exists
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefineCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefineCode);
} else if (processDefinition.getReleaseState() != ReleaseState.ONLINE) {
// check process definition online
putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, String.valueOf(processDefineCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, processDefineCode);
} else if (!checkSubProcessDefinitionValid(processDefinition)){
// check sub process definition online
putMsg(result, Status.SUB_PROCESS_DEFINE_NOT_RELEASE);
@ -248,9 +248,6 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
taskDefinitions.stream()
.filter(task -> TaskConstants.TASK_TYPE_SUB_PROCESS.equalsIgnoreCase(task.getTaskType()))
.forEach(taskDefinition -> processDefinitionCodeSet.add(Long.valueOf(JSONUtils.getNodeString(taskDefinition.getTaskParams(), Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE))));
if (processDefinitionCodeSet.isEmpty()){
return true;
}
// check sub releaseState
List<ProcessDefinition> processDefinitions = processDefinitionMapper.queryByCodes(processDefinitionCodeSet);
@ -488,7 +485,7 @@ public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorServ
command.setProcessInstanceId(instanceId);
if (!processService.verifyIsNeedCreateCommand(command)) {
putMsg(result, Status.PROCESS_INSTANCE_EXECUTING_COMMAND, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_INSTANCE_EXECUTING_COMMAND, processDefinitionCode);
return result;
}

View File

@ -18,7 +18,7 @@
package org.apache.dolphinscheduler.api.service.impl;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.K8sNamespaceService;
import org.apache.dolphinscheduler.api.service.K8sNameSpaceService;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
@ -29,20 +29,15 @@ import org.apache.dolphinscheduler.service.k8s.K8sClientService;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
@ -50,9 +45,9 @@ import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
* k8s namespace service impl
*/
@Service
public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNamespaceService {
public class K8sNameSpaceServiceImpl extends BaseServiceImpl implements K8sNameSpaceService {
private static final Logger logger = LoggerFactory.getLogger(K8SNamespaceServiceImpl.class);
private static final Logger logger = LoggerFactory.getLogger(K8sNameSpaceServiceImpl.class);
private static String resourceYaml = "apiVersion: v1\n"
+ "kind: ResourceQuota\n"
@ -105,12 +100,14 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
* @param loginUser login user
* @param namespace namespace
* @param k8s k8s cluster, not null
* @param owner owner, may be null
* @param tag may be null; if set, the namespace is used for only one job type, such as flink or spark
* @param limitsCpu CPU limit; null means no limit
* @param limitsMemory memory limit; null means no limit
* @return
*/
@Override
public Map<String, Object> createK8sNamespace(User loginUser, String namespace, String k8s, Double limitsCpu, Integer limitsMemory) {
public Map<String, Object> createK8sNamespace(User loginUser, String namespace, String k8s, String owner, String tag, Double limitsCpu, Integer limitsMemory) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
@ -146,7 +143,8 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
k8sNamespaceObj.setNamespace(namespace);
k8sNamespaceObj.setK8s(k8s);
k8sNamespaceObj.setUserId(loginUser.getId());
k8sNamespaceObj.setOwner(owner);
k8sNamespaceObj.setTag(tag);
k8sNamespaceObj.setLimitsCpu(limitsCpu);
k8sNamespaceObj.setLimitsMemory(limitsMemory);
k8sNamespaceObj.setOnlineJobNum(0);
@ -156,15 +154,13 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
k8sNamespaceObj.setCreateTime(now);
k8sNamespaceObj.setUpdateTime(now);
if (!Constants.K8S_LOCAL_TEST_CLUSTER.equals(k8sNamespaceObj.getK8s())) {
try {
String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);
k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);
} catch (Exception e) {
logger.error("namespace create to k8s error", e);
putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());
return result;
}
try {
String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);
k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);
} catch (Exception e) {
logger.error("namespace create to k8s error", e);
putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());
return result;
}
k8sNamespaceMapper.insert(k8sNamespaceObj);
@ -177,13 +173,14 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
* update K8s Namespace tag and resource limit
*
* @param loginUser login user
* @param userName owner
* @param owner owner
* @param tag which type of job can use this namespace, such as flink (meaning only Flink jobs can use it); may be empty, meaning all types are available
* @param limitsCpu max cpu
* @param limitsMemory max memory
* @return
*/
@Override
public Map<String, Object> updateK8sNamespace(User loginUser, int id, String userName, Double limitsCpu, Integer limitsMemory) {
public Map<String, Object> updateK8sNamespace(User loginUser, int id, String owner, String tag, Double limitsCpu, Integer limitsMemory) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
@ -206,19 +203,18 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
}
Date now = new Date();
k8sNamespaceObj.setTag(tag);
k8sNamespaceObj.setLimitsCpu(limitsCpu);
k8sNamespaceObj.setLimitsMemory(limitsMemory);
k8sNamespaceObj.setUpdateTime(now);
if (!Constants.K8S_LOCAL_TEST_CLUSTER.equals(k8sNamespaceObj.getK8s())) {
try {
String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);
k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);
} catch (Exception e) {
logger.error("namespace update to k8s error", e);
putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());
return result;
}
k8sNamespaceObj.setOwner(owner);
try {
String yamlStr = genDefaultResourceYaml(k8sNamespaceObj);
k8sClientService.upsertNamespaceAndResourceToK8s(k8sNamespaceObj, yamlStr);
} catch (Exception e) {
logger.error("namespace update to k8s error", e);
putMsg(result, Status.K8S_CLIENT_OPS_ERROR, e.getMessage());
return result;
}
// update to db
k8sNamespaceMapper.updateById(k8sNamespaceObj);
@ -275,9 +271,8 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
putMsg(result, Status.K8S_NAMESPACE_NOT_EXIST, id);
return result;
}
if (!Constants.K8S_LOCAL_TEST_CLUSTER.equals(k8sNamespaceObj.getK8s())) {
k8sClientService.deleteNamespaceToK8s(k8sNamespaceObj.getNamespace(), k8sNamespaceObj.getK8s());
}
k8sClientService.deleteNamespaceToK8s(k8sNamespaceObj.getNamespace(), k8sNamespaceObj.getK8s());
k8sNamespaceMapper.deleteById(id);
putMsg(result, Status.SUCCESS);
return result;
@ -328,96 +323,4 @@ public class K8SNamespaceServiceImpl extends BaseServiceImpl implements K8sNames
}
return result;
}
/**
* query unauthorized namespace
*
* @param loginUser login user
* @param userId user id
* @return the namespaces which the user has no permission to see
*/
@Override
public Map<String, Object> queryUnauthorizedNamespace(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (loginUser.getId() != userId && isNotAdmin(loginUser, result)) {
return result;
}
// query all namespaces; this authorization is not handled like project authorization
List<K8sNamespace> namespaceList = k8sNamespaceMapper.selectList(null);
List<K8sNamespace> resultList = new ArrayList<>();
Set<K8sNamespace> namespaceSet;
if (namespaceList != null && !namespaceList.isEmpty()) {
namespaceSet = new HashSet<>(namespaceList);
List<K8sNamespace> authedProjectList = k8sNamespaceMapper.queryAuthedNamespaceListByUserId(userId);
resultList = getUnauthorizedNamespaces(namespaceSet, authedProjectList);
}
result.put(Constants.DATA_LIST, resultList);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query authorized namespace
*
* @param loginUser login user
* @param userId user id
* @return namespaces which the user has permission to see
*/
@Override
public Map<String, Object> queryAuthorizedNamespace(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (loginUser.getId() != userId && isNotAdmin(loginUser, result)) {
return result;
}
List<K8sNamespace> namespaces = k8sNamespaceMapper.queryAuthedNamespaceListByUserId(userId);
result.put(Constants.DATA_LIST, namespaces);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query namespaces the user can use
*
* @param loginUser login user
* @return namespace list
*/
@Override
public List<K8sNamespace> queryNamespaceAvailable(User loginUser) {
if (isAdmin(loginUser)) {
return k8sNamespaceMapper.selectList(null);
} else {
return k8sNamespaceMapper.queryNamespaceAvailable(loginUser.getId());
}
}
/**
* get unauthorized namespace
*
* @param namespaceSet namespace set
* @param authedNamespaceList authed namespace list
* @return namespace list without authorization
*/
private List<K8sNamespace> getUnauthorizedNamespaces(Set<K8sNamespace> namespaceSet, List<K8sNamespace> authedNamespaceList) {
List<K8sNamespace> resultList = new ArrayList<>();
for (K8sNamespace k8sNamespace : namespaceSet) {
boolean existAuth = false;
if (authedNamespaceList != null && !authedNamespaceList.isEmpty()) {
for (K8sNamespace k8sNamespaceAuth : authedNamespaceList) {
if (k8sNamespace.equals(k8sNamespaceAuth)) {
existAuth = true;
}
}
}
if (!existAuth) {
resultList.add(k8sNamespace);
}
}
return resultList;
}
}
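The removed `getUnauthorizedNamespaces` helper is a plain set difference written as two nested loops. A minimal stream-based sketch of the same filtering — illustrative only, assuming `K8sNamespace` overrides `equals` and `hashCode` consistently (the caller already stores instances in a `HashSet`, so the original code relies on that too):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: keep every namespace that is absent from the authorized list.
private List<K8sNamespace> getUnauthorizedNamespaces(Set<K8sNamespace> namespaceSet,
                                                     List<K8sNamespace> authedNamespaceList) {
    Set<K8sNamespace> authed = authedNamespaceList == null
            ? new HashSet<>()
            : new HashSet<>(authedNamespaceList);
    return namespaceSet.stream()
            .filter(namespace -> !authed.contains(namespace))
            .collect(Collectors.toList());
}
```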

View File

@ -43,7 +43,6 @@ import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ProcessExecutionTypeEnum;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.dao.entity.DependentSimplifyDefinition;
import org.apache.dolphinscheduler.plugin.task.api.enums.TaskTimeoutStrategy;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.enums.UserType;
@ -102,7 +101,6 @@ import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
@ -482,7 +480,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
} else {
Tenant tenant = tenantMapper.queryById(processDefinition.getTenantId());
if (tenant != null) {
@ -576,7 +574,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
// check process definition exists
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
@ -699,7 +697,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
@ -711,7 +709,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
// check process definition is already online
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
putMsg(result, Status.PROCESS_DEFINE_STATE_ONLINE, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_STATE_ONLINE, code);
return result;
}
// check process instances is already running
@ -777,7 +775,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
switch (releaseState) {
@ -1341,7 +1339,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
logger.info("process define not exists");
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
DagData dagData = processService.genDagData(processDefinition);
@ -1421,58 +1419,6 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
return result;
}
/**
* query process definition list by project code
*
* @param projectCode project code
* @return process definition list in the project
*/
@Override
public Map<String, Object> queryProcessDefinitionListByProjectCode(long projectCode) {
Map<String, Object> result = new HashMap<>();
List<DependentSimplifyDefinition> processDefinitions = processDefinitionMapper.queryDefinitionListByProjectCodeAndProcessDefinitionCodes(projectCode, null);
result.put(Constants.DATA_LIST, processDefinitions);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query task definition list by process definition code
*
* @param projectCode project code
* @param processDefinitionCode process definition code
* @return task definition list in the process definition
*/
@Override
public Map<String, Object> queryTaskDefinitionListByProcessDefinitionCode(long projectCode, Long processDefinitionCode) {
Map<String, Object> result = new HashMap<>();
Set<Long> definitionCodesSet = new HashSet<>();
definitionCodesSet.add(processDefinitionCode);
List<DependentSimplifyDefinition> processDefinitions = processDefinitionMapper.queryDefinitionListByProjectCodeAndProcessDefinitionCodes(projectCode, definitionCodesSet);
// query process task relation
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryProcessTaskRelationsByProcessDefinitionCode(
processDefinitions.get(0).getCode(),
processDefinitions.get(0).getVersion());
// query task definition log
List<TaskDefinitionLog> taskDefinitionLogsList = processService.genTaskDefineList(processTaskRelations);
List<DependentSimplifyDefinition> taskDefinitionList = new ArrayList<>();
for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogsList) {
DependentSimplifyDefinition dependentSimplifyDefinition = new DependentSimplifyDefinition();
dependentSimplifyDefinition.setCode(taskDefinitionLog.getCode());
dependentSimplifyDefinition.setName(taskDefinitionLog.getName());
dependentSimplifyDefinition.setVersion(taskDefinitionLog.getVersion());
taskDefinitionList.add(dependentSimplifyDefinition);
}
result.put(Constants.DATA_LIST, taskDefinitionList);
putMsg(result, Status.SUCCESS);
return result;
}
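One thing the removed `queryTaskDefinitionListByProcessDefinitionCode` does not guard: it indexes `processDefinitions.get(0)` directly, so an unknown `processDefinitionCode` would surface as an `IndexOutOfBoundsException` rather than a status result. A defensive sketch of that first step, reusing the `Status.PROCESS_DEFINE_NOT_EXIST` convention from the surrounding code (illustrative, not the project's implementation):

```java
// Guard the lookup before indexing, instead of assuming the mapper always
// returns at least one row for the given process definition code.
List<DependentSimplifyDefinition> processDefinitions =
        processDefinitionMapper.queryDefinitionListByProjectCodeAndProcessDefinitionCodes(
                projectCode, definitionCodesSet);
if (processDefinitions.isEmpty()) {
    putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
    return result;
}
```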
/**
* Encapsulates the TreeView structure
*
@ -1487,7 +1433,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (null == processDefinition || projectCode != processDefinition.getProjectCode()) {
logger.info("process define not exists");
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
DAG<String, TaskNode, TaskNodeRelation> dag = processService.genDagGraph(processDefinition);
@ -1897,7 +1843,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
} else {
if (processDefinition.getVersion() == version) {
putMsg(result, Status.MAIN_TABLE_USING_VERSION);
@ -2085,7 +2031,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
// check process definition exists
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
@ -2186,7 +2132,7 @@ public class ProcessDefinitionServiceImpl extends BaseServiceImpl implements Pro
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(code);
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, code);
return result;
}
Schedule scheduleObj = scheduleMapper.queryByProcessDefinitionCode(code);
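The recurring change in this file passes the raw `long` code to `putMsg` instead of `String.valueOf(code)`. Whether that is purely cosmetic depends on how `putMsg` renders its message — the method body is not part of this diff. If it delegates to `MessageFormat` (an assumption), a numeric argument is run through locale-aware number formatting while a string argument is printed verbatim:

```java
import java.text.MessageFormat;
import java.util.Locale;

public class PutMsgFormatDemo {
    public static void main(String[] args) {
        MessageFormat template =
                new MessageFormat("process definition {0} does not exist", Locale.US);
        long code = 1234567L;

        // Numeric argument: MessageFormat applies grouping separators.
        System.out.println(template.format(new Object[]{code}));
        // -> process definition 1,234,567 does not exist

        // String argument: rendered verbatim.
        System.out.println(template.format(new Object[]{String.valueOf(code)}));
        // -> process definition 1234567 does not exist
    }
}
```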

View File

@ -59,7 +59,6 @@ import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
@ -150,9 +149,6 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
@Autowired
private TaskPluginManager taskPluginManager;
@Autowired
private ScheduleMapper scheduleMapper;
/**
* return top n SUCCESS process instances ordered by running time, started between startTime and endTime
*/
@ -476,17 +472,7 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
processInstance.getName(), processInstance.getState().toString(), "update");
return result;
}
//
Map<String, String> commandParamMap = JSONUtils.toMap(processInstance.getCommandParam());
String timezoneId = null;
if (commandParamMap == null || StringUtils.isBlank(commandParamMap.get(Constants.SCHEDULE_TIMEZONE))) {
timezoneId = loginUser.getTimeZone();
} else {
timezoneId = commandParamMap.get(Constants.SCHEDULE_TIMEZONE);
}
setProcessInstance(processInstance, tenantCode, scheduleTime, globalParams, timeout, timezoneId);
setProcessInstance(processInstance, tenantCode, scheduleTime, globalParams, timeout);
List<TaskDefinitionLog> taskDefinitionLogs = JSONUtils.toList(taskDefinitionJson, TaskDefinitionLog.class);
if (taskDefinitionLogs.isEmpty()) {
putMsg(result, Status.DATA_IS_NOT_VALID, taskDefinitionJson);
@ -552,7 +538,7 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
/**
* update process instance attributes
*/
private void setProcessInstance(ProcessInstance processInstance, String tenantCode, String scheduleTime, String globalParams, int timeout, String timezone) {
private void setProcessInstance(ProcessInstance processInstance, String tenantCode, String scheduleTime, String globalParams, int timeout) {
Date schedule = processInstance.getScheduleTime();
if (scheduleTime != null) {
schedule = DateUtils.getScheduleDate(scheduleTime);
@ -560,7 +546,7 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
processInstance.setScheduleTime(schedule);
List<Property> globalParamList = JSONUtils.toList(globalParams, Property.class);
Map<String, String> globalParamMap = globalParamList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue));
globalParams = ParameterUtils.curingGlobalParams(globalParamMap, globalParamList, processInstance.getCmdTypeIfComplement(), schedule, timezone);
globalParams = ParameterUtils.curingGlobalParams(globalParamMap, globalParamList, processInstance.getCmdTypeIfComplement(), schedule);
processInstance.setTimeout(timeout);
processInstance.setTenantCode(tenantCode);
processInstance.setGlobalParams(globalParams);
@ -686,14 +672,9 @@ public class ProcessInstanceServiceImpl extends BaseServiceImpl implements Proce
return result;
}
Map<String, String> commandParam = JSONUtils.toMap(processInstance.getCommandParam());
String timezone = null;
if (commandParam != null) {
timezone = commandParam.get(Constants.SCHEDULE_TIMEZONE);
}
Map<String, String> timeParams = BusinessTimeUtils
.getBusinessTime(processInstance.getCmdTypeIfComplement(),
processInstance.getScheduleTime(), timezone);
processInstance.getScheduleTime());
String userDefinedParams = processInstance.getGlobalParams();
// global params
List<Property> globalParams = new ArrayList<>();
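For context, the timezone resolution removed above preferred a `Constants.SCHEDULE_TIMEZONE` entry in the instance's command parameters and fell back to the login user's timezone. As a standalone sketch of that order — the same logic as the deleted lines, extracted into a helper for readability:

```java
import java.util.Map;
import org.apache.commons.lang3.StringUtils;

// Prefer the schedule timezone carried in the command parameters;
// fall back to the login user's timezone when it is absent or blank.
private String resolveTimezone(Map<String, String> commandParamMap, User loginUser) {
    if (commandParamMap == null
            || StringUtils.isBlank(commandParamMap.get(Constants.SCHEDULE_TIMEZONE))) {
        return loginUser.getTimeZone();
    }
    return commandParamMap.get(Constants.SCHEDULE_TIMEZONE);
}
```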

View File

@ -107,7 +107,7 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
return result;
}
if (processDefinition.getProjectCode() != projectCode) {
@ -122,7 +122,7 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
.collect(Collectors.toMap(ProcessTaskRelation::getPreTaskCode, processTaskRelation -> processTaskRelation));
if (!preTaskCodeMap.isEmpty()) {
if (preTaskCodeMap.containsKey(preTaskCode) || (!preTaskCodeMap.containsKey(0L) && preTaskCode == 0L)) {
putMsg(result, Status.PROCESS_TASK_RELATION_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_TASK_RELATION_EXIST, processDefinitionCode);
return result;
}
if (preTaskCodeMap.containsKey(0L) && preTaskCode != 0L) {
@ -202,12 +202,12 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
return result;
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (null == taskDefinition) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
return result;
}
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
@ -291,38 +291,29 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
putMsg(result, Status.DATA_IS_NULL, "preTaskCodes");
return result;
}
List<Long> currentUpstreamList = upstreamList.stream().map(ProcessTaskRelation::getPreTaskCode).collect(Collectors.toList());
if (currentUpstreamList.contains(0L)) {
putMsg(result, Status.DATA_IS_NOT_VALID, "currentUpstreamList");
return result;
}
List<Long> tmpCurrent = Lists.newArrayList(currentUpstreamList);
tmpCurrent.removeAll(preTaskCodeList);
preTaskCodeList.removeAll(currentUpstreamList);
if (!preTaskCodeList.isEmpty()) {
putMsg(result, Status.DATA_IS_NOT_VALID, StringUtils.join(preTaskCodeList, Constants.COMMA));
return result;
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(upstreamList.get(0).getProcessDefinitionCode());
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(upstreamList.get(0).getProcessDefinitionCode()));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, upstreamList.get(0).getProcessDefinitionCode());
return result;
}
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinition.getCode());
List<ProcessTaskRelation> processTaskRelationList = Lists.newArrayList(processTaskRelations);
List<ProcessTaskRelation> processTaskRelationWaitRemove = Lists.newArrayList();
for (ProcessTaskRelation processTaskRelation : processTaskRelationList) {
if (currentUpstreamList.size() > 1) {
if (currentUpstreamList.contains(processTaskRelation.getPreTaskCode())) {
currentUpstreamList.remove(processTaskRelation.getPreTaskCode());
if (preTaskCodeList.size() > 1) {
if (preTaskCodeList.contains(processTaskRelation.getPreTaskCode())) {
preTaskCodeList.remove(processTaskRelation.getPreTaskCode());
processTaskRelationWaitRemove.add(processTaskRelation);
}
} else {
if (processTaskRelation.getPostTaskCode() == taskCode && (currentUpstreamList.isEmpty() || tmpCurrent.isEmpty())) {
if (processTaskRelation.getPostTaskCode() == taskCode) {
processTaskRelation.setPreTaskVersion(0);
processTaskRelation.setPreTaskCode(0L);
}
}
if (preTaskCodeList.contains(processTaskRelation.getPostTaskCode())) {
processTaskRelationWaitRemove.add(processTaskRelation);
}
}
processTaskRelationList.removeAll(processTaskRelationWaitRemove);
updateProcessDefiniteVersion(loginUser, result, processDefinition);
@ -364,7 +355,7 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(downstreamList.get(0).getProcessDefinitionCode());
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(downstreamList.get(0).getProcessDefinitionCode()));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, downstreamList.get(0).getProcessDefinitionCode());
return result;
}
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinition.getCode());
@ -468,7 +459,7 @@ public class ProcessTaskRelationServiceImpl extends BaseServiceImpl implements P
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
return result;
}
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
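A side note on the `preTaskCodeList.remove(...)` calls in the loop above: with a `List<Long>`, overload resolution decides between `remove(int index)` and `remove(Object)`. A `long` argument boxes to `Long` and removes by value, which is what the code above wants, but an `int` would remove by position. A small self-contained demo of the pitfall, illustrative only:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListRemoveDemo {
    public static void main(String[] args) {
        List<Long> codes = new ArrayList<>(Arrays.asList(10L, 20L, 30L));

        long preTaskCode = 20L;
        // long boxes to Long -> remove(Object): removes the VALUE 20L.
        codes.remove(preTaskCode);
        System.out.println(codes); // [10, 30]

        int index = 0;
        // int -> remove(int): removes the element AT position 0, not the value 0.
        codes.remove(index);
        System.out.println(codes); // [30]
    }
}
```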

View File

@ -38,13 +38,7 @@ import org.apache.dolphinscheduler.dao.mapper.UserMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.*;
import static org.apache.dolphinscheduler.api.utils.CheckUtils.checkDesc;
@ -506,13 +500,13 @@ public class ProjectServiceImpl extends BaseServiceImpl implements ProjectServic
/**
* query all project list
* @param user
*
* @return project list
*/
@Override
public Map<String, Object> queryAllProjectList(User user) {
public Map<String, Object> queryAllProjectList() {
Map<String, Object> result = new HashMap<>();
List<Project> projects = projectMapper.queryAllProject(user.getUserType() == UserType.ADMIN_USER ? 0 : user.getId());
List<Project> projects = projectMapper.queryAllProject();
result.put(Constants.DATA_LIST, projects);
putMsg(result, Status.SUCCESS);
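In the removed call, admins passed `0` while ordinary users passed their own id, which suggests `0` acts as a "no owner filter" sentinel in `queryAllProject(int)` — an assumption, since the mapper itself is not part of this diff. The old calling convention, restated as a sketch:

```java
// Hypothetical restatement of the removed convention: userId 0 means
// "no owner filter", so admins see every project (mapper behavior assumed).
int filterUserId = user.getUserType() == UserType.ADMIN_USER ? 0 : user.getId();
List<Project> projects = projectMapper.queryAllProject(filterUserId);
```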

View File

@ -66,7 +66,16 @@ import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
import java.rmi.ServerException;
import java.text.MessageFormat;
import java.util.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.UUID;
import java.util.regex.Matcher;
import java.util.stream.Collectors;
@ -318,11 +327,6 @@ public class ResourcesServiceImpl extends BaseServiceImpl implements ResourcesSe
return result;
}
if (!PropertyUtils.getResUploadStartupState()){
putMsg(result, Status.STORAGE_NOT_STARTUP);
return result;
}
if (resource.isDirectory() && storageOperate.returnStorageType().equals(ResUploadType.S3) && !resource.getFileName().equals(name)) {
putMsg(result, Status.S3_CANNOT_RENAME);
return result;
@ -525,7 +529,7 @@ public class ResourcesServiceImpl extends BaseServiceImpl implements ResourcesSe
String nameSuffix = Files.getFileExtension(name);
// determine file suffix
if (!fileSuffix.equalsIgnoreCase(nameSuffix)) {
if (!(StringUtils.isNotEmpty(fileSuffix) && fileSuffix.equalsIgnoreCase(nameSuffix))) {
// rename file suffix and original suffix must be consistent
logger.error("rename file suffix and original suffix must be consistent: {}", RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.RESOURCE_SUFFIX_FORBID_CHANGE);
@ -629,7 +633,7 @@ public class ResourcesServiceImpl extends BaseServiceImpl implements ResourcesSe
String nameSuffix = Files.getFileExtension(fullName);
// determine file suffix
if (!fileSuffix.equalsIgnoreCase(nameSuffix)) {
if (!(StringUtils.isNotEmpty(fileSuffix) && fileSuffix.equalsIgnoreCase(nameSuffix))) {
return false;
}
// query tenant
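The two suffix predicates in this file agree except when the original file has no extension (Guava's `Files.getFileExtension` returns an empty string for extension-less names). Only the variant guarded with `StringUtils.isNotEmpty` rejects that case, as a quick runnable comparison shows:

```java
import org.apache.commons.lang3.StringUtils;

public class SuffixCheckDemo {
    // Unguarded variant: rejects only when the suffixes differ.
    static boolean rejectsA(String fileSuffix, String nameSuffix) {
        return !fileSuffix.equalsIgnoreCase(nameSuffix);
    }

    // Guarded variant: additionally rejects an empty original suffix.
    static boolean rejectsB(String fileSuffix, String nameSuffix) {
        return !(StringUtils.isNotEmpty(fileSuffix) && fileSuffix.equalsIgnoreCase(nameSuffix));
    }

    public static void main(String[] args) {
        // Identical non-empty suffixes: both accept.
        System.out.println(rejectsA("sh", "sh") + " " + rejectsB("sh", "sh")); // false false
        // Different suffixes: both reject.
        System.out.println(rejectsA("sh", "py") + " " + rejectsB("sh", "py")); // true true
        // Extension-less file (both suffixes empty): only the guarded variant rejects.
        System.out.println(rejectsA("", "") + " " + rejectsB("", ""));         // false true
    }
}
```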

View File

@ -257,7 +257,7 @@ public class SchedulerServiceImpl extends BaseServiceImpl implements SchedulerSe
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(schedule.getProcessDefinitionCode());
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(schedule.getProcessDefinitionCode()));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, schedule.getProcessDefinitionCode());
return result;
}
@ -306,7 +306,7 @@ public class SchedulerServiceImpl extends BaseServiceImpl implements SchedulerSe
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(scheduleObj.getProcessDefinitionCode());
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(scheduleObj.getProcessDefinitionCode()));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, scheduleObj.getProcessDefinitionCode());
return result;
}
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(projectCode, scheduleObj.getProcessDefinitionCode());
@ -336,7 +336,7 @@ public class SchedulerServiceImpl extends BaseServiceImpl implements SchedulerSe
if (subProcessDefinition.getReleaseState() != ReleaseState.ONLINE) {
logger.info("not release process definition id: {} , name : {}",
subProcessDefinition.getId(), subProcessDefinition.getName());
putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, String.valueOf(subProcessDefinition.getId()));
putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, subProcessDefinition.getId());
return result;
}
}
@ -406,7 +406,7 @@ public class SchedulerServiceImpl extends BaseServiceImpl implements SchedulerSe
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefineCode);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefineCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefineCode);
return result;
}
@ -618,7 +618,7 @@ public class SchedulerServiceImpl extends BaseServiceImpl implements SchedulerSe
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
return result;
}

View File

@ -180,11 +180,11 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
}
ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefinitionCode);
return result;
}
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
putMsg(result, Status.PROCESS_DEFINE_STATE_ONLINE, String.valueOf(processDefinitionCode));
putMsg(result, Status.PROCESS_DEFINE_STATE_ONLINE, processDefinitionCode);
return result;
}
TaskDefinitionLog taskDefinition = JSONUtils.parseObject(taskDefinitionJsonObj, TaskDefinitionLog.class);
@ -314,7 +314,7 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (taskDefinition == null || projectCode != taskDefinition.getProjectCode()) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
return result;
}
if (processService.isTaskOnline(taskCode) && taskDefinition.getFlag() == Flag.YES) {
@ -406,7 +406,7 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (taskDefinition == null) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
return null;
}
if (processService.isTaskOnline(taskCode) && taskDefinition.getFlag() == Flag.YES) {
@ -557,7 +557,7 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (taskDefinition == null || projectCode != taskDefinition.getProjectCode()) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
return result;
}
TaskDefinitionLog taskDefinitionUpdate = taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, version);
@ -618,7 +618,7 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (taskDefinition == null) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
} else {
if (taskDefinition.getVersion() == version) {
putMsg(result, Status.MAIN_TABLE_USING_VERSION);
@ -645,7 +645,7 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(taskCode);
if (taskDefinition == null || projectCode != taskDefinition.getProjectCode()) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(taskCode));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, taskCode);
} else {
result.put(Constants.DATA_LIST, taskDefinition);
putMsg(result, Status.SUCCESS);
@ -752,12 +752,12 @@ public class TaskDefinitionServiceImpl extends BaseServiceImpl implements TaskDe
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByCode(code);
if (taskDefinition == null || projectCode != taskDefinition.getProjectCode()) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, code);
return result;
}
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(code, taskDefinition.getVersion());
if (taskDefinitionLog == null) {
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, String.valueOf(code));
putMsg(result, Status.TASK_DEFINE_NOT_EXIST, code);
return result;
}
switch (releaseState) {

View File

@ -36,7 +36,6 @@ import org.apache.dolphinscheduler.common.utils.EncryptionUtils;
import org.apache.dolphinscheduler.common.utils.PropertyUtils;
import org.apache.dolphinscheduler.dao.entity.AlertGroup;
import org.apache.dolphinscheduler.dao.entity.DatasourceUser;
import org.apache.dolphinscheduler.dao.entity.K8sNamespaceUser;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Resource;
@ -47,7 +46,6 @@ import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.AccessTokenMapper;
import org.apache.dolphinscheduler.dao.mapper.AlertGroupMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceUserMapper;
import org.apache.dolphinscheduler.dao.mapper.K8sNamespaceUserMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectUserMapper;
@ -119,9 +117,6 @@ public class UsersServiceImpl extends BaseServiceImpl implements UsersService {
@Autowired(required = false)
private StorageOperate storageOperate;
@Autowired
private K8sNamespaceUserMapper k8sNamespaceUserMapper;
/**
* create user, only system admin has permission
*
@ -803,54 +798,6 @@ public class UsersServiceImpl extends BaseServiceImpl implements UsersService {
return result;
}
/**
* grant namespace
*
* @param loginUser login user
* @param userId user id
* @param namespaceIds namespace id array
* @return grant result code
*/
@Override
@Transactional(rollbackFor = RuntimeException.class)
public Map<String, Object> grantNamespaces(User loginUser, int userId, String namespaceIds) {
Map<String, Object> result = new HashMap<>();
result.put(Constants.STATUS, false);
//only admin can operate
if (check(result, !isAdmin(loginUser), Status.USER_NO_OPERATION_PERM)) {
return result;
}
//check exist
User tempUser = userMapper.selectById(userId);
if (tempUser == null) {
putMsg(result, Status.USER_NOT_EXIST, userId);
return result;
}
k8sNamespaceUserMapper.deleteNamespaceRelation(0, userId);
if (StringUtils.isNotEmpty(namespaceIds)) {
String[] namespaceIdArr = namespaceIds.split(",");
for (String namespaceId : namespaceIdArr) {
Date now = new Date();
K8sNamespaceUser namespaceUser = new K8sNamespaceUser();
namespaceUser.setUserId(userId);
namespaceUser.setNamespaceId(Integer.parseInt(namespaceId));
namespaceUser.setPerm(7);
namespaceUser.setCreateTime(now);
namespaceUser.setUpdateTime(now);
k8sNamespaceUserMapper.insert(namespaceUser);
}
}
putMsg(result, Status.SUCCESS);
return result;
}
/**
* grant datasource
*

View File

@ -26,7 +26,7 @@ import java.util.regex.Pattern;
*/
public class RegexUtils {
private static final String LINUX_USERNAME_PATTERN = "^[a-zA-Z0-9_].{0,30}";
private static final String LINUX_USERNAME_PATTERN = "[a-z_][a-z\\d_]{0,30}";
private RegexUtils() {
}
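The two `LINUX_USERNAME_PATTERN` values behave very differently on the inputs exercised by `RegexUtilsTest` further down. Assuming `isValidLinuxUserName` matches the whole input (i.e. `Matcher#matches`), the looser pattern accepts leading digits and arbitrary trailing characters, while the stricter one allows only lowercase letters, digits, and underscores after a leading lowercase letter or underscore — which lines up with the flipped assertions in the test diff:

```java
import java.util.regex.Pattern;

public class LinuxUsernamePatternDemo {
    public static void main(String[] args) {
        Pattern loose  = Pattern.compile("^[a-zA-Z0-9_].{0,30}");
        Pattern strict = Pattern.compile("[a-z_][a-z\\d_]{0,30}");

        for (String name : new String[]{"10000", "00hayden", "hayden", "hayden.8"}) {
            System.out.printf("%-10s loose=%-5b strict=%b%n",
                    name,
                    loose.matcher(name).matches(),
                    strict.matcher(name).matches());
        }
        // 10000      loose=true  strict=false  (digits may not lead)
        // 00hayden   loose=true  strict=false
        // hayden     loose=true  strict=true
        // hayden.8   loose=true  strict=false  ('.' is not in the allowed class)
    }
}
```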

View File

@ -27,16 +27,16 @@ RUN_MODE=run mode
TIMEOUT=timeout
EXECUTE_ACTION_TO_PROCESS_INSTANCE_NOTES=execute action to process instance
EXECUTE_TYPE=execute type
START_CHECK_PROCESS_DEFINITION_NOTES=start check process definition
GET_RECEIVER_CC_NOTES=query receiver cc
START_CHECK_PROCESS_DEFINITION_NOTES=start check process definition
GET_RECEIVER_CC_NOTES=query receiver cc
DESC=description
GROUP_NAME=group name
GROUP_TYPE=group type
QUERY_ALERT_GROUP_LIST_NOTES=query alert group list
UPDATE_ALERT_GROUP_NOTES=update alert group
DELETE_ALERT_GROUP_BY_ID_NOTES=delete alert group by id
VERIFY_ALERT_GROUP_NAME_NOTES=verify alert group name, check alert group exist or not
GRANT_ALERT_GROUP_NOTES=grant alert group
QUERY_ALERT_GROUP_LIST_NOTES=query alert group list
UPDATE_ALERT_GROUP_NOTES=update alert group
DELETE_ALERT_GROUP_BY_ID_NOTES=delete alert group by id
VERIFY_ALERT_GROUP_NAME_NOTES=verify alert group name, check alert group exist or not
GRANT_ALERT_GROUP_NOTES=grant alert group
USER_IDS=user id list
ALERT_GROUP_TAG=alert group related operation
ALERT_PLUGIN_INSTANCE_TAG=alert plugin instance related operation
@ -44,27 +44,27 @@ UPDATE_ALERT_PLUGIN_INSTANCE_NOTES=update alert plugin instance operation
CREATE_ALERT_PLUGIN_INSTANCE_NOTES=create alert plugin instance operation
DELETE_ALERT_PLUGIN_INSTANCE_NOTES=delete alert plugin instance operation
GET_ALERT_PLUGIN_INSTANCE_NOTES=get alert plugin instance operation
CREATE_ALERT_GROUP_NOTES=create alert group
CREATE_ALERT_GROUP_NOTES=create alert group
WORKER_GROUP_TAG=worker group related operation
SAVE_WORKER_GROUP_NOTES=create worker group
WORKER_GROUP_NAME=worker group name
WORKER_IP_LIST=worker ip list, eg. 192.168.1.1,192.168.1.2
QUERY_WORKER_GROUP_PAGING_NOTES=query worker group paging
QUERY_WORKER_GROUP_LIST_NOTES=query worker group list
DELETE_WORKER_GROUP_BY_ID_NOTES=delete worker group by id
QUERY_WORKER_GROUP_LIST_NOTES=query worker group list
DELETE_WORKER_GROUP_BY_ID_NOTES=delete worker group by id
DATA_ANALYSIS_TAG=analysis related operation of task state
COUNT_TASK_STATE_NOTES=count task state
COUNT_TASK_STATE_NOTES=count task state
COUNT_PROCESS_INSTANCE_NOTES=count process instance state
COUNT_PROCESS_DEFINITION_BY_USER_NOTES=count process definition by user
COUNT_COMMAND_STATE_NOTES=count command state
COUNT_PROCESS_DEFINITION_BY_USER_NOTES=count process definition by user
COUNT_COMMAND_STATE_NOTES=count command state
COUNT_QUEUE_STATE_NOTES=count the running status of the task in the queue
ACCESS_TOKEN_TAG=access token related operation
MONITOR_TAG=monitor related operation
MASTER_LIST_NOTES=master server list
WORKER_LIST_NOTES=worker server list
QUERY_DATABASE_STATE_NOTES=query database state
QUERY_ZOOKEEPER_STATE_NOTES=QUERY ZOOKEEPER STATE
QUERY_DATABASE_STATE_NOTES=query database state
QUERY_ZOOKEEPER_STATE_NOTES=QUERY ZOOKEEPER STATE
TASK_STATE=task instance state
SOURCE_TABLE=SOURCE TABLE
DEST_TABLE=dest table
@ -79,18 +79,18 @@ DATA_SOURCE_HOST=DATA SOURCE HOST
DATA_SOURCE_PORT=data source port
DATABASE_NAME=database name
QUEUE_TAG=queue related operation
QUERY_QUEUE_LIST_NOTES=query queue list
QUERY_QUEUE_LIST_PAGING_NOTES=query queue list paging
QUERY_QUEUE_LIST_NOTES=query queue list
QUERY_QUEUE_LIST_PAGING_NOTES=query queue list paging
CREATE_QUEUE_NOTES=create queue
YARN_QUEUE_NAME=yarn(hadoop) queue name
QUEUE_ID=queue id
TENANT_DESC=tenant desc
QUERY_TENANT_LIST_PAGING_NOTES=query tenant list paging
QUERY_TENANT_LIST_NOTES=query tenant list
UPDATE_TENANT_NOTES=update tenant
DELETE_TENANT_NOTES=delete tenant
QUERY_TENANT_LIST_PAGING_NOTES=query tenant list paging
QUERY_TENANT_LIST_NOTES=query tenant list
UPDATE_TENANT_NOTES=update tenant
DELETE_TENANT_NOTES=delete tenant
RESOURCES_TAG=resource center related operation
CREATE_RESOURCE_NOTES=create resource
CREATE_RESOURCE_NOTES=create resource
RESOURCE_TYPE=resource file type
RESOURCE_NAME=resource name
RESOURCE_DESC=resource file desc
@ -99,29 +99,29 @@ RESOURCE_ID=resource id
QUERY_RESOURCE_LIST_NOTES=query resource list
DELETE_RESOURCE_BY_ID_NOTES=delete resource by id
VIEW_RESOURCE_BY_ID_NOTES=view resource by id
ONLINE_CREATE_RESOURCE_NOTES=online create resource
ONLINE_CREATE_RESOURCE_NOTES=online create resource
SUFFIX=resource file suffix
CONTENT=resource file content
UPDATE_RESOURCE_NOTES=edit resource file online
DOWNLOAD_RESOURCE_NOTES=download resource file
CREATE_UDF_FUNCTION_NOTES=create udf function
CREATE_UDF_FUNCTION_NOTES=create udf function
UDF_TYPE=UDF type
FUNC_NAME=function name
CLASS_NAME=package and class name
ARG_TYPES=arguments
UDF_DESC=udf desc
VIEW_UDF_FUNCTION_NOTES=view udf function
UPDATE_UDF_FUNCTION_NOTES=update udf function
QUERY_UDF_FUNCTION_LIST_PAGING_NOTES=query udf function list paging
VERIFY_UDF_FUNCTION_NAME_NOTES=verify udf function name
DELETE_UDF_FUNCTION_NOTES=delete udf function
AUTHORIZED_FILE_NOTES=authorized file
UNAUTHORIZED_FILE_NOTES=unauthorized file
AUTHORIZED_UDF_FUNC_NOTES=authorized udf func
UNAUTHORIZED_UDF_FUNC_NOTES=unauthorized udf func
VERIFY_QUEUE_NOTES=verify queue
VIEW_UDF_FUNCTION_NOTES=view udf function
UPDATE_UDF_FUNCTION_NOTES=update udf function
QUERY_UDF_FUNCTION_LIST_PAGING_NOTES=query udf function list paging
VERIFY_UDF_FUNCTION_NAME_NOTES=verify udf function name
DELETE_UDF_FUNCTION_NOTES=delete udf function
AUTHORIZED_FILE_NOTES=authorized file
UNAUTHORIZED_FILE_NOTES=unauthorized file
AUTHORIZED_UDF_FUNC_NOTES=authorized udf func
UNAUTHORIZED_UDF_FUNC_NOTES=unauthorized udf func
VERIFY_QUEUE_NOTES=verify queue
TENANT_TAG=tenant related operation
CREATE_TENANT_NOTES=create tenant
CREATE_TENANT_NOTES=create tenant
TENANT_CODE=os tenant code
QUEUE_NAME=queue name
PASSWORD=password
@ -131,19 +131,19 @@ DATA_SOURCE_KERBEROS_KRB5_CONF=the kerberos authentication parameter java.securi
DATA_SOURCE_KERBEROS_KEYTAB_USERNAME=the kerberos authentication parameter login.user.keytab.username
DATA_SOURCE_KERBEROS_KEYTAB_PATH=the kerberos authentication parameter login.user.keytab.path
PROJECT_TAG=project related operation
CREATE_PROJECT_NOTES=create project
CREATE_PROJECT_NOTES=create project
PROJECT_DESC=project description
UPDATE_PROJECT_NOTES=update project
UPDATE_PROJECT_NOTES=update project
PROJECT_ID=project id
QUERY_PROJECT_BY_ID_NOTES=query project info by project id
QUERY_PROJECT_LIST_PAGING_NOTES=QUERY PROJECT LIST PAGING
DELETE_PROJECT_BY_ID_NOTES=delete project by id
QUERY_PROJECT_LIST_PAGING_NOTES=QUERY PROJECT LIST PAGING
DELETE_PROJECT_BY_ID_NOTES=delete project by id
QUERY_UNAUTHORIZED_PROJECT_NOTES=query unauthorized project
QUERY_ALL_PROJECT_LIST_NOTES=query all project list
QUERY_AUTHORIZED_PROJECT_NOTES=query authorized project
QUERY_AUTHORIZED_USER_NOTES=query authorized user
TASK_RECORD_TAG=task record related operation
QUERY_TASK_RECORD_LIST_PAGING_NOTES=query task record list paging
QUERY_TASK_RECORD_LIST_PAGING_NOTES=query task record list paging
CREATE_TOKEN_NOTES=create access token for specified user
UPDATE_TOKEN_NOTES=update access token for specified user
TOKEN=access token string, it will be automatically generated when it absent
@ -159,11 +159,11 @@ RECEIVERS=receivers
RECEIVERS_CC=receivers cc
WORKER_GROUP_ID=worker server group id
PROCESS_INSTANCE_PRIORITY=process instance priority
UPDATE_SCHEDULE_NOTES=update schedule
UPDATE_SCHEDULE_NOTES=update schedule
SCHEDULE_ID=schedule id
ONLINE_SCHEDULE_NOTES=online schedule
OFFLINE_SCHEDULE_NOTES=offline schedule
QUERY_SCHEDULE_NOTES=query schedule
OFFLINE_SCHEDULE_NOTES=offline schedule
QUERY_SCHEDULE_NOTES=query schedule
QUERY_SCHEDULE_LIST_PAGING_NOTES=query schedule list paging
LOGIN_TAG=User login related operations
USER_NAME=user name
@ -198,7 +198,7 @@ PROCESS_INSTANCE_JSON=process instance info(json format)
SCHEDULE_TIME=schedule time
SYNC_DEFINE=update the information of the process instance to the process definition
RECOVERY_PROCESS_INSTANCE_FLAG=whether to recovery process instance
RECOVERY_PROCESS_INSTANCE_FLAG=whether to recovery process instance
SEARCH_VAL=search val
USER_ID=user id
PAGE_SIZE=page size
@ -213,27 +213,27 @@ QUERY_PROCESS_INSTANCE_BY_ID_NOTES=query process instance by process instance id
DELETE_PROCESS_INSTANCE_BY_ID_NOTES=delete process instance by process instance id
TASK_ID=task instance id
SKIP_LINE_NUM=skip line num
QUERY_TASK_INSTANCE_LOG_NOTES=query task instance log
QUERY_TASK_INSTANCE_LOG_NOTES=query task instance log
DOWNLOAD_TASK_INSTANCE_LOG_NOTES=download task instance log
USERS_TAG=users related operation
SCHEDULER_TAG=scheduler related operation
CREATE_SCHEDULE_NOTES=create schedule
CREATE_SCHEDULE_NOTES=create schedule
CREATE_USER_NOTES=create user
TENANT_ID=tenant id
QUEUE=queue
EMAIL=email
PHONE=phone
QUERY_USER_LIST_NOTES=query user list
QUERY_USER_LIST_NOTES=query user list
UPDATE_USER_NOTES=update user
DELETE_USER_BY_ID_NOTES=delete user by id
GRANT_PROJECT_NOTES=GRANT PROJECT
GRANT_PROJECT_NOTES=GRANT PROJECT
PROJECT_IDS=project ids(string format, multiple projects separated by ",")
GRANT_PROJECT_BY_CODE_NOTES=GRANT PROJECT BY CODE
REVOKE_PROJECT_NOTES=REVOKE PROJECT FOR USER
PROJECT_CODE=project code
GRANT_RESOURCE_NOTES=grant resource file
RESOURCE_IDS=resource ids(string format, multiple resources separated by ",")
GET_USER_INFO_NOTES=get user info
GET_USER_INFO_NOTES=get user info
LIST_USER_NOTES=list user
VERIFY_USER_NAME_NOTES=verify user name
UNAUTHORIZED_USER_NOTES=cancel authorization
@ -241,12 +241,12 @@ ALERT_GROUP_ID=alert group id
AUTHORIZED_USER_NOTES=authorized user
GRANT_UDF_FUNC_NOTES=grant udf function
UDF_IDS=udf ids(string format, multiple udf functions separated by ",")
GRANT_DATASOURCE_NOTES=grant datasource
GRANT_DATASOURCE_NOTES=grant datasource
DATASOURCE_IDS=datasource ids(string format, multiple datasources separated by ",")
QUERY_SUBPROCESS_INSTANCE_BY_TASK_ID_NOTES=query subprocess instance by task instance id
QUERY_PARENT_PROCESS_INSTANCE_BY_SUB_PROCESS_INSTANCE_ID_NOTES=query parent process instance info by sub process instance id
QUERY_PROCESS_INSTANCE_GLOBAL_VARIABLES_AND_LOCAL_VARIABLES_NOTES=query process instance global variables and local variables
VIEW_GANTT_NOTES=view gantt
VIEW_GANTT_NOTES=view gantt
SUB_PROCESS_INSTANCE_ID=sub process instance id
TASK_NAME=task instance name
TASK_INSTANCE_TAG=task instance related operation
@ -262,9 +262,9 @@ DATA_SOURCE_ID=DATA SOURCE ID
QUERY_DATA_SOURCE_NOTES=query data source by id
QUERY_DATA_SOURCE_LIST_BY_TYPE_NOTES=query data source list by database type
QUERY_DATA_SOURCE_LIST_PAGING_NOTES=query data source list paging
CONNECT_DATA_SOURCE_NOTES=CONNECT DATA SOURCE
CONNECT_DATA_SOURCE_TEST_NOTES=connect data source test
DELETE_DATA_SOURCE_NOTES=delete data source
CONNECT_DATA_SOURCE_NOTES=CONNECT DATA SOURCE
CONNECT_DATA_SOURCE_TEST_NOTES=connect data source test
DELETE_DATA_SOURCE_NOTES=delete data source
VERIFY_DATA_SOURCE_NOTES=verify data source
UNAUTHORIZED_DATA_SOURCE_NOTES=unauthorized data source
AUTHORIZED_DATA_SOURCE_NOTES=authorized data source
@ -299,5 +299,3 @@ OPERATION_TYPE=operation type
TASK_DEFINITION_TAG=task definition related operation
PROCESS_TASK_RELATION_TAG=process task relation related operation
ENVIRONMENT_TAG=environment related operation
GET_PROCESS_LIST_BY_PROCESS_CODE_NOTES=query process definition list by project code
GET_TASK_LIST_BY_PROCESS_CODE_NOTES=query task definition list by process definition code

View File

@ -358,5 +358,3 @@ OPERATION_TYPE=operation type
TASK_DEFINITION_TAG=task definition related operation
PROCESS_TASK_RELATION_TAG=process task relation related operation
ENVIRONMENT_TAG=environment related operation
GET_PROCESS_LIST_BY_PROCESS_CODE_NOTES=query process definition list by project code
GET_TASK_LIST_BY_PROCESS_CODE_NOTES=query task definition list by process definition code

View File

@ -355,5 +355,3 @@ OPERATION_TYPE=操作类型
TASK_DEFINITION_TAG=任务定义相关操作
PROCESS_TASK_RELATION_TAG=工作流关系相关操作
ENVIRONMENT_TAG=环境相关操作
GET_PROCESS_LIST_BY_PROCESS_CODE_NOTES=通过项目代码查询工作流定义
GET_TASK_LIST_BY_PROCESS_CODE_NOTES=通过工作流定义代码查询任务定义

View File

@ -28,12 +28,8 @@ import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.User;
import java.util.HashMap;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.mockito.Mockito;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;
@ -100,7 +96,7 @@ public class K8sNamespaceControllerTest extends AbstractControllerTest {
.andExpect(content().contentType(MediaType.APPLICATION_JSON))
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
Assert.assertEquals(Status.K8S_CLIENT_OPS_ERROR.getCode(), result.getCode().intValue());
logger.info("update queue return result:{}", mvcResult.getResponse().getContentAsString());
}
@ -141,7 +137,7 @@ public class K8sNamespaceControllerTest extends AbstractControllerTest {
}
@Test
public void deleteNamespaceById() throws Exception {
public void delNamespaceById() throws Exception {
MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
paramsMap.add("id", "1");
@ -153,42 +149,7 @@ public class K8sNamespaceControllerTest extends AbstractControllerTest {
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());//there is no k8s cluster in test env
logger.info(mvcResult.getResponse().getContentAsString());
}
@Test
public void testQueryUnauthorizedNamespace() throws Exception {
MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
paramsMap.add("userId", "1");
MvcResult mvcResult = mockMvc.perform(get("/k8s-namespace/unauth-namespace")
.header(SESSION_ID, sessionId)
.params(paramsMap))
.andExpect(status().isOk())
.andExpect(content().contentType(MediaType.APPLICATION_JSON))
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
logger.info(mvcResult.getResponse().getContentAsString());
}
@Test
public void testQueryAuthorizedNamespace() throws Exception {
MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
paramsMap.add("userId", "1");
MvcResult mvcResult = mockMvc.perform(get("/k8s-namespace/authed-namespace")
.header(SESSION_ID, sessionId)
.params(paramsMap))
.andExpect(status().isOk())
.andExpect(content().contentType(MediaType.APPLICATION_JSON))
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
Assert.assertEquals(Status.DELETE_K8S_NAMESPACE_BY_ID_ERROR.getCode(), result.getCode().intValue());//there is no k8s cluster in test env
logger.info(mvcResult.getResponse().getContentAsString());
}
}

View File

@ -137,11 +137,9 @@ public class ProjectControllerTest {
@Test
public void testQueryAllProjectList() {
User user = new User();
user.setId(0);
Map<String, Object> result = new HashMap<>();
putMsg(result, Status.SUCCESS);
Mockito.when(projectService.queryAllProjectList(user)).thenReturn(result);
Mockito.when(projectService.queryAllProjectList()).thenReturn(result);
Result response = projectController.queryAllProjectList(user);
Assert.assertEquals(Status.SUCCESS.getCode(), response.getCode().intValue());
}

View File

@ -18,7 +18,7 @@
package org.apache.dolphinscheduler.api.service;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.impl.K8SNamespaceServiceImpl;
import org.apache.dolphinscheduler.api.service.impl.K8sNameSpaceServiceImpl;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
@ -51,12 +51,12 @@ import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
@RunWith(MockitoJUnitRunner.class)
public class K8SNamespaceServiceTest {
public class K8sNameSpaceServiceTest {
private static final Logger logger = LoggerFactory.getLogger(K8SNamespaceServiceTest.class);
private static final Logger logger = LoggerFactory.getLogger(K8sNameSpaceServiceTest.class);
@InjectMocks
private K8SNamespaceServiceImpl k8sNamespaceService;
private K8sNameSpaceServiceImpl k8sNameSpaceService;
@Mock
private K8sNamespaceMapper k8sNamespaceMapper;
@ -86,7 +86,7 @@ public class K8SNamespaceServiceTest {
page.setTotal(1L);
page.setRecords(getNamespaceList());
Mockito.when(k8sNamespaceMapper.queryK8sNamespacePaging(Mockito.any(Page.class), Mockito.eq(namespace))).thenReturn(page);
Result result = k8sNamespaceService.queryListPaging(getLoginUser(), namespace, 1, 10);
Result result = k8sNameSpaceService.queryListPaging(getLoginUser(), namespace, 1, 10);
logger.info(result.toString());
PageInfo<K8sNamespace> pageInfo = (PageInfo<K8sNamespace>) result.getData();
Assert.assertTrue(CollectionUtils.isNotEmpty(pageInfo.getTotalList()));
@ -95,19 +95,19 @@ public class K8SNamespaceServiceTest {
@Test
public void createK8sNamespace() {
// namespace is null
Map<String, Object> result = k8sNamespaceService.createK8sNamespace(getLoginUser(), null, k8s, 10.0, 100);
Map<String, Object> result = k8sNameSpaceService.createK8sNamespace(getLoginUser(), null, k8s, null, "tag", 10.0, 100);
logger.info(result.toString());
Assert.assertEquals(Status.REQUEST_PARAMS_NOT_VALID_ERROR, result.get(Constants.STATUS));
// k8s is null
result = k8sNamespaceService.createK8sNamespace(getLoginUser(), namespace, null, 10.0, 100);
result = k8sNameSpaceService.createK8sNamespace(getLoginUser(), namespace, null, null, "tag", 10.0, 100);
logger.info(result.toString());
Assert.assertEquals(Status.REQUEST_PARAMS_NOT_VALID_ERROR, result.get(Constants.STATUS));
// correct
result = k8sNamespaceService.createK8sNamespace(getLoginUser(), namespace, k8s, 10.0, 100);
result = k8sNameSpaceService.createK8sNamespace(getLoginUser(), namespace, k8s, null, "tag", 10.0, 100);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
//null limit cpu and mem
result = k8sNamespaceService.createK8sNamespace(getLoginUser(), namespace, k8s, null, null);
result = k8sNameSpaceService.createK8sNamespace(getLoginUser(), namespace, k8s, null, "tag", null, null);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@ -116,15 +116,15 @@ public class K8SNamespaceServiceTest {
public void updateK8sNamespace() {
Mockito.when(k8sNamespaceMapper.selectById(1)).thenReturn(getNamespace());
Map<String, Object> result = k8sNamespaceService.updateK8sNamespace(getLoginUser(), 1, null, null, null);
Map<String, Object> result = k8sNameSpaceService.updateK8sNamespace(getLoginUser(), 1, null, "tag", null, null);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
result = k8sNamespaceService.updateK8sNamespace(getLoginUser(), 1, null, -1.0, 100);
result = k8sNameSpaceService.updateK8sNamespace(getLoginUser(), 1, null, "tag", -1.0, 100);
logger.info(result.toString());
Assert.assertEquals(Status.REQUEST_PARAMS_NOT_VALID_ERROR, result.get(Constants.STATUS));
result = k8sNamespaceService.updateK8sNamespace(getLoginUser(), 1, null, 1.0, 100);
result = k8sNameSpaceService.updateK8sNamespace(getLoginUser(), 1, null, "tag", 1.0, 100);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@ -135,22 +135,22 @@ public class K8SNamespaceServiceTest {
Mockito.when(k8sNamespaceMapper.existNamespace(namespace, k8s)).thenReturn(true);
//namespace null
Result result = k8sNamespaceService.verifyNamespaceK8s(null, k8s);
Result result = k8sNameSpaceService.verifyNamespaceK8s(null, k8s);
logger.info(result.toString());
Assert.assertEquals(result.getCode().intValue(), Status.REQUEST_PARAMS_NOT_VALID_ERROR.getCode());
//k8s null
result = k8sNamespaceService.verifyNamespaceK8s(namespace, null);
result = k8sNameSpaceService.verifyNamespaceK8s(namespace, null);
logger.info(result.toString());
Assert.assertEquals(result.getCode().intValue(), Status.REQUEST_PARAMS_NOT_VALID_ERROR.getCode());
//exist
result = k8sNamespaceService.verifyNamespaceK8s(namespace, k8s);
result = k8sNameSpaceService.verifyNamespaceK8s(namespace, k8s);
logger.info(result.toString());
Assert.assertEquals(result.getCode().intValue(), Status.K8S_NAMESPACE_EXIST.getCode());
//not exist
result = k8sNamespaceService.verifyNamespaceK8s(namespace, "other k8s");
result = k8sNameSpaceService.verifyNamespaceK8s(namespace, "other k8s");
logger.info(result.toString());
Assert.assertEquals(result.getCode().intValue(), Status.SUCCESS.getCode());
}
@ -160,57 +160,11 @@ public class K8SNamespaceServiceTest {
Mockito.when(k8sNamespaceMapper.deleteById(Mockito.any())).thenReturn(1);
Mockito.when(k8sNamespaceMapper.selectById(1)).thenReturn(getNamespace());
Map<String, Object> result = k8sNamespaceService.deleteNamespaceById(getLoginUser(), 1);
Map<String, Object> result = k8sNameSpaceService.deleteNamespaceById(getLoginUser(), 1);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testQueryAuthorizedNamespace() {
Mockito.when(k8sNamespaceMapper.queryAuthedNamespaceListByUserId(2)).thenReturn(getNamespaceList());
User loginUser = getLoginUser();
// test admin user
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> result = k8sNamespaceService.queryAuthorizedNamespace(loginUser, 2);
logger.info(result.toString());
List<K8sNamespace> namespaces = (List<K8sNamespace>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isNotEmpty(namespaces));
// test non-admin user
loginUser.setUserType(UserType.GENERAL_USER);
loginUser.setId(3);
result = k8sNamespaceService.queryAuthorizedNamespace(loginUser, 2);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, result.get(Constants.STATUS));
namespaces = (List<K8sNamespace>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isEmpty(namespaces));
}
@Test
public void testQueryUnAuthorizedNamespace() {
Mockito.when(k8sNamespaceMapper.queryAuthedNamespaceListByUserId(2)).thenReturn(new ArrayList<>());
Mockito.when(k8sNamespaceMapper.selectList(Mockito.any())).thenReturn(getNamespaceList());
// test admin user
User loginUser = new User();
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> result = k8sNamespaceService.queryUnauthorizedNamespace(loginUser, 2);
logger.info(result.toString());
List<K8sNamespace> namespaces = (List<K8sNamespace>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isNotEmpty(namespaces));
// test non-admin user
loginUser.setId(2);
loginUser.setUserType(UserType.GENERAL_USER);
result = k8sNamespaceService.queryUnauthorizedNamespace(loginUser, 3);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, result.get(Constants.STATUS));
namespaces = (List<K8sNamespace>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isEmpty(namespaces));
}
private User getLoginUser() {
User loginUser = new User();

View File

@ -48,7 +48,6 @@ import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.TenantMapper;
@@ -122,10 +121,6 @@ public class ProcessInstanceServiceTest {
@Mock
TaskPluginManager taskPluginManager;
@Mock
ScheduleMapper scheduleMapper;
private String shellJson = "[{\"name\":\"\",\"preTaskCode\":0,\"preTaskVersion\":0,\"postTaskCode\":123456789,"
+ "\"postTaskVersion\":1,\"conditionType\":0,\"conditionParams\":\"{}\"},{\"name\":\"\",\"preTaskCode\":123456789,"
+ "\"preTaskVersion\":1,\"postTaskCode\":123451234,\"postTaskVersion\":1,\"conditionType\":0,\"conditionParams\":\"{}\"}]";

View File

@@ -345,11 +345,9 @@ public class ProjectServiceTest {
@Test
public void testQueryAllProjectList() {
Mockito.when(projectMapper.queryAllProject(0)).thenReturn(getList());
Mockito.when(projectMapper.queryAllProject()).thenReturn(getList());
User user = new User();
user.setId(0);
Map<String, Object> result = projectService.queryAllProjectList(user);
Map<String, Object> result = projectService.queryAllProjectList();
logger.info(result.toString());
List<Project> projects = (List<Project>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isNotEmpty(projects));

View File

@@ -90,9 +90,6 @@ public class UsersServiceTest {
@Mock
private UDFUserMapper udfUserMapper;
@Mock
private K8sNamespaceUserMapper k8sNamespaceUserMapper;
@Mock
private ProjectMapper projectMapper;
@@ -437,24 +434,6 @@ public class UsersServiceTest {
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testGrantNamespaces() {
String namespaceIds = "100000,120000";
when(userMapper.selectById(1)).thenReturn(getUser());
User loginUser = new User();
//user not exist
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> result = usersService.grantNamespaces(loginUser, 2, namespaceIds);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NOT_EXIST, result.get(Constants.STATUS));
//success
when(k8sNamespaceUserMapper.deleteNamespaceRelation(0,1)).thenReturn(1);
result = usersService.grantNamespaces(loginUser, 1, namespaceIds);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testGrantDataSource() {
String datasourceIds = "100000,120000";

View File

@@ -28,10 +28,10 @@ public class RegexUtilsTest {
@Test
public void testIsValidLinuxUserName() {
String name1 = "10000";
Assert.assertTrue(RegexUtils.isValidLinuxUserName(name1));
Assert.assertFalse(RegexUtils.isValidLinuxUserName(name1));
String name2 = "00hayden";
Assert.assertTrue(RegexUtils.isValidLinuxUserName(name2));
Assert.assertFalse(RegexUtils.isValidLinuxUserName(name2));
String name3 = "hayde123456789123456789123456789";
Assert.assertFalse(RegexUtils.isValidLinuxUserName(name3));
@@ -44,12 +44,6 @@ public class RegexUtilsTest {
String name6 = "hayden";
Assert.assertTrue(RegexUtils.isValidLinuxUserName(name6));
String name7 = "00hayden_0";
Assert.assertTrue(RegexUtils.isValidLinuxUserName(name2));
String name8 = "00hayden.8";
Assert.assertTrue(RegexUtils.isValidLinuxUserName(name2));
}
@Test
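The flipped assertions for name1 and name2 are consistent with the conventional Linux username rule: the name must not start with a digit and is bounded in length (the 32-character name3 fails, the plain "hayden" passes). Below is a sketch of a validator that matches these test vectors; the actual pattern in RegexUtils may differ, so treat the regex as an assumption.

```java
import java.util.regex.Pattern;

public final class LinuxUserNameCheck {

    // Assumed rule: starts with a lowercase letter or underscore, then up to
    // 30 more of [a-z0-9_.-], i.e. 31 characters total. The real pattern in
    // RegexUtils may differ slightly.
    private static final Pattern LINUX_USER_NAME =
            Pattern.compile("^[a-z_][a-z0-9_.-]{0,30}$");

    public static boolean isValidLinuxUserName(String name) {
        return name != null && LINUX_USER_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidLinuxUserName("10000"));   // false: starts with a digit
        System.out.println(isValidLinuxUserName("hayden"));  // true
        System.out.println(isValidLinuxUserName("hayde123456789123456789123456789")); // false: 32 chars
    }
}
```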

View File

@@ -120,7 +120,7 @@ public final class Constants {
/**
* environment properties default path
*/
public static final String ENV_PATH = "dolphinscheduler_env.sh";
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* resource.view.suffixs
@@ -816,10 +816,4 @@ public final class Constants {
public static final String K8S = "k8s";
public static final String LIMITS_CPU = "limitsCpu";
public static final String LIMITS_MEMORY = "limitsMemory";
public static final String K8S_LOCAL_TEST_CLUSTER = "ds_null_k8s";
/**
* schedule timezone
*/
public static final String SCHEDULE_TIMEZONE = "schedule_timezone";
}

View File

@@ -25,9 +25,7 @@ import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_STORAGE_TYPE;
import static org.apache.dolphinscheduler.common.Constants.STORAGE_HDFS;
import static org.apache.dolphinscheduler.common.Constants.STORAGE_S3;
import static org.apache.dolphinscheduler.common.Constants.*;
/**

View File

@@ -1,43 +0,0 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.exception;
/**
* exception for store
*/
public class StorageOperateNoConfiguredException extends RuntimeException {
public StorageOperateNoConfiguredException() {
}
public StorageOperateNoConfiguredException(String message) {
super(message);
}
public StorageOperateNoConfiguredException(String message, Throwable cause) {
super(message, cause);
}
public StorageOperateNoConfiguredException(Throwable cause) {
super(cause);
}
public StorageOperateNoConfiguredException(String message, Throwable cause, boolean enableSuppression, boolean writableStackTrace) {
super(message, cause, enableSuppression, writableStackTrace);
}
}

View File

@@ -30,11 +30,8 @@ import org.apache.dolphinscheduler.common.storage.StorageOperate;
import org.apache.dolphinscheduler.plugin.task.api.enums.ExecutionStatus;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.cli.RMAdminCLI;
@@ -52,10 +49,7 @@ import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.apache.dolphinscheduler.common.Constants.FOLDER_SEPARATOR;
import static org.apache.dolphinscheduler.common.Constants.FORMAT_S_S;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_TYPE_FILE;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_TYPE_UDF;
import static org.apache.dolphinscheduler.common.Constants.*;
/**
* hadoop utils

View File

@@ -88,7 +88,7 @@ public class ParameterUtils {
* @return curing user define parameters
*/
public static String curingGlobalParams(Map<String, String> globalParamMap, List<Property> globalParamList,
CommandType commandType, Date scheduleTime, String timezone) {
CommandType commandType, Date scheduleTime) {
if (globalParamList == null || globalParamList.isEmpty()) {
return null;
@@ -101,7 +101,7 @@
Map<String, String> allParamMap = new HashMap<>();
//If it is a complement, a complement time needs to be passed in, according to the task type
Map<String, String> timeParams = BusinessTimeUtils.
getBusinessTime(commandType, scheduleTime, timezone);
getBusinessTime(commandType, scheduleTime);
if (timeParams != null) {
allParamMap.putAll(timeParams);
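For context, curingGlobalParams folds the generated business-time parameters into the user-defined global parameters before a task is dispatched. A simplified stand-alone sketch of that merge follows; the types, key names, and the assumption that user-defined values override generated ones on collision are illustrative, not the real ParameterUtils internals.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public final class CuringSketch {

    // Simplified "curing": fill generated time parameters first, then let
    // user-defined globals take precedence on key collisions (assumption).
    static Map<String, String> curing(Map<String, String> userGlobals, Date scheduleTime) {
        Map<String, String> all = new LinkedHashMap<>();
        // Generated business-time parameters (key names and formats are illustrative).
        Date businessDate = new Date(scheduleTime.getTime() - 24L * 3600 * 1000);
        all.put("system.biz.date", new SimpleDateFormat("yyyyMMdd").format(businessDate));
        all.put("system.datetime", new SimpleDateFormat("yyyyMMddHHmmss").format(scheduleTime));
        // User-defined globals win on collisions.
        all.putAll(userGlobals);
        return all;
    }

    public static void main(String[] args) {
        Map<String, String> globals = new HashMap<>();
        globals.put("dt", "${system.biz.date}");
        System.out.println(curing(globals, new Date()));
    }
}
```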

View File

@@ -69,7 +69,7 @@
*/
public static boolean getResUploadStartupState() {
String resUploadStartupType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
ResUploadType resUploadType = ResUploadType.valueOf(StringUtils.isEmpty(resUploadStartupType) ? ResUploadType.NONE.name() : resUploadStartupType);
ResUploadType resUploadType = ResUploadType.valueOf(resUploadStartupType);
return resUploadType != ResUploadType.NONE;
}
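The two variants of getResUploadStartupState in this hunk differ in how a missing resource.storage.type property is handled: Enum.valueOf throws NullPointerException on null input and IllegalArgumentException on unknown names, while the guarded form falls back to NONE. A small stand-alone demonstration of that behavior (the enum here is a local stand-in for ResUploadType):

```java
public final class ValueOfSketch {

    enum ResUploadType { NONE, HDFS, S3 }

    public static void main(String[] args) {
        String raw = null; // e.g. the property is not set

        // Guarded form: default to NONE when the property is absent.
        ResUploadType guarded =
                ResUploadType.valueOf(raw == null || raw.isEmpty() ? ResUploadType.NONE.name() : raw);
        System.out.println(guarded); // NONE

        // Unguarded form: valueOf(null) throws NullPointerException,
        // and an unrecognized name throws IllegalArgumentException.
        try {
            ResUploadType.valueOf(raw);
        } catch (NullPointerException e) {
            System.out.println("valueOf(null) -> NPE");
        }
    }
}
```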

View File

@@ -24,11 +24,7 @@ import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AmazonS3Exception;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.amazonaws.services.s3.model.*;
import com.amazonaws.services.s3.transfer.MultipleFileDownload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
@@ -41,28 +37,13 @@ import org.jets3t.service.ServiceException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.Closeable;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.*;
import java.util.Collections;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.apache.dolphinscheduler.common.Constants.AWS_END_POINT;
import static org.apache.dolphinscheduler.common.Constants.BUCKET_NAME;
import static org.apache.dolphinscheduler.common.Constants.FOLDER_SEPARATOR;
import static org.apache.dolphinscheduler.common.Constants.FORMAT_S_S;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_STORAGE_TYPE;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_TYPE_FILE;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_TYPE_UDF;
import static org.apache.dolphinscheduler.common.Constants.STORAGE_S3;
import static org.apache.dolphinscheduler.common.Constants.*;
public class S3Utils implements Closeable, StorageOperate {

View File

@@ -45,7 +45,7 @@ public class BusinessTimeUtils {
* @param runTime run time or schedule time
* @return business time
*/
public static Map<String, String> getBusinessTime(CommandType commandType, Date runTime, String timezone) {
public static Map<String, String> getBusinessTime(CommandType commandType, Date runTime) {
Date businessDate = runTime;
Map<String, String> result = new HashMap<>();
switch (commandType) {
@@ -71,9 +71,9 @@
break;
}
Date businessCurrentDate = addDays(businessDate, 1);
result.put(Constants.PARAMETER_CURRENT_DATE, format(businessCurrentDate, PARAMETER_FORMAT_DATE, timezone));
result.put(Constants.PARAMETER_BUSINESS_DATE, format(businessDate, PARAMETER_FORMAT_DATE, timezone));
result.put(Constants.PARAMETER_DATETIME, format(businessCurrentDate, PARAMETER_FORMAT_TIME, timezone));
result.put(Constants.PARAMETER_CURRENT_DATE, format(businessCurrentDate, PARAMETER_FORMAT_DATE, null));
result.put(Constants.PARAMETER_BUSINESS_DATE, format(businessDate, PARAMETER_FORMAT_DATE, null));
result.put(Constants.PARAMETER_DATETIME, format(businessCurrentDate, PARAMETER_FORMAT_TIME, null));
return result;
}
}
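The two sides of this hunk differ only in whether an explicit timezone reaches format; when null is passed, formatting presumably falls back to the JVM default zone, so the rendered business date can vary by host. A hedged sketch of that effect follows; the format helper here is a local stand-in, assuming the null case falls back to the default zone as the signature suggests.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public final class DefaultZoneSketch {

    // Stand-in for the date-formatting helper: null timezone -> JVM default.
    static String format(Date date, String pattern, String timezone) {
        SimpleDateFormat sdf = new SimpleDateFormat(pattern);
        sdf.setTimeZone(timezone == null ? TimeZone.getDefault() : TimeZone.getTimeZone(timezone));
        return sdf.format(date);
    }

    public static void main(String[] args) {
        Date now = new Date();
        // The same instant can render as different business dates
        // depending on the zone in effect.
        System.out.println(format(now, "yyyyMMdd", null));            // JVM default zone
        System.out.println(format(now, "yyyyMMdd", "UTC"));           // explicit zone
        System.out.println(format(now, "yyyyMMdd", "Asia/Shanghai")); // explicit zone
    }
}
```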

Some files were not shown because too many files have changed in this diff.