Dataset schema (column: type, observed range):

- status: string (1 distinct value)
- repo_name: string (31 distinct values)
- repo_url: string (31 distinct values)
- issue_id: int64 (1 to 104k)
- title: string (length 4 to 233)
- body: string (length 0 to 186k, nullable)
- issue_url: string (length 38 to 56)
- pull_url: string (length 37 to 54)
- before_fix_sha: string (length 40)
- after_fix_sha: string (length 40)
- report_datetime: timestamp[us, tz=UTC]
- language: string (5 distinct values)
- commit_datetime: timestamp[us, tz=UTC]
- updated_file: string (length 7 to 188)
- chunk_content: string (length 1 to 1.03M)
Row 1

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 12368
title: [Bug] [Task plugin] datax task error
body:
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened

```
[LOG-PATH]: /opt/dolphinscheduler/worker-server/logs/20221014/7195111179040_2-15-22.log, [HOST]: Host{address='172.16.10.15:1234', ip='172.16.10.15', port=1234}
[INFO] 2022-10-14 08:44:09.886 +0800 - Begin to pulling task
[INFO] 2022-10-14 08:44:09.887 +0800 - Begin to initialize task
[INFO] 2022-10-14 08:44:09.887 +0800 - Set task startTime: Fri Oct 14 08:44:09 CST 2022
[INFO] 2022-10-14 08:44:09.887 +0800 - Set task envFile: /opt/dolphinscheduler/worker-server/conf/dolphinscheduler_env.sh
[INFO] 2022-10-14 08:44:09.887 +0800 - Set task appId: 15_22
[INFO] 2022-10-14 08:44:09.887 +0800 - End initialize task
[INFO] 2022-10-14 08:44:09.888 +0800 - Set task status to TaskExecutionStatus{code=1, desc='running'}
[INFO] 2022-10-14 08:44:09.888 +0800 - TenantCode:root check success
[INFO] 2022-10-14 08:44:09.888 +0800 - ProcessExecDir:/tmp/dolphinscheduler/exec/process/7193666667040/7195111179040_2/15/22 check success
[INFO] 2022-10-14 08:44:09.888 +0800 - Resources:{} check success
[INFO] 2022-10-14 08:44:09.889 +0800 - Task plugin: DATAX create success
[INFO] 2022-10-14 08:44:09.889 +0800 - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":1,"dtType":"MYSQL","dataTarget":1,"sql":"select id,name from ods.ods_jdx_site","targetTable":"ods_jdx_site_copy1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[INFO] 2022-10-14 08:44:09.889 +0800 - Success initialized task plugin instance success
[INFO] 2022-10-14 08:44:09.889 +0800 - Success set taskVarPool: null
[ERROR] 2022-10-14 08:44:09.890 +0800 - datax task error
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.addCustomParameters(DataxTask.java:426)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.buildShellCommandFile(DataxTask.java:400)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.handle(DataxTask.java:157)
at org.apache.dolphinscheduler.server.worker.runner.DefaultWorkerDelayTaskExecuteRunnable.executeTask(DefaultWorkerDelayTaskExecuteRunnable.java:48)
at org.apache.dolphinscheduler.server.worker.runner.WorkerTaskExecuteRunnable.run(WorkerTaskExecuteRunnable.java:151)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[ERROR] 2022-10-14 08:44:09.890 +0800 - Task execute failed, due to meet an exception
org.apache.dolphinscheduler.plugin.task.api.TaskException: Execute DataX task failed
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.handle(DataxTask.java:171)
at org.apache.dolphinscheduler.server.worker.runner.DefaultWorkerDelayTaskExecuteRunnable.executeTask(DefaultWorkerDelayTaskExecuteRunnable.java:48)
at org.apache.dolphinscheduler.server.worker.runner.WorkerTaskExecuteRunnable.run(WorkerTaskExecuteRunnable.java:151)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.addCustomParameters(DataxTask.java:426)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.buildShellCommandFile(DataxTask.java:400)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.handle(DataxTask.java:157)
... 9 common frames omitted
[INFO] 2022-10-14 08:44:10.900 +0800 - Get a exception when execute the task, will send the task execute result to master, the current task execute result is TaskExecutionStatus{code=6, desc='failure'}
```
### What you expected to happen

The task should run successfully.

### How to reproduce

Create a DataX task that copies data from a MySQL source to a MySQL target (as in the params above), then run it.
### Anything else
DATAX_HOME=/opt/datax
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
issue_url: https://github.com/apache/dolphinscheduler/issues/12368
pull_url: https://github.com/apache/dolphinscheduler/pull/12388
before_fix_sha: fccbe5593ad2ceb1899524440858c938ef1ae98c
after_fix_sha: 3bef85f546e5ebb9d4c91e48c515756286631069
report_datetime: 2022-10-14T01:02:58Z
language: java
commit_datetime: 2022-10-18T01:03:16Z
updated_file: dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxTask.java
chunk_content:
```java
 * constructor
 *
 * @param taskExecutionContext taskExecutionContext
 */
public DataxTask(TaskExecutionContext taskExecutionContext) {
    super(taskExecutionContext);
    this.taskExecutionContext = taskExecutionContext;
    this.shellCommandExecutor = new ShellCommandExecutor(this::logHandle,
            taskExecutionContext, logger);
}

/**
 * init DataX config
 */
@Override
public void init() {
    logger.info("datax task params {}", taskExecutionContext.getTaskParams());
    dataXParameters = JSONUtils.parseObject(taskExecutionContext.getTaskParams(), DataxParameters.class);
    if (!dataXParameters.checkParameters()) {
        throw new RuntimeException("datax task params is not valid");
    }
    dataxTaskExecutionContext = dataXParameters.generateExtendedContext(taskExecutionContext.getResourceParametersHelper());
}

/**
 * run DataX process
 *
 * @throws Exception if error throws Exception
 */
@Override
public void handle(TaskCallBack taskCallBack) throws TaskException {
    try {
```
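The init() excerpt above hands the raw task-params JSON (the exact string visible in the worker log) to JSONUtils.parseObject to build a DataxParameters object. As a rough standalone sketch, assuming plain Jackson in place of the project's JSONUtils and a hypothetical trimmed-down stand-in for DataxParameters, this is how those params deserialize; customConfig = 0 is what later routes buildDataxJsonFile down the generated-JSON branch instead of the user-supplied-JSON branch:

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DataxParamsProbe {

    // Hypothetical subset of DataxParameters; only the fields needed for this illustration.
    @JsonIgnoreProperties(ignoreUnknown = true)
    static class ParamsStandIn {
        public int customConfig;
        public String dsType;
        public String dtType;
        public String sql;
        public String targetTable;
    }

    public static void main(String[] args) throws Exception {
        // Trimmed copy of the "datax task params" JSON from the issue log.
        String taskParams = "{\"customConfig\":0,\"dsType\":\"MYSQL\",\"dtType\":\"MYSQL\","
                + "\"sql\":\"select id,name from ods.ods_jdx_site\",\"targetTable\":\"ods_jdx_site_copy1\"}";
        ParamsStandIn p = new ObjectMapper().readValue(taskParams, ParamsStandIn.class);
        // customConfig == 0 means the task did not supply its own job JSON: DataxTask generates
        // the DataX job JSON itself and then builds the shell command, where the reported NPE occurs.
        System.out.println(p.customConfig + " " + p.dsType + " -> " + p.targetTable);
    }
}
```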
Row 2 — status, repository, issue metadata, and issue body identical to Row 1 (apache/dolphinscheduler#12368, updated_file DataxTask.java); only chunk_content differs.

chunk_content (continuing the DataxTask.java excerpt):
```java
        Map<String, Property> paramsMap = taskExecutionContext.getPrepareParamsMap();
        String jsonFilePath = buildDataxJsonFile(paramsMap);
        String shellCommandFilePath = buildShellCommandFile(jsonFilePath, paramsMap);
        TaskResponse commandExecuteResult = shellCommandExecutor.run(shellCommandFilePath);
        setExitStatusCode(commandExecuteResult.getExitStatusCode());
        setProcessId(commandExecuteResult.getProcessId());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        logger.error("The current DataX task has been interrupted", e);
        setExitStatusCode(EXIT_CODE_FAILURE);
        throw new TaskException("The current DataX task has been interrupted", e);
    } catch (Exception e) {
        logger.error("datax task error", e);
        setExitStatusCode(EXIT_CODE_FAILURE);
        throw new TaskException("Execute DataX task failed", e);
    }
}

/**
 * cancel DataX process
 *
 * @throws TaskException if error throws Exception
 */
@Override
public void cancel() throws TaskException {
    try {
        shellCommandExecutor.cancelApplication();
    } catch (Exception e) {
```
Row 3 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
        throw new TaskException("cancel application error", e);
    }
}

/**
 * build datax configuration file
 *
 * @return datax json file name
 * @throws Exception if error throws Exception
 */
private String buildDataxJsonFile(Map<String, Property> paramsMap)
        throws Exception {
    String fileName = String.format("%s/%s_job.json",
            taskExecutionContext.getExecutePath(),
            taskExecutionContext.getTaskAppId());
    String json;

    Path path = new File(fileName).toPath();
    if (Files.exists(path)) {
        return fileName;
    }

    if (dataXParameters.getCustomConfig() == Flag.YES.ordinal()) {
        json = dataXParameters.getJson().replaceAll("\\r\\n", "\n");
    } else {
        ObjectNode job = JSONUtils.createObjectNode();
        job.putArray("content").addAll(buildDataxJobContentJson());
        job.set("setting", buildDataxJobSettingJson());

        ObjectNode root = JSONUtils.createObjectNode();
        root.set("job", job);
        root.set("core", buildDataxCoreJson());
        json = root.toString();
```
Row 4 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
    }
    json = ParameterUtils.convertParameterPlaceholders(json, ParamUtils.convert(paramsMap));
    logger.debug("datax job json : {}", json);

    FileUtils.writeStringToFile(new File(fileName), json, StandardCharsets.UTF_8);
    return fileName;
}

/**
 * build datax job config
 *
 * @return collection of datax job config JSONObject
 * @throws SQLException if error throws SQLException
 */
private List<ObjectNode> buildDataxJobContentJson() {
    BaseConnectionParam dataSourceCfg = (BaseConnectionParam) DataSourceUtils.buildConnectionParams(
            dataxTaskExecutionContext.getSourcetype(),
            dataxTaskExecutionContext.getSourceConnectionParams());
    BaseConnectionParam dataTargetCfg = (BaseConnectionParam) DataSourceUtils.buildConnectionParams(
            dataxTaskExecutionContext.getTargetType(),
            dataxTaskExecutionContext.getTargetConnectionParams());

    List<ObjectNode> readerConnArr = new ArrayList<>();
    ObjectNode readerConn = JSONUtils.createObjectNode();
    ArrayNode sqlArr = readerConn.putArray("querySql");
    for (String sql : new String[]{dataXParameters.getSql()}) {
        sqlArr.add(sql);
    }
    ArrayNode urlArr = readerConn.putArray("jdbcUrl");
    urlArr.add(DataSourceUtils.getJdbcUrl(DbType.valueOf(dataXParameters.getDsType()), dataSourceCfg));
    readerConnArr.add(readerConn);
```
Row 5 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
    ObjectNode readerParam = JSONUtils.createObjectNode();
    readerParam.put("username", dataSourceCfg.getUser());
    readerParam.put("password", decodePassword(dataSourceCfg.getPassword()));
    readerParam.putArray("connection").addAll(readerConnArr);

    ObjectNode reader = JSONUtils.createObjectNode();
    reader.put("name", DataxUtils.getReaderPluginName(dataxTaskExecutionContext.getSourcetype()));
    reader.set("parameter", readerParam);

    List<ObjectNode> writerConnArr = new ArrayList<>();
    ObjectNode writerConn = JSONUtils.createObjectNode();
    ArrayNode tableArr = writerConn.putArray("table");
    tableArr.add(dataXParameters.getTargetTable());
    writerConn.put("jdbcUrl", DataSourceUtils.getJdbcUrl(DbType.valueOf(dataXParameters.getDtType()), dataTargetCfg));
    writerConnArr.add(writerConn);

    ObjectNode writerParam = JSONUtils.createObjectNode();
    writerParam.put("username", dataTargetCfg.getUser());
    writerParam.put("password", decodePassword(dataTargetCfg.getPassword()));

    String[] columns = parsingSqlColumnNames(dataxTaskExecutionContext.getSourcetype(),
            dataxTaskExecutionContext.getTargetType(),
            dataSourceCfg, dataXParameters.getSql());

    ArrayNode columnArr = writerParam.putArray("column");
    for (String column : columns) {
        columnArr.add(column);
    }
    writerParam.putArray("connection").addAll(writerConnArr);

    if (CollectionUtils.isNotEmpty(dataXParameters.getPreStatements())) {
        ArrayNode preSqlArr = writerParam.putArray("preSql");
        for (String preSql : dataXParameters.getPreStatements()) {
            preSqlArr.add(preSql);
        }
    }
```
Row 6 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
    if (CollectionUtils.isNotEmpty(dataXParameters.getPostStatements())) {
        ArrayNode postSqlArr = writerParam.putArray("postSql");
        for (String postSql : dataXParameters.getPostStatements()) {
            postSqlArr.add(postSql);
        }
    }

    ObjectNode writer = JSONUtils.createObjectNode();
    writer.put("name", DataxUtils.getWriterPluginName(dataxTaskExecutionContext.getTargetType()));
    writer.set("parameter", writerParam);

    List<ObjectNode> contentList = new ArrayList<>();
    ObjectNode content = JSONUtils.createObjectNode();
    content.set("reader", reader);
    content.set("writer", writer);
    contentList.add(content);
    return contentList;
}

/**
 * build datax setting config
 *
 * @return datax setting config JSONObject
 */
private ObjectNode buildDataxJobSettingJson() {
    ObjectNode speed = JSONUtils.createObjectNode();
    speed.put("channel", DATAX_CHANNEL_COUNT);
    if (dataXParameters.getJobSpeedByte() > 0) {
        speed.put("byte", dataXParameters.getJobSpeedByte());
    }
    if (dataXParameters.getJobSpeedRecord() > 0) {
        speed.put("record", dataXParameters.getJobSpeedRecord());
    }
```
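The chunks above assemble the reader and writer halves of a single DataX "content" element with Jackson ObjectNodes. A self-contained sketch, assuming plain Jackson instead of the project's JSONUtils/DataSourceUtils helpers and using placeholder plugin names, JDBC URLs, and credentials (none of these values are taken from the fix), makes the resulting JSON shape easier to see for the MYSQL-to-MYSQL case in the issue:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DataxContentShape {
    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        // Reader side: querySql + jdbcUrl arrays wrapped in a "connection" array.
        ObjectNode readerConn = mapper.createObjectNode();
        readerConn.putArray("querySql").add("select id,name from ods.ods_jdx_site");
        readerConn.putArray("jdbcUrl").add("jdbc:mysql://source-host:3306/ods");   // placeholder URL

        ObjectNode readerParam = mapper.createObjectNode();
        readerParam.put("username", "reader_user");                                // placeholder credentials
        readerParam.put("password", "******");
        readerParam.putArray("connection").add(readerConn);

        ObjectNode reader = mapper.createObjectNode();
        reader.put("name", "mysqlreader");      // placeholder for DataxUtils.getReaderPluginName(MYSQL)
        reader.set("parameter", readerParam);

        // Writer side: target table, columns parsed from the SQL, and the target connection.
        ObjectNode writerConn = mapper.createObjectNode();
        writerConn.putArray("table").add("ods_jdx_site_copy1");
        writerConn.put("jdbcUrl", "jdbc:mysql://target-host:3306/ods");            // placeholder URL

        ObjectNode writerParam = mapper.createObjectNode();
        writerParam.put("username", "writer_user");
        writerParam.put("password", "******");
        writerParam.putArray("column").add("id").add("name");
        writerParam.putArray("connection").add(writerConn);

        ObjectNode writer = mapper.createObjectNode();
        writer.put("name", "mysqlwriter");      // placeholder for DataxUtils.getWriterPluginName(MYSQL)
        writer.set("parameter", writerParam);

        // One "content" element pairs the reader with the writer, as contentList does above.
        ObjectNode content = mapper.createObjectNode();
        content.set("reader", reader);
        content.set("writer", writer);
        System.out.println(content.toPrettyString());
    }
}
```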
Row 7 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
    ObjectNode errorLimit = JSONUtils.createObjectNode();
    errorLimit.put("record", 0);
    errorLimit.put("percentage", 0);

    ObjectNode setting = JSONUtils.createObjectNode();
    setting.set("speed", speed);
    setting.set("errorLimit", errorLimit);
    return setting;
}

private ObjectNode buildDataxCoreJson() {
    ObjectNode speed = JSONUtils.createObjectNode();
    speed.put("channel", DATAX_CHANNEL_COUNT);
    if (dataXParameters.getJobSpeedByte() > 0) {
        speed.put("byte", dataXParameters.getJobSpeedByte());
    }
    if (dataXParameters.getJobSpeedRecord() > 0) {
        speed.put("record", dataXParameters.getJobSpeedRecord());
    }

    ObjectNode channel = JSONUtils.createObjectNode();
    channel.set("speed", speed);

    ObjectNode transport = JSONUtils.createObjectNode();
    transport.set("channel", channel);

    ObjectNode core = JSONUtils.createObjectNode();
    core.set("transport", transport);
    return core;
}

/**
 * create command
 *
 * @return shell command file name
 * @throws Exception if error throws Exception
```
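buildDataxJobSettingJson and buildDataxCoreJson above produce two small JSON fragments: the job-level setting (speed plus an errorLimit of zero) and core.transport.channel.speed. A compact standalone sketch, again assuming plain Jackson rather than JSONUtils, and substituting the issue's jobSpeedRecord of 1000 and a placeholder channel count for DATAX_CHANNEL_COUNT:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class DataxSettingShape {
    public static void main(String[] args) {
        ObjectMapper mapper = new ObjectMapper();

        ObjectNode speed = mapper.createObjectNode();
        speed.put("channel", 1);       // placeholder for DATAX_CHANNEL_COUNT
        speed.put("record", 1000);     // jobSpeedRecord from the logged params (jobSpeedByte is 0, so "byte" is omitted)

        ObjectNode errorLimit = mapper.createObjectNode();
        errorLimit.put("record", 0);
        errorLimit.put("percentage", 0);

        ObjectNode setting = mapper.createObjectNode();
        setting.set("speed", speed);
        setting.set("errorLimit", errorLimit);

        ObjectNode channel = mapper.createObjectNode();
        channel.set("speed", speed);
        ObjectNode transport = mapper.createObjectNode();
        transport.set("channel", channel);
        ObjectNode core = mapper.createObjectNode();
        core.set("transport", transport);

        System.out.println(setting.toPrettyString());   // {"speed":{...},"errorLimit":{...}}
        System.out.println(core.toPrettyString());      // {"transport":{"channel":{"speed":{...}}}}
    }
}
```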
Row 8 — metadata and issue body identical to Row 1; chunk_content (continued):
```java
 */
private String buildShellCommandFile(String jobConfigFilePath, Map<String, Property> paramsMap)
        throws Exception {
    String fileName = String.format("%s/%s_node.%s",
            taskExecutionContext.getExecutePath(),
            taskExecutionContext.getTaskAppId(),
            SystemUtils.IS_OS_WINDOWS ? "bat" : "sh");

    Path path = new File(fileName).toPath();
    if (Files.exists(path)) {
        return fileName;
    }

    StringBuilder sbr = new StringBuilder();
    sbr.append(getPythonCommand());
    sbr.append(" ");
    sbr.append(DATAX_PATH);
    sbr.append(" ");
    sbr.append(loadJvmEnv(dataXParameters));
    sbr.append(addCustomParameters(paramsMap));
    sbr.append(" ");
    sbr.append(jobConfigFilePath);

    String dataxCommand = ParameterUtils.convertParameterPlaceholders(sbr.toString(), ParamUtils.convert(paramsMap));
    logger.debug("raw script : {}", dataxCommand);

    Set<PosixFilePermission> perms = PosixFilePermissions.fromString(RWXR_XR_X);
    FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(perms);
    if (SystemUtils.IS_OS_WINDOWS) {
        Files.createFile(path);
```
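buildShellCommandFile above concatenates, in order: the python command, the DataX launcher path, JVM arguments, the -p"..." custom-parameter block from addCustomParameters, and the job JSON path. A small sketch of the resulting one-line command, where every concrete value is an illustrative assumption (the real DATAX_PATH, loadJvmEnv output, and CUSTOM_PARAM format are constants defined elsewhere in the plugin; only the execute path and task appId are taken from the log):

```java
public class DataxCommandShape {
    public static void main(String[] args) {
        String pythonCommand = "python";                                    // assumed fallback of getPythonCommand()
        String dataxPath = "${DATAX_HOME}/bin/datax.py";                    // assumed value of DATAX_PATH
        String jvmArgs = "--jvm=\"-Xms1G -Xmx1G\" ";                        // assumed output of loadJvmEnv for xms=1, xmx=1
        String customParams = "-p\"\"";                                     // empty: the failing task defines no localParams
        String jobConfigFilePath =
                "/tmp/dolphinscheduler/exec/process/7193666667040/7195111179040_2/15/22/15_22_job.json";

        // Same append order as the StringBuilder in buildShellCommandFile.
        String dataxCommand = pythonCommand + " " + dataxPath + " " + jvmArgs + customParams + " " + jobConfigFilePath;
        System.out.println(dataxCommand);
    }
}
```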
Row 9 — metadata and issue body identical to Row 1; chunk_content (continued):
} else {
Files.createFile(path, attr);
}
Files.write(path, dataxCommand.getBytes(), StandardOpenOption.APPEND);
return fileName;
}
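// Illustrative only (assumption, not taken from the repository's constants): the command string built
// above is "<python> <DATAX_PATH> <jvm args><custom -p params> <job config path>", so with PYTHON_HOME
// unset and xms/xmx of 1 it ends up roughly like
//   python ${DATAX_HOME}/bin/datax.py --jvm="-Xms1G -Xmx1G" -p"..." <execPath>/<taskAppId>_job.json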
private StringBuilder addCustomParameters(Map<String, Property> paramsMap) {
StringBuilder customParameters = new StringBuilder("-p\"");
for (Map.Entry<String, Property> entry : paramsMap.entrySet()) {
customParameters.append(String.format(CUSTOM_PARAM, entry.getKey(), entry.getValue().getValue()));
}
customParameters.append("\"");
return customParameters;
}
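// Note: in the failing run logged in this issue, paramsMap appears to be null here even though
// localParams is empty, and iterating it is what throws the NullPointerException at
// DataxTask.addCustomParameters(DataxTask.java:426). Guarding against a null/empty map before the
// loop is one possible fix (see the hedged sketch after this class); the actual change is in PR #12388.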
public String getPythonCommand() {
String pythonHome = System.getenv("PYTHON_HOME");
return getPythonCommand(pythonHome);
}
public String getPythonCommand(String pythonHome) {
if (StringUtils.isEmpty(pythonHome)) {
return DATAX_PYTHON;
}
String pythonBinPath = "/bin/" + DATAX_PYTHON;
Matcher matcher = PYTHON_PATH_PATTERN.matcher(pythonHome);
if (matcher.find()) {
return matcher.replaceAll(pythonBinPath);
}
return Paths.get(pythonHome, pythonBinPath).toString();
}
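// Illustrative behavior (assumption, derived from the code above rather than project docs):
//   getPythonCommand(null)          -> DATAX_PYTHON (bare interpreter name, resolved via PATH)
//   getPythonCommand("/opt/python") -> "/opt/python/bin/" + DATAX_PYTHON when PYTHON_PATH_PATTERN does not match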
public String loadJvmEnv(DataxParameters dataXParameters) {
int xms = Math.max(dataXParameters.getXms(), 1);
int xmx = Math.max(dataXParameters.getXmx(), 1);
return String.format(JVM_PARAM, xms, xmx);
}
/**
* parsing synchronized column names in SQL statements
*
* @param sourceType the database type of the data source
* @param targetType the database type of the data target
* @param dataSourceCfg the database connection parameters of the data source
* @param sql sql for data synchronization
* @return Keyword converted column names
*/
private String[] parsingSqlColumnNames(DbType sourceType, DbType targetType, BaseConnectionParam dataSourceCfg, String sql) {
String[] columnNames = tryGrammaticalAnalysisSqlColumnNames(sourceType, sql);
if (columnNames == null || columnNames.length == 0) {
logger.info("try to execute sql analysis query column name");
columnNames = tryExecuteSqlResolveColumnNames(sourceType, dataSourceCfg, sql);
}
notNull(columnNames, String.format("parsing sql columns failed : %s", sql));
return DataxUtils.convertKeywordsColumns(targetType, columnNames);
}
/**
* try grammatical parsing column
*
* @param dbType database type
* @param sql sql for data synchronization
* @return column name array
* @throws RuntimeException if error throws RuntimeException
*/
private String[] tryGrammaticalAnalysisSqlColumnNames(DbType dbType, String sql) {
String[] columnNames;
try {
SQLStatementParser parser = DataxUtils.getSqlStatementParser(dbType, sql);
if (parser == null) {
logger.warn("database driver [{}] is not support grammatical analysis sql", dbType);
return new String[0];
}
SQLStatement sqlStatement = parser.parseStatement();
SQLSelectStatement sqlSelectStatement = (SQLSelectStatement) sqlStatement;
SQLSelect sqlSelect = sqlSelectStatement.getSelect();
List<SQLSelectItem> selectItemList = null;
if (sqlSelect.getQuery() instanceof SQLSelectQueryBlock) {
SQLSelectQueryBlock block = (SQLSelectQueryBlock) sqlSelect.getQuery();
selectItemList = block.getSelectList();
} else if (sqlSelect.getQuery() instanceof SQLUnionQuery) {
SQLUnionQuery unionQuery = (SQLUnionQuery) sqlSelect.getQuery();
SQLSelectQueryBlock block = (SQLSelectQueryBlock) unionQuery.getRight();
selectItemList = block.getSelectList();
}
notNull(selectItemList,
String.format("select query type [%s] is not support", sqlSelect.getQuery().toString()));
columnNames = new String[selectItemList.size()];
for (int i = 0; i < selectItemList.size(); i++) {
SQLSelectItem item = selectItemList.get(i);
String columnName = null;
if (item.getAlias() != null) {
columnName = item.getAlias();
} else if (item.getExpr() != null) {
if (item.getExpr() instanceof SQLPropertyExpr) {
SQLPropertyExpr expr = (SQLPropertyExpr) item.getExpr();
columnName = expr.getName();
} else if (item.getExpr() instanceof SQLIdentifierExpr) {
SQLIdentifierExpr expr = (SQLIdentifierExpr) item.getExpr();
columnName = expr.getName();
}
} else {
throw new RuntimeException(
String.format("grammatical analysis sql column [ %s ] failed", item.toString()));
}
if (columnName == null) {
throw new RuntimeException(
String.format("grammatical analysis sql column [ %s ] failed", item.toString()));
}
columnNames[i] = columnName;
}
} catch (Exception e) {
logger.warn(e.getMessage(), e);
return new String[0];
}
return columnNames;
}
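// Example (illustrative): "select id, name as user_name from t" yields ["id", "user_name"];
// statements the parser cannot handle return the empty array, and the caller then falls back to
// tryExecuteSqlResolveColumnNames(...).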
/**
* try to execute sql to resolve column names
*
* @param baseDataSource the database connection parameters
* @param sql sql for data synchronization
* @return column name array
*/
public String[] tryExecuteSqlResolveColumnNames(DbType sourceType, BaseConnectionParam baseDataSource, String sql) {
String[] columnNames;
sql = String.format("SELECT t.* FROM ( %s ) t WHERE 0 = 1", sql);
sql = sql.replace(";", "");
try (
Connection connection = DataSourceClientProvider.getInstance().getConnection(sourceType, baseDataSource);
PreparedStatement stmt = connection.prepareStatement(sql);
ResultSet resultSet = stmt.executeQuery()) {
ResultSetMetaData md = resultSet.getMetaData();
int num = md.getColumnCount();
columnNames = new String[num];
for (int i = 1; i <= num; i++) {
columnNames[i - 1] = md.getColumnName(i).replace("t.", "");
}
} catch (SQLException | ExecutionException e) {
logger.error(e.getMessage(), e);
return null;
}
return columnNames;
}
@Override
public AbstractParameters getParameters() {
return dataXParameters;
}
private void notNull(Object obj, String message) {
if (obj == null) {
throw new RuntimeException(message);
}
}
}
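The NullPointerException in the log above is raised inside addCustomParameters when the prepared paramsMap is null. The snippet below is a minimal, hypothetical sketch of the guarding idea as a standalone helper (the CUSTOM_PARAM format string and the use of plain String values are assumptions, not the project's actual constants); it is not necessarily the change that shipped in PR #12388:

```java
import java.util.Map;

// Hypothetical helper mirroring DataxTask.addCustomParameters(), but tolerating a null map.
final class DataxCustomParamBuilder {

    // Assumed "-D<key>=<value>" style format; the real project keeps this in a CUSTOM_PARAM constant.
    private static final String CUSTOM_PARAM = " -D%s=%s";

    static String build(Map<String, String> paramsMap) {
        StringBuilder customParameters = new StringBuilder("-p\"");
        // Skip the loop entirely when no parameters were prepared for the task,
        // so a DataX task without custom parameters no longer hits a NullPointerException.
        if (paramsMap != null) {
            for (Map.Entry<String, String> entry : paramsMap.entrySet()) {
                customParameters.append(String.format(CUSTOM_PARAM, entry.getKey(), entry.getValue()));
            }
        }
        customParameters.append("\"");
        return customParameters.toString();
    }
}
```

With a null map this simply yields `-p""`, which datax.py should ignore, instead of failing the task before the DataX process is even started.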
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta-2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
the datax.py script cannot be read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Run the task; the error above is reported
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.api;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.EXIT_CODE_FAILURE;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.EXIT_CODE_KILL;
import org.apache.dolphinscheduler.common.utils.PropertyUtils;
import org.apache.dolphinscheduler.plugin.task.api.model.TaskResponse;
import org.apache.dolphinscheduler.plugin.task.api.utils.AbstractCommandExecutorConstants;
import org.apache.dolphinscheduler.plugin.task.api.utils.OSUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import org.apache.commons.lang3.SystemUtils;
import java.io.BufferedReader;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.slf4j.Logger;
import com.google.common.util.concurrent.ThreadFactoryBuilder;
/**
* abstract command executor
*/
public abstract class AbstractCommandExecutor {
/**
* rules for extracting Var Pool
*/
protected static final Pattern SETVALUE_REGEX = Pattern.compile(TaskConstants.SETVALUE_REGEX);
protected StringBuilder varPool = new StringBuilder();
/**
* process
*/
private Process process;
/**
* log handler
*/
protected Consumer<LinkedBlockingQueue<String>> logHandler;
/**
* logger
*/
protected Logger logger;
/**
* log list
*/
protected LinkedBlockingQueue<String> logBuffer;
protected boolean logOutputIsSuccess = false;
/*
* SHELL result string
*/
protected String taskResultString;
/**
* taskRequest
*/
protected TaskExecutionContext taskRequest;
public AbstractCommandExecutor(Consumer<LinkedBlockingQueue<String>> logHandler,
TaskExecutionContext taskRequest,
Logger logger) {
this.logHandler = logHandler;
this.taskRequest = taskRequest;
this.logger = logger;
this.logBuffer = new LinkedBlockingQueue<>();
}
public AbstractCommandExecutor(LinkedBlockingQueue<String> logBuffer) {
this.logBuffer = logBuffer;
}
/**
* build process
*
* @param commandFile command file
* @throws IOException IO Exception
*/
private void buildProcess(String commandFile) throws IOException {
List<String> command = new LinkedList<>();
ProcessBuilder processBuilder = new ProcessBuilder();
processBuilder.directory(new File(taskRequest.getExecutePath()));
processBuilder.redirectErrorStream(true);
if (OSUtils.isSudoEnable()) {
if (SystemUtils.IS_OS_LINUX
&& PropertyUtils.getBoolean(AbstractCommandExecutorConstants.TASK_RESOURCE_LIMIT_STATE)) {
generateCgroupCommand(command);
} else {
command.add("sudo");
command.add("-u");
command.add(taskRequest.getTenantCode());
}
}
command.add(commandInterpreter());
command.addAll(Collections.emptyList());
command.add(commandFile);
processBuilder.command(command);
process = processBuilder.start();
printCommand(command);
}
/**
* generate systemd command.
* eg: sudo systemd-run -q --scope -p CPUQuota=100% -p MemoryMax=200M --uid=root
* @param command command
*/
private void generateCgroupCommand(List<String> command) {
Integer cpuQuota = taskRequest.getCpuQuota();
Integer memoryMax = taskRequest.getMemoryMax();
command.add("sudo");
command.add("systemd-run");
command.add("-q");
command.add("--scope");
if (cpuQuota == -1) {
command.add("-p");
command.add("CPUQuota=");
} else {
command.add("-p");
command.add(String.format("CPUQuota=%s%%", taskRequest.getCpuQuota()));
}
if (memoryMax == -1) {
command.add("-p");
command.add(String.format("MemoryMax=%s", "infinity"));
} else {
command.add("-p");
command.add(String.format("MemoryMax=%sM", taskRequest.getMemoryMax()));
}
command.add(String.format("--uid=%s", taskRequest.getTenantCode()));
}
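// Illustrative result (per the javadoc above): cpuQuota=100, memoryMax=200 and tenant "root" produce
//   sudo systemd-run -q --scope -p CPUQuota=100% -p MemoryMax=200M --uid=root
// while -1 means "unlimited" (an empty CPUQuota, and MemoryMax=infinity).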
public TaskResponse run(String execCommand) throws IOException, InterruptedException {
TaskResponse result = new TaskResponse();
int taskInstanceId = taskRequest.getTaskInstanceId();
if (null == TaskExecutionContextCacheManager.getByTaskInstanceId(taskInstanceId)) {
result.setExitStatusCode(EXIT_CODE_KILL);
return result;
}
if (StringUtils.isEmpty(execCommand)) {
TaskExecutionContextCacheManager.removeByTaskInstanceId(taskInstanceId);
return result;
}
String commandFilePath = buildCommandFilePath();
createCommandFileIfNotExists(execCommand, commandFilePath);
buildProcess(commandFilePath);
parseProcessOutput(process);
int processId = getProcessId(process);
result.setProcessId(processId);
taskRequest.setProcessId(processId);
boolean updateTaskExecutionContextStatus =
TaskExecutionContextCacheManager.updateTaskExecutionContext(taskRequest);
if (Boolean.FALSE.equals(updateTaskExecutionContextStatus)) {
ProcessUtils.kill(taskRequest);
result.setExitStatusCode(EXIT_CODE_KILL);
return result;
}
logger.info("process start, process id is: {}", processId);
long remainTime = getRemainTime();
boolean status = process.waitFor(remainTime, TimeUnit.SECONDS);
if (status) {
result.setExitStatusCode(process.exitValue());
} else {
logger.error("process has failure, the task timeout configuration value is:{}, ready to kill ...",
taskRequest.getTaskTimeout());
ProcessUtils.kill(taskRequest);
result.setExitStatusCode(EXIT_CODE_FAILURE);
}
logger.info(
"process has exited, execute path:{}, processId:{} ,exitStatusCode:{} ,processWaitForStatus:{} ,processExitValue:{}",
taskRequest.getExecutePath(), processId, result.getExitStatusCode(), status, process.exitValue());
return result;
}
public String getVarPool() {
return varPool.toString();
}
/**
* cancel application
*
* @throws Exception exception
*/
public void cancelApplication() throws Exception {
if (process == null) {
return;
}
clear();
int processId = getProcessId(process);
logger.info("cancel process: {}", processId);
boolean killed = softKill(processId);
if (!killed) {
hardKill(processId);
process.destroy();
process = null;
}
}
/**
* soft kill
*
* @param processId process id
* @return process is alive
*/
private boolean softKill(int processId) {
if (processId != 0 && process.isAlive()) {
try {
String cmd = String.format("kill %d", processId);
cmd = OSUtils.getSudoCmd(taskRequest.getTenantCode(), cmd);
logger.info("soft kill task:{}, process id:{}, cmd:{}", taskRequest.getTaskAppId(), processId, cmd);
Runtime.getRuntime().exec(cmd);
} catch (IOException e) {
logger.info("kill attempt failed", e);
}
}
return process.isAlive();
}
/**
* hard kill
*
* @param processId process id
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
*/
private void hardKill(int processId) {
if (processId != 0 && process.isAlive()) {
try {
String cmd = String.format("kill -9 %d", processId);
cmd = OSUtils.getSudoCmd(taskRequest.getTenantCode(), cmd);
logger.info("hard kill task:{}, process id:{}, cmd:{}", taskRequest.getTaskAppId(), processId, cmd);
Runtime.getRuntime().exec(cmd);
} catch (IOException e) {
logger.error("kill attempt failed ", e);
}
}
}
private void printCommand(List<String> commands) {
logger.info("task run command: {}", String.join(" ", commands));
}
/**
* clear
*/
private void clear() {
LinkedBlockingQueue<String> markerLog = new LinkedBlockingQueue<>(1);
markerLog.add(ch.qos.logback.classic.ClassicConstants.FINALIZE_SESSION_MARKER.toString());
if (!logBuffer.isEmpty()) {
logHandler.accept(logBuffer);
logBuffer.clear();
}
logHandler.accept(markerLog);
}
/**
|
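The soft kill / hard kill pair above escalates from `kill` to `kill -9` through `Runtime.exec`, with the command wrapped in sudo for the tenant user. As a point of comparison only, here is a minimal standalone sketch of the same escalation pattern using the Java 9+ `ProcessHandle` API; the grace period and the `sleep` demo command are assumptions for illustration, not project code (the executor itself targets Java 8 and shells out to `kill`).

```java
import java.util.concurrent.TimeUnit;

public class KillEscalationSketch {

    /**
     * Soft kill first (SIGTERM); if the process is still alive after the
     * grace period, fall back to a hard kill (SIGKILL).
     */
    static void killGracefully(long pid, long graceMillis) {
        ProcessHandle.of(pid).ifPresent(handle -> {
            handle.destroy();                    // equivalent of "kill <pid>"
            try {
                handle.onExit().get(graceMillis, TimeUnit.MILLISECONDS);
            } catch (Exception stillRunning) {
                handle.destroyForcibly();        // equivalent of "kill -9 <pid>"
            }
        });
    }

    public static void main(String[] args) throws Exception {
        // Unix-only demo child; "sleep" responds to SIGTERM, so the soft kill normally suffices.
        Process sleeper = new ProcessBuilder("sleep", "60").start();
        killGracefully(sleeper.pid(), 2_000L);
        sleeper.waitFor();
        System.out.println("child exited with code " + sleeper.exitValue());
    }
}
```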
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
* get the standard output of the process
*
* @param process process
*/
private void parseProcessOutput(Process process) {
String threadLoggerInfoName = taskRequest.getTaskLogName();
ExecutorService getOutputLogService = newDaemonSingleThreadExecutor(threadLoggerInfoName);
getOutputLogService.submit(() -> {
try (BufferedReader inReader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
String line;
while ((line = inReader.readLine()) != null) {
if (line.startsWith("${setValue(") || line.startsWith("#{setValue(")) {
varPool.append(findVarPool(line));
varPool.append("$VarPool$");
} else {
logBuffer.add(line);
taskResultString = line;
}
}
logOutputIsSuccess = true;
} catch (Exception e) {
logger.error(e.getMessage(), e);
logOutputIsSuccess = true;
}
});
getOutputLogService.shutdown();
ExecutorService parseProcessOutputExecutorService = newDaemonSingleThreadExecutor(threadLoggerInfoName);
parseProcessOutputExecutorService.submit(() -> {
try {
long lastFlushTime = System.currentTimeMillis();
|
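parseProcessOutput above drains the child's stdout on a daemon thread and splits lines into two streams: `${setValue(...)}` / `#{setValue(...)}` markers feed the var pool, everything else goes to the log buffer. Below is a compact, self-contained sketch of that split; the demo shell command, the collection types, and the `$VarPool$` delimiter handling are simplifications, and the demo assumes a Unix `sh`.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class OutputSplitSketch {

    public static void main(String[] args) throws Exception {
        // Unix-only demo child: one ordinary log line, one setValue marker line.
        Process process = new ProcessBuilder("sh", "-c",
                "echo 'normal log line'; echo '${setValue(foo=bar)}'").start();

        List<String> logLines = new CopyOnWriteArrayList<>();
        StringBuilder varPool = new StringBuilder();

        Thread reader = new Thread(() -> {
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("${setValue(") || line.startsWith("#{setValue(")) {
                        varPool.append(line).append("$VarPool$");   // variable output
                    } else {
                        logLines.add(line);                         // ordinary output
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        reader.setDaemon(true);  // do not keep the JVM alive just to drain logs
        reader.start();

        process.waitFor();
        reader.join();
        System.out.println("logs: " + logLines);
        System.out.println("varPool: " + varPool);
    }
}
```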
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
while (logBuffer.size() > 0 || !logOutputIsSuccess) {
if (logBuffer.size() > 0) {
lastFlushTime = flush(lastFlushTime);
} else {
Thread.sleep(TaskConstants.DEFAULT_LOG_FLUSH_INTERVAL);
}
}
} catch (Exception e) {
Thread.currentThread().interrupt();
logger.error(e.getMessage(), e);
} finally {
clear();
}
});
parseProcessOutputExecutorService.shutdown();
}
/**
* find var pool
*
* @param line
* @return
*/
private String findVarPool(String line) {
Matcher matcher = SETVALUE_REGEX.matcher(line);
if (matcher.find()) {
return matcher.group(1);
}
return null;
}
/**
|
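findVarPool above extracts the payload of a setValue marker with `SETVALUE_REGEX`, a constant defined elsewhere in the task API. The pattern in the sketch below is an assumption that matches the `${setValue(key=value)}` convention seen in parseProcessOutput; the real constant may differ.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindVarPoolSketch {

    // Illustrative pattern: captures the "key=value" payload inside ${setValue(...)}
    // or #{setValue(...)}; the project's actual SETVALUE_REGEX may differ.
    private static final Pattern SETVALUE_REGEX =
            Pattern.compile("[$#]\\{setValue\\((.*?)\\)}");

    static String findVarPool(String line) {
        Matcher matcher = SETVALUE_REGEX.matcher(line);
        return matcher.find() ? matcher.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(findVarPool("${setValue(output=42)}"));   // -> output=42
        System.out.println(findVarPool("plain log line"));           // -> null
    }
}
```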
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
* get remain time(s)
*
* @return remain time
*/
private long getRemainTime() {
long usedTime = (System.currentTimeMillis() - taskRequest.getStartTime()) / 1000;
long remainTime = taskRequest.getTaskTimeout() - usedTime;
if (remainTime < 0) {
throw new RuntimeException("task execution time out");
}
return remainTime;
}
/**
* get process id
*
* @param process process
* @return process id
*/
private int getProcessId(Process process) {
int processId = 0;
try {
Field f = process.getClass().getDeclaredField(TaskConstants.PID);
f.setAccessible(true);
processId = f.getInt(process);
} catch (Throwable e) {
logger.error(e.getMessage(), e);
}
return processId;
}
/**
|
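getProcessId above reads the private `pid` field reflectively because the code also has to run on Java 8, where `Process` does not expose the pid. For orientation only, here is a sketch of both the reflective approach and the Java 9+ `Process.pid()` accessor; the field name and the `0` fallback mirror the chunk above, while the demo command is an assumption.

```java
import java.lang.reflect.Field;

public class ProcessIdSketch {

    /** Java 9+: the pid is part of the public Process API. */
    static long pidModern(Process process) {
        return process.pid();
    }

    /**
     * Java 8 style: read the private "pid" field of the Unix Process
     * implementation reflectively. May require --add-opens on newer JDKs;
     * shown only to mirror the reflective lookup in the chunk above.
     */
    static int pidReflective(Process process) {
        try {
            Field f = process.getClass().getDeclaredField("pid");
            f.setAccessible(true);
            return f.getInt(process);
        } catch (Throwable e) {
            return 0;   // same fallback value the executor uses when the lookup fails
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "1").start();   // Unix-only demo
        System.out.println("pid() = " + pidModern(p) + ", reflective = " + pidReflective(p));
        p.waitFor();
    }
}
```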
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractCommandExecutor.java
|
* when the log buffer size or the flush interval reaches its threshold, then flush
*
* @param lastFlushTime last flush time
* @return last flush time
*/
private long flush(long lastFlushTime) {
long now = System.currentTimeMillis();
/*
* when the log buffer size or the flush interval reaches its threshold, then flush
*/
if (logBuffer.size() >= TaskConstants.DEFAULT_LOG_ROWS_NUM
|| now - lastFlushTime > TaskConstants.DEFAULT_LOG_FLUSH_INTERVAL) {
lastFlushTime = now;
logHandler.accept(logBuffer);
logBuffer.clear();
}
return lastFlushTime;
}
protected abstract String buildCommandFilePath();
protected abstract void createCommandFileIfNotExists(String execCommand, String commandFile) throws IOException;
ExecutorService newDaemonSingleThreadExecutor(String threadName) {
ThreadFactory threadFactory = new ThreadFactoryBuilder()
.setDaemon(true)
.setNameFormat(threadName)
.build();
return Executors.newSingleThreadExecutor(threadFactory);
}
protected abstract String commandInterpreter();
}
|
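flush above pushes the buffered log lines to the handler when either the row count or the elapsed time crosses a threshold from `TaskConstants`. Below is a minimal standalone sketch of that size-or-time flush policy; the thresholds (64 rows, 1000 ms) are illustrative assumptions, not the project's constants.

```java
import java.util.ArrayList;
import java.util.List;

public class LogFlushSketch {

    // Illustrative thresholds; the real values come from TaskConstants.
    private static final int MAX_ROWS = 64;
    private static final long FLUSH_INTERVAL_MS = 1_000L;

    private final List<String> buffer = new ArrayList<>();
    private long lastFlushTime = System.currentTimeMillis();

    /** Flush when either the row count or the elapsed time crosses its threshold. */
    void append(String line) {
        buffer.add(line);
        long now = System.currentTimeMillis();
        if (buffer.size() >= MAX_ROWS || now - lastFlushTime > FLUSH_INTERVAL_MS) {
            System.out.println("flushing " + buffer.size() + " lines");
            buffer.clear();
            lastFlushTime = now;
        }
    }

    public static void main(String[] args) {
        LogFlushSketch sketch = new LogFlushSketch();
        for (int i = 0; i < 200; i++) {
            sketch.append("line " + i);
        }
    }
}
```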
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/ShellCommandExecutor.java
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.api;
import org.apache.commons.io.FileUtils;
import org.apache.commons.lang3.SystemUtils;
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/ShellCommandExecutor.java
|
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;
import org.slf4j.Logger;
import com.google.common.base.Strings;
/**
* shell command executor
*/
public class ShellCommandExecutor extends AbstractCommandExecutor {
/**
* For Unix-like, using sh
*/
private static final String SH = "sh";
/**
* For Windows, using cmd.exe
*/
private static final String CMD = "cmd.exe";
/**
* constructor
*
* @param logHandler logHandler
* @param taskRequest taskRequest
* @param logger logger
*/
public ShellCommandExecutor(Consumer<LinkedBlockingQueue<String>> logHandler,
TaskExecutionContext taskRequest,
Logger logger) {
super(logHandler, taskRequest, logger);
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/ShellCommandExecutor.java
|
public ShellCommandExecutor(LinkedBlockingQueue<String> logBuffer) {
super(logBuffer);
}
@Override
protected String buildCommandFilePath() {
return String.format("%s/%s.%s"
, taskRequest.getExecutePath()
, taskRequest.getTaskAppId()
, SystemUtils.IS_OS_WINDOWS ? "bat" : "command");
}
/**
* create command file if not exists
*
* @param execCommand exec command
* @param commandFile command file
* @throws IOException io exception
*/
@Override
protected void createCommandFileIfNotExists(String execCommand, String commandFile) throws IOException {
logger.info("tenantCode user:{}, task dir:{}", taskRequest.getTenantCode(),
taskRequest.getTaskAppId());
if (!Files.exists(Paths.get(commandFile))) {
logger.info("create command file:{}", commandFile);
StringBuilder sb = new StringBuilder();
if (SystemUtils.IS_OS_WINDOWS) {
sb.append("@echo off\n");
sb.append("cd /d %~dp0\n");
if (!Strings.isNullOrEmpty(taskRequest.getEnvironmentConfig())) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 11,236 |
[Bug] [DataX] DataX Variable configuration problem
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
k8s deployment:
ds-version: 3.0.0-beta2
<img width="1828" alt="image" src="https://user-images.githubusercontent.com/46189785/182122211-6ddc4250-cd41-4196-9a1c-abe8844e8bba.png">
<img width="2579" alt="image" src="https://user-images.githubusercontent.com/46189785/182122241-75fbb38a-4bc1-46a7-b656-557ceb77e439.png">
datax.py script cannot read!
### What you expected to happen
datax.py script available
### How to reproduce
1、https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html
2、
```
kubectl cp -n test datax.tar.gz dolphinscheduler-worker-0:/opt/soft
```
3、
```
kubectl exec -n test -it dolphinscheduler-worker-0 bash
cd /opt/soft
tar -zxvf datax.tar.gz
rm -rf datax.tar.gz
```
4、
```
export DATAX_HOME=/opt/soft/datax
export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$PATH
```
<img width="1711" alt="image" src="https://user-images.githubusercontent.com/46189785/182123163-fa9635c2-5231-428d-a747-559a2a4a7584.png">
5、Execute task execution, prompt the above error
### Anything else
_No response_
### Version
3.0.0-beta-2
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/11236
|
https://github.com/apache/dolphinscheduler/pull/12180
|
3bef85f546e5ebb9d4c91e48c515756286631069
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
| 2022-08-01T09:55:02Z |
java
| 2022-10-18T04:57:37Z |
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/ShellCommandExecutor.java
|
sb.append(taskRequest.getEnvironmentConfig()).append("\n");
} else {
if (taskRequest.getEnvFile() != null) {
sb.append("call ").append(taskRequest.getEnvFile()).append("\n");
}
}
} else {
sb.append("#!/bin/bash\n");
sb.append("BASEDIR=$(cd `dirname $0`; pwd)\n");
sb.append("cd $BASEDIR\n");
if (!Strings.isNullOrEmpty(taskRequest.getEnvironmentConfig())) {
sb.append(taskRequest.getEnvironmentConfig()).append("\n");
} else {
if (taskRequest.getEnvFile() != null) {
sb.append("source ").append(taskRequest.getEnvFile()).append("\n");
}
}
}
sb.append(execCommand);
logger.info("command : {}", sb);
FileUtils.writeStringToFile(new File(commandFile), sb.toString(), StandardCharsets.UTF_8);
}
}
@Override
protected String commandInterpreter() {
return SystemUtils.IS_OS_WINDOWS ? CMD : SH;
}
}
|
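createCommandFileIfNotExists above writes a small wrapper script: on Unix it sets up `BASEDIR`, then appends the worker-group environment config if one is present, otherwise sources the env file, and finally appends the task command. An attached environment config therefore replaces the env file rather than extending it, which may matter for setups like the DataX issue above, where an export added in one place is not visible at run time. Below is a hedged sketch of the same assembly logic; the environment line, env file path, and DataX command are hypothetical values for illustration only.

```java
public class CommandFileSketch {

    /** Assemble the Unix wrapper content the way the executor above does. */
    static String buildUnixWrapper(String environmentConfig, String envFile, String execCommand) {
        StringBuilder sb = new StringBuilder();
        sb.append("#!/bin/bash\n");
        sb.append("BASEDIR=$(cd `dirname $0`; pwd)\n");
        sb.append("cd $BASEDIR\n");
        if (environmentConfig != null && !environmentConfig.isEmpty()) {
            sb.append(environmentConfig).append("\n");      // inline env config wins ...
        } else if (envFile != null) {
            sb.append("source ").append(envFile).append("\n");   // ... otherwise source the env file
        }
        sb.append(execCommand);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical values, only for illustration.
        System.out.println(buildUnixWrapper(
                "export DATAX_HOME=/opt/soft/datax",
                "/opt/dolphinscheduler/worker-server/conf/dolphinscheduler_env.sh",
                "python ${DATAX_HOME}/bin/datax.py job.json"));
    }
}
```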
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.apache.dolphinscheduler.common.Constants.COMMA;
import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP;
import static org.apache.dolphinscheduler.common.Constants.YYYY_MM_DD_HH_MM_SS;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.TASK_TYPE_BLOCKING;
import static org.apache.dolphinscheduler.plugin.task.api.enums.DataType.VARCHAR;
import static org.apache.dolphinscheduler.plugin.task.api.enums.Direct.IN;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus;
import org.apache.dolphinscheduler.common.enums.WorkflowExecutionStatus;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.repository.ProcessInstanceDao;
import org.apache.dolphinscheduler.plugin.task.api.enums.DependResult;
import org.apache.dolphinscheduler.plugin.task.api.enums.Direct;
import org.apache.dolphinscheduler.plugin.task.api.enums.TaskExecutionStatus;
import org.apache.dolphinscheduler.plugin.task.api.model.Property;
import org.apache.dolphinscheduler.remote.command.HostUpdateCommand;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.event.StateEvent;
import org.apache.dolphinscheduler.server.master.event.StateEventHandleError;
import org.apache.dolphinscheduler.server.master.event.StateEventHandleException;
import org.apache.dolphinscheduler.server.master.event.StateEventHandler;
import org.apache.dolphinscheduler.server.master.event.StateEventHandlerManager;
import org.apache.dolphinscheduler.server.master.event.TaskStateEvent;
import org.apache.dolphinscheduler.server.master.event.WorkflowStateEvent;
import org.apache.dolphinscheduler.server.master.metrics.TaskMetrics;
import org.apache.dolphinscheduler.server.master.runner.task.ITaskProcessor;
import org.apache.dolphinscheduler.server.master.runner.task.TaskAction;
import org.apache.dolphinscheduler.server.master.runner.task.TaskProcessorFactory;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.cron.CronUtils;
import org.apache.dolphinscheduler.service.exceptions.CronParseException;
import org.apache.dolphinscheduler.service.expand.CuringParamsService;
import org.apache.dolphinscheduler.service.model.TaskNode;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
import org.apache.dolphinscheduler.service.process.ProcessDag;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.PeerTaskInstancePriorityQueue;
import org.apache.dolphinscheduler.service.utils.DagHelper;
import org.apache.dolphinscheduler.service.utils.LoggerUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.math.NumberUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.stream.Collectors;
import lombok.NonNull;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.BeanUtils;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
import com.google.common.collect.Lists;
import com.google.common.collect.Sets;
/**
* Workflow execute task, used to execute a workflow instance.
*/
public class WorkflowExecuteRunnable implements Callable<WorkflowSubmitStatue> {
private static final Logger logger = LoggerFactory.getLogger(WorkflowExecuteRunnable.class);
private final ProcessService processService;
private ProcessInstanceDao processInstanceDao;
private final ProcessAlertManager processAlertManager;
private final NettyExecutorManager nettyExecutorManager;
private final ProcessInstance processInstance;
private ProcessDefinition processDefinition;
private DAG<String, TaskNode, TaskNodeRelation> dag;
/**
* unique key of workflow
*/
private String key;
private WorkflowRunnableStatus workflowRunnableStatus = WorkflowRunnableStatus.CREATED;
/**
* submit failure nodes
*/
private boolean taskFailedSubmit = false;
/**
* task instance hash map, taskId as key
*/
private final Map<Integer, TaskInstance> taskInstanceMap = new ConcurrentHashMap<>();
/**
* running taskProcessor, taskCode as key, taskProcessor as value
* only one taskProcessor per taskCode
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
*/
private final Map<Long, ITaskProcessor> activeTaskProcessorMaps = new ConcurrentHashMap<>();
/**
* valid task map, taskCode as key, taskId as value
* in a DAG, only one taskInstance per taskCode is valid
*/
private final Map<Long, Integer> validTaskMap = new ConcurrentHashMap<>();
/**
* error task map, taskCode as key, taskInstanceId as value
* in a DAG, only one taskInstance per taskCode is valid
*/
private final Map<Long, Integer> errorTaskMap = new ConcurrentHashMap<>();
/**
* complete task map, taskCode as key, taskInstanceId as value
* in a DAG, only one taskInstance per taskCode is valid
*/
private final Map<Long, Integer> completeTaskMap = new ConcurrentHashMap<>();
/**
* depend failed task set
*/
private final Set<Long> dependFailedTaskSet = Sets.newConcurrentHashSet();
/**
* forbidden task map, code as key
*/
private final Map<Long, TaskNode> forbiddenTaskMap = new ConcurrentHashMap<>();
/**
* skip task map, code as key
*/
private final Map<String, TaskNode> skipTaskNodeMap = new ConcurrentHashMap<>();
/**
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
* complement date list
*/
private List<Date> complementListDate = Lists.newLinkedList();
/**
* state event queue
*/
private final ConcurrentLinkedQueue<StateEvent> stateEvents = new ConcurrentLinkedQueue<>();
/**
* The StandBy task list, will be executed, need to know, the taskInstance in this queue may doesn't have id.
*/
private final PeerTaskInstancePriorityQueue readyToSubmitTaskQueue = new PeerTaskInstancePriorityQueue();
/**
* wait to retry taskInstance map, taskCode as key, taskInstance as value
* before retry, the taskInstance id is 0
*/
private final Map<Long, TaskInstance> waitToRetryTaskInstanceMap = new ConcurrentHashMap<>();
private final StateWheelExecuteThread stateWheelExecuteThread;
private final CuringParamsService curingParamsService;
private final String masterAddress;
/**
* @param processInstance processInstance
* @param processService processService
* @param processInstanceDao processInstanceDao
* @param nettyExecutorManager nettyExecutorManager
* @param processAlertManager processAlertManager
* @param masterConfig masterConfig
* @param stateWheelExecuteThread stateWheelExecuteThread
*/
public WorkflowExecuteRunnable(
@NonNull ProcessInstance processInstance,
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
@NonNull ProcessService processService,
@NonNull ProcessInstanceDao processInstanceDao,
@NonNull NettyExecutorManager nettyExecutorManager,
@NonNull ProcessAlertManager processAlertManager,
@NonNull MasterConfig masterConfig,
@NonNull StateWheelExecuteThread stateWheelExecuteThread,
@NonNull CuringParamsService curingParamsService) {
this.processService = processService;
this.processInstanceDao = processInstanceDao;
this.processInstance = processInstance;
this.nettyExecutorManager = nettyExecutorManager;
this.processAlertManager = processAlertManager;
this.stateWheelExecuteThread = stateWheelExecuteThread;
this.curingParamsService = curingParamsService;
this.masterAddress = NetUtils.getAddr(masterConfig.getListenPort());
TaskMetrics.registerTaskPrepared(readyToSubmitTaskQueue::size);
}
/**
* the process start nodes are submitted completely.
*/
public boolean isStart() {
return WorkflowRunnableStatus.STARTED == workflowRunnableStatus;
}
/**
* handle event
*/
public void handleEvents() {
if (!isStart()) {
logger.info(
"The workflow instance is not started, will not handle its state event, current state event size: {}",
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
stateEvents);
return;
}
StateEvent stateEvent = null;
while (!this.stateEvents.isEmpty()) {
try {
stateEvent = this.stateEvents.peek();
LoggerUtils.setWorkflowAndTaskInstanceIDMDC(stateEvent.getProcessInstanceId(),
stateEvent.getTaskInstanceId());
checkProcessInstance(stateEvent);
StateEventHandler stateEventHandler =
StateEventHandlerManager.getStateEventHandler(stateEvent.getType())
.orElseThrow(() -> new StateEventHandleError(
"Cannot find handler for the given state event"));
logger.info("Begin to handle state event, {}", stateEvent);
if (stateEventHandler.handleStateEvent(this, stateEvent)) {
this.stateEvents.remove(stateEvent);
}
} catch (StateEventHandleError stateEventHandleError) {
logger.error("State event handle error, will remove this event: {}", stateEvent, stateEventHandleError);
this.stateEvents.remove(stateEvent);
ThreadUtils.sleep(Constants.SLEEP_TIME_MILLIS);
} catch (StateEventHandleException stateEventHandleException) {
logger.error("State event handle error, will retry this event: {}",
stateEvent,
stateEventHandleException);
ThreadUtils.sleep(Constants.SLEEP_TIME_MILLIS);
} catch (Exception e) {
|
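handleEvents above runs a peek-handle-remove loop over the state event queue: an event is only removed after its handler reports success, handle errors drop the event, and handle exceptions leave it in place to be retried. The sketch below shows just that queue discipline with a toy handler type; it is not the master's real `StateEventHandler` interface.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class EventLoopSketch {

    /** Toy handler type; returns true when the event is fully handled. */
    interface Handler {
        boolean handle(String event) throws Exception;
    }

    /**
     * Peek-handle-remove: the event is removed only after a successful handle,
     * so a transient failure leaves it at the head of the queue for a later pass.
     */
    static void drain(ConcurrentLinkedQueue<String> events, Handler handler) {
        while (!events.isEmpty()) {
            String event = events.peek();
            try {
                if (!handler.handle(event)) {
                    return;   // not handled yet, keep it for the next pass
                }
                events.remove(event);
            } catch (Exception retryable) {
                System.err.println("handle failed, will retry later: " + event);
                return;
            }
        }
    }

    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
        queue.add("TASK_STATE_CHANGE");
        queue.add("PROCESS_STATE_CHANGE");
        drain(queue, event -> { System.out.println("handled " + event); return true; });
    }
}
```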
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
logger.error("State event handle error, get a unknown excepton, wll retry ths event: {}",
stateEvent,
e);
ThreadUtls.sleep(Constants.SLEEP_TIME_MILLIS);
} fnally {
LoggerUtls.removeWorkflowAndTaskInstanceIdMDC();
}
}
}
publc Strng getKey() {
f (StrngUtls.sNotEmpty(key) || ths.processDefnton == null) {
return key;
}
key = Strng.format("%d_%d_%d",
ths.processDefnton.getCode(),
ths.processDefnton.getVerson(),
ths.processInstance.getId());
return key;
}
publc boolean addStateEvent(StateEvent stateEvent) {
f (processInstance.getId() != stateEvent.getProcessInstanceId()) {
logger.nfo("state event would be abounded :{}", stateEvent);
return false;
}
ths.stateEvents.add(stateEvent);
return true;
}
publc nt eventSze() {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return this.stateEvents.size();
}
public ProcessInstance getProcessInstance() {
return this.processInstance;
}
public boolean checkForceStartAndWakeUp(StateEvent stateEvent) {
TaskGroupQueue taskGroupQueue = this.processService.loadTaskGroupQueue(stateEvent.getTaskInstanceId());
if (taskGroupQueue.getForceStart() == Flag.YES.getCode()) {
TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskInstance.getTaskCode());
taskProcessor.action(TaskAction.DISPATCH);
this.processService.updateTaskGroupQueueStatus(taskGroupQueue.getTaskId(),
TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode());
return true;
}
if (taskGroupQueue.getInQueue() == Flag.YES.getCode()) {
boolean acquireTaskGroup = processService.robTaskGroupResource(taskGroupQueue);
if (acquireTaskGroup) {
TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskInstance.getTaskCode());
taskProcessor.action(TaskAction.DISPATCH);
return true;
}
}
return false;
}
public void processTimeout() {
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
this.processAlertManager.sendProcessTimeoutAlert(this.processInstance, projectUser);
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
public void taskTimeout(TaskInstance taskInstance) {
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
processAlertManager.sendTaskTimeoutAlert(processInstance, taskInstance, projectUser);
}
public void taskFinished(TaskInstance taskInstance) throws StateEventHandleException {
logger.info("TaskInstance finished task code:{} state:{}", taskInstance.getTaskCode(), taskInstance.getState());
try {
activeTaskProcessorMaps.remove(taskInstance.getTaskCode());
stateWheelExecuteThread.removeTask4TimeoutCheck(processInstance, taskInstance);
stateWheelExecuteThread.removeTask4RetryCheck(processInstance, taskInstance);
stateWheelExecuteThread.removeTask4StateCheck(processInstance, taskInstance);
if (taskInstance.getState().isSuccess()) {
completeTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
processInstance.setVarPool(taskInstance.getVarPool());
processInstanceDao.upsertProcessInstance(processInstance);
if (!processInstance.isBlocked()) {
submitPostNode(Long.toString(taskInstance.getTaskCode()));
}
} else if (taskInstance.taskCanRetry() && !processInstance.getState().isReadyStop()) {
logger.info("Retry taskInstance taskInstance state: {}", taskInstance.getState());
retryTaskInstance(taskInstance);
} else if (taskInstance.getState().isFailure()) {
completeTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE && DagHelper.haveAllNodeAfterNode(
Long.toString(taskInstance.getTaskCode()),
dag)) {
submitPostNode(Long.toString(taskInstance.getTaskCode()));
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
} else {
errorTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
killAllTasks();
}
}
} else if (taskInstance.getState().isFinished()) {
completeTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
}
logger.info("TaskInstance finished will try to update the workflow instance state, task code:{} state:{}",
taskInstance.getTaskCode(),
taskInstance.getState());
this.updateProcessInstanceState();
} catch (Exception ex) {
logger.error("Task finish failed, get a exception, will remove this taskInstance from completeTaskMap", ex);
completeTaskMap.remove(taskInstance.getTaskCode());
throw ex;
}
}
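When a task fails under FailureStrategy.CONTINUE, the branch above only keeps submitting downstream nodes if DagHelper reports that every node after the failed one can be skipped; a forbidden last node, as in this report, is exactly such a case, and the failure must still be recorded so the workflow does not end as SUCCESS. A toy standalone sketch of that kind of check, assuming a plain Map-based DAG and made-up node names (not DolphinScheduler's actual DagHelper semantics):

import java.util.*;

// Toy illustration of the "continue on failure" decision; node names and the
// Map-based DAG are hypothetical, not DolphinScheduler's own types.
public class ContinueOnFailureSketch {

    // Adjacency list: node -> direct downstream nodes.
    static final Map<String, List<String>> DAG = Map.of(
            "A", List.of("B"),
            "B", List.of());

    // Nodes flagged as forbidden (they will never run).
    static final Set<String> FORBIDDEN = Set.of("B");

    // True if every node reachable after 'node' is forbidden, i.e. nothing
    // runnable is left downstream of the failure.
    static boolean allDownstreamForbidden(String node) {
        Deque<String> stack = new ArrayDeque<>(DAG.getOrDefault(node, List.of()));
        while (!stack.isEmpty()) {
            String next = stack.pop();
            if (!FORBIDDEN.contains(next)) {
                return false;
            }
            stack.addAll(DAG.getOrDefault(next, List.of()));
        }
        return true;
    }

    public static void main(String[] args) {
        // Task A failed; under a CONTINUE strategy downstream nodes may still be
        // submitted, but if everything downstream is forbidden the failure has to
        // be kept so the workflow finishes as FAILURE rather than SUCCESS.
        String failed = "A";
        if (allDownstreamForbidden(failed)) {
            System.out.println("nothing runnable after " + failed + " -> record failure");
        } else {
            System.out.println("submit downstream nodes of " + failed);
        }
    }
}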
/**
* release task group
*
* @param taskInstance
*/
public void releaseTaskGroup(TaskInstance taskInstance) {
logger.info("Release task group");
if (taskInstance.getTaskGroupId() > 0) {
TaskInstance nextTaskInstance = this.processService.releaseTaskGroup(taskInstance);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
if (nextTaskInstance != null) {
if (nextTaskInstance.getProcessInstanceId() == taskInstance.getProcessInstanceId()) {
TaskStateEvent nextEvent = TaskStateEvent.builder()
.processInstanceId(processInstance.getId())
.taskInstanceId(nextTaskInstance.getId())
.type(StateEventType.WAIT_TASK_GROUP)
.build();
this.stateEvents.add(nextEvent);
} else {
ProcessInstance processInstance =
this.processService.findProcessInstanceById(nextTaskInstance.getProcessInstanceId());
this.processService.sendStartTask2Master(processInstance, nextTaskInstance.getId(),
org.apache.dolphinscheduler.remote.command.CommandType.TASK_WAKEUP_EVENT_REQUEST);
}
}
}
}
/**
* create new task instance to retry, different objects from the original
*
* @param taskInstance
*/
private void retryTaskInstance(TaskInstance taskInstance) throws StateEventHandleException {
if (!taskInstance.taskCanRetry()) {
return;
}
TaskInstance newTaskInstance = cloneRetryTaskInstance(taskInstance);
if (newTaskInstance == null) {
logger.error("Retry task fail because new taskInstance is null, task code:{}, task id:{}",
taskInstance.getTaskCode(),
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
taskInstance.getId());
return;
}
waitToRetryTaskInstanceMap.put(newTaskInstance.getTaskCode(), newTaskInstance);
if (!taskInstance.retryTaskIntervalOverTime()) {
logger.info(
"Failure task will be submitted, process id: {}, task instance code: {}, state: {}, retry times: {} / {}, interval: {}",
processInstance.getId(), newTaskInstance.getTaskCode(),
newTaskInstance.getState(), newTaskInstance.getRetryTimes(), newTaskInstance.getMaxRetryTimes(),
newTaskInstance.getRetryInterval());
stateWheelExecuteThread.addTask4TimeoutCheck(processInstance, newTaskInstance);
stateWheelExecuteThread.addTask4RetryCheck(processInstance, newTaskInstance);
} else {
addTaskToStandByList(newTaskInstance);
submitStandByTask();
waitToRetryTaskInstanceMap.remove(newTaskInstance.getTaskCode());
}
}
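The retry branch above either parks the new attempt for the periodic retry check or submits it straight to the standby queue, depending on whether the retry interval has already elapsed. A small standalone sketch of that decision, with a made-up RETRY_INTERVAL and timestamps:

import java.time.Duration;
import java.time.Instant;

// Hypothetical retry bookkeeping used only for this illustration.
public class RetryDecisionSketch {

    static final Duration RETRY_INTERVAL = Duration.ofSeconds(30);

    // True once enough time has passed since the failed attempt ended.
    static boolean retryIntervalOverTime(Instant endTime, Instant now) {
        return Duration.between(endTime, now).compareTo(RETRY_INTERVAL) >= 0;
    }

    public static void main(String[] args) {
        Instant failedAt = Instant.now().minusSeconds(10);
        Instant now = Instant.now();
        if (!retryIntervalOverTime(failedAt, now)) {
            // Too early: leave the instance in a "waiting to retry" map and let a
            // periodic checker (the state wheel in the real code) pick it up later.
            System.out.println("park for retry check, remaining "
                    + RETRY_INTERVAL.minus(Duration.between(failedAt, now)).getSeconds() + "s");
        } else {
            // Interval elapsed: put the instance on the standby list and submit it now.
            System.out.println("submit retry immediately");
        }
    }
}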
/**
* update process instance
*/
public void refreshProcessInstance(int processInstanceId) {
logger.info("process instance update: {}", processInstanceId);
ProcessInstance newProcessInstance = processService.findProcessInstanceById(processInstanceId);
BeanUtils.copyProperties(newProcessInstance, processInstance);
processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
processInstance.setProcessDefinition(processDefinition);
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
/**
* update task instance
*/
public void refreshTaskInstance(int taskInstanceId) {
logger.info("task instance update: {} ", taskInstanceId);
TaskInstance taskInstance = processService.findTaskInstanceById(taskInstanceId);
if (taskInstance == null) {
logger.error("can not find task instance, id:{}", taskInstanceId);
return;
}
processService.packageTaskInstance(taskInstance, processInstance);
taskInstanceMap.put(taskInstance.getId(), taskInstance);
validTaskMap.remove(taskInstance.getTaskCode());
if (Flag.YES == taskInstance.getFlag()) {
validTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
}
}
/**
* check process instance by state event
*/
public void checkProcessInstance(StateEvent stateEvent) throws StateEventHandleError {
if (this.processInstance.getId() != stateEvent.getProcessInstanceId()) {
throw new StateEventHandleError("The event doesn't contains process instance id");
}
}
/**
* check if task instance exist by state event
*/
public void checkTaskInstanceByStateEvent(TaskStateEvent stateEvent) throws StateEventHandleError {
if (stateEvent.getTaskInstanceId() == 0) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
throw new StateEventHandleError("The taskInstanceId is 0");
}
if (!taskInstanceMap.containsKey(stateEvent.getTaskInstanceId())) {
throw new StateEventHandleError("Cannot find the taskInstance from taskInstanceMap");
}
}
/**
* check if task instance exist by id
*/
public boolean checkTaskInstanceById(int taskInstanceId) {
if (taskInstanceMap.isEmpty()) {
return false;
}
return taskInstanceMap.containsKey(taskInstanceId);
}
/**
* get task instance from memory
*/
public Optional<TaskInstance> getTaskInstance(int taskInstanceId) {
if (taskInstanceMap.containsKey(taskInstanceId)) {
return Optional.ofNullable(taskInstanceMap.get(taskInstanceId));
}
return Optional.empty();
}
public Optional<TaskInstance> getTaskInstance(long taskCode) {
if (taskInstanceMap.isEmpty()) {
return Optional.empty();
}
for (TaskInstance taskInstance : taskInstanceMap.values()) {
if (taskInstance.getTaskCode() == taskCode) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return Optional.of(taskInstance);
}
}
return Optional.empty();
}
public Optional<TaskInstance> getActiveTaskInstanceByTaskCode(long taskCode) {
Integer taskInstanceId = validTaskMap.get(taskCode);
if (taskInstanceId != null) {
return Optional.ofNullable(taskInstanceMap.get(taskInstanceId));
}
return Optional.empty();
}
public Optional<TaskInstance> getRetryTaskInstanceByTaskCode(long taskCode) {
if (waitToRetryTaskInstanceMap.containsKey(taskCode)) {
return Optional.ofNullable(waitToRetryTaskInstanceMap.get(taskCode));
}
return Optional.empty();
}
public void processBlock() {
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
processAlertManager.sendProcessBlockingAlert(processInstance, projectUser);
logger.info("processInstance {} block alert send successful!", processInstance.getId());
}
public boolean processComplementData() {
if (!needComplementProcess()) {
return false;
}
if (processInstance.getState().isReadyStop() || !processInstance.getState().isFinished()) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return false;
}
Date scheduleDate = processInstance.getScheduleTime();
if (scheduleDate == null) {
scheduleDate = complementListDate.get(0);
} else if (processInstance.getState().isFinished()) {
endProcess();
if (complementListDate.isEmpty()) {
logger.info("process complement end. process id:{}", processInstance.getId());
return true;
}
int index = complementListDate.indexOf(scheduleDate);
if (index >= complementListDate.size() - 1 || !processInstance.getState().isSuccess()) {
logger.info("process complement end. process id:{}", processInstance.getId());
return true;
}
logger.info("process complement continue. process id:{}, schedule time:{} complementListDate:{}",
processInstance.getId(), processInstance.getScheduleTime(), complementListDate);
scheduleDate = complementListDate.get(index + 1);
}
int create = this.createComplementDataCommand(scheduleDate);
if (create > 0) {
logger.info("create complement data command successfully.");
}
return true;
}
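processComplementData above walks a list of backfill (complement) dates, moving to the next date after each finished run and stopping when the list is exhausted or the last run was not successful. A standalone sketch of that stepping logic with hypothetical dates and none of the scheduler plumbing:

import java.time.LocalDate;
import java.util.List;

// Toy walk through a backfill date list: after each run finishes, move to the
// next date until the list is exhausted or a run was not successful.
public class ComplementStepSketch {

    static final List<LocalDate> DATES = List.of(
            LocalDate.of(2022, 10, 1),
            LocalDate.of(2022, 10, 2),
            LocalDate.of(2022, 10, 3));

    // Returns the next date to schedule, or null when the backfill is done.
    static LocalDate nextDate(LocalDate current, boolean lastRunSucceeded) {
        int index = DATES.indexOf(current);
        if (index < 0 || index >= DATES.size() - 1 || !lastRunSucceeded) {
            return null; // complement ends
        }
        return DATES.get(index + 1);
    }

    public static void main(String[] args) {
        LocalDate current = DATES.get(0);
        while (current != null) {
            System.out.println("run complement for " + current);
            current = nextDate(current, true); // pretend every run succeeds
        }
        System.out.println("complement finished");
    }
}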
private int createComplementDataCommand(Date scheduleDate) {
Command command = new Command();
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
command.setScheduleTime(scheduleDate);
command.setCommandType(CommandType.COMPLEMENT_DATA);
command.setProcessDefinitionCode(processInstance.getProcessDefinitionCode());
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) {
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
}
if (cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST)) {
cmdParam.replace(CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST,
cmdParam.get(CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST)
.substring(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST).indexOf(COMMA) + 1));
}
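The substring/indexOf call above simply drops the first entry from the comma-separated schedule-date list before the parameter is handed to the next complement command. A tiny standalone equivalent with made-up dates (the single-date case is handled differently here than in the real code, which relies on its surrounding checks):

// Minimal stand-alone equivalent of dropping the first entry from a
// comma-separated schedule-date list (values here are made up).
public class DropFirstDateSketch {
    static String dropFirst(String dateList) {
        int comma = dateList.indexOf(',');
        // No comma means only one date is left; return an empty list in this sketch.
        return comma < 0 ? "" : dateList.substring(comma + 1);
    }

    public static void main(String[] args) {
        String dates = "2022-10-01 00:00:00,2022-10-02 00:00:00,2022-10-03 00:00:00";
        System.out.println(dropFirst(dates)); // prints the remaining two dates
    }
}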
if (cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
cmdParam.replace(CMDPARAM_COMPLEMENT_DATA_START_DATE,
DateUtils.format(scheduleDate, YYYY_MM_DD_HH_MM_SS, null));
}
command.setCommandParam(JSONUtils.toJsonString(cmdParam));
command.setTaskDependType(processInstance.getTaskDependType());
command.setFailureStrategy(processInstance.getFailureStrategy());
command.setWarningType(processInstance.getWarningType());
command.setWarningGroupId(processInstance.getWarningGroupId());
command.setStartTime(new Date());
command.setExecutorId(processInstance.getExecutorId());
command.setUpdateTime(new Date());
command.setProcessInstancePriority(processInstance.getProcessInstancePriority());
command.setWorkerGroup(processInstance.getWorkerGroup());
command.setEnvironmentCode(processInstance.getEnvironmentCode());
command.setDryRun(processInstance.getDryRun());
command.setProcessInstanceId(0);
command.setProcessDefinitionVersion(processInstance.getProcessDefinitionVersion());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
command.setTestFlag(processInstance.getTestFlag());
return processService.createCommand(command);
}
private boolean needComplementProcess() {
if (processInstance.isComplementData() && Flag.NO == processInstance.getIsSubProcess()) {
return true;
}
return false;
}
/**
* ProcessInstance start entrypoint.
*/
@Override
public WorkflowSubmitStatue call() {
if (isStart()) {
logger.warn("[WorkflowInstance-{}] The workflow has already been started", processInstance.getId());
return WorkflowSubmitStatue.DUPLICATED_SUBMITTED;
}
try {
LoggerUtils.setWorkflowInstanceIdMDC(processInstance.getId());
if (workflowRunnableStatus == WorkflowRunnableStatus.CREATED) {
buildFlowDag();
workflowRunnableStatus = WorkflowRunnableStatus.INITIALIZE_DAG;
logger.info("workflowStatue changed to :{}", workflowRunnableStatus);
}
if (workflowRunnableStatus == WorkflowRunnableStatus.INITIALIZE_DAG) {
initTaskQueue();
workflowRunnableStatus = WorkflowRunnableStatus.INITIALIZE_QUEUE;
logger.info("workflowStatue changed to :{}", workflowRunnableStatus);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
}
if (workflowRunnableStatus == WorkflowRunnableStatus.INITIALIZE_QUEUE) {
submitPostNode(null);
workflowRunnableStatus = WorkflowRunnableStatus.STARTED;
logger.info("workflowStatue changed to :{}", workflowRunnableStatus);
}
return WorkflowSubmitStatue.SUCCESS;
} catch (Exception e) {
logger.error("Start workflow error", e);
return WorkflowSubmitStatue.FAILED;
} finally {
LoggerUtils.removeWorkflowInstanceIdMDC();
}
}
/**
* process end handle
*/
public void endProcess() {
this.stateEvents.clear();
if (processDefinition.getExecutionType().typeIsSerialWait() || processDefinition.getExecutionType()
.typeIsSerialPriority()) {
checkSerialProcess(processDefinition);
}
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
processAlertManager.sendAlertProcessInstance(processInstance, getValidTaskList(), projectUser);
if (processInstance.getState().isSuccess()) {
processAlertManager.closeAlert(processInstance);
}
if (checkTaskQueue()) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
processService.releaseAllTaskGroup(processInstance.getId());
}
}
public void checkSerialProcess(ProcessDefinition processDefinition) {
int nextInstanceId = processInstance.getNextProcessInstanceId();
if (nextInstanceId == 0) {
ProcessInstance nextProcessInstance =
this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(),
WorkflowExecutionStatus.SERIAL_WAIT.getCode(), processInstance.getId());
if (nextProcessInstance == null) {
return;
}
ProcessInstance nextReadyStopProcessInstance =
this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(),
WorkflowExecutionStatus.READY_STOP.getCode(), processInstance.getId());
if (processDefinition.getExecutionType().typeIsSerialPriority() && nextReadyStopProcessInstance != null) {
return;
}
nextInstanceId = nextProcessInstance.getId();
}
ProcessInstance nextProcessInstance = this.processService.findProcessInstanceById(nextInstanceId);
if (nextProcessInstance.getState().isFinished() || nextProcessInstance.getState().isRunning()) {
return;
}
Map<String, Object> cmdParam = new HashMap<>();
cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, nextInstanceId);
Command command = new Command();
command.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command.setProcessInstanceId(nextProcessInstance.getId());
command.setProcessDefinitionCode(processDefinition.getCode());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
command.setProcessDefinitionVersion(processDefinition.getVersion());
command.setCommandParam(JSONUtils.toJsonString(cmdParam));
processService.createCommand(command);
}
/**
* Generate process dag
*
* @throws Exception exception
*/
private void buildFlowDag() throws Exception {
processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
processInstance.setProcessDefinition(processDefinition);
List<TaskInstance> recoverNodeList = getRecoverTaskInstanceList(processInstance.getCommandParam());
List<ProcessTaskRelation> processTaskRelations =
processService.findRelationByCode(processDefinition.getCode(), processDefinition.getVersion());
List<TaskDefinitionLog> taskDefinitionLogs =
processService.getTaskDefineLogListByRelation(processTaskRelations);
List<TaskNode> taskNodeList = processService.transformTask(processTaskRelations, taskDefinitionLogs);
forbiddenTaskMap.clear();
taskNodeList.forEach(taskNode -> {
if (taskNode.isForbidden()) {
forbiddenTaskMap.put(taskNode.getCode(), taskNode);
}
});
List<String> recoveryNodeCodeList = getRecoveryNodeCodeList(recoverNodeList);
List<String> startNodeNameList = parseStartNodeName(processInstance.getCommandParam());
ProcessDag processDag = generateFlowDag(taskNodeList, startNodeNameList, recoveryNodeCodeList,
processInstance.getTaskDependType());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
if (processDag == null) {
logger.error("ProcessDag is null");
return;
}
dag = DagHelper.buildDagGraph(processDag);
logger.info("Build dag success, dag: {}", dag);
}
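buildFlowDag above collects forbidden task nodes into a lookup map before the DAG is assembled, so later state decisions can cheaply ask whether a given code is forbidden. A toy version of that bookkeeping, with a hypothetical TaskSpec type and invented codes:

import java.util.*;

// Hypothetical task specification used only for this sketch.
class TaskSpec {
    final long code;
    final boolean forbidden;
    TaskSpec(long code, boolean forbidden) { this.code = code; this.forbidden = forbidden; }
}

public class ForbiddenMapSketch {
    public static void main(String[] args) {
        List<TaskSpec> nodes = List.of(
                new TaskSpec(1001L, false),   // runnable first node
                new TaskSpec(1002L, true));   // forbidden last node, as in this issue

        // Collect forbidden nodes into a map before the DAG is built, so later
        // checks (e.g. "is everything after the failed node forbidden?") are O(1).
        Map<Long, TaskSpec> forbiddenTaskMap = new HashMap<>();
        for (TaskSpec node : nodes) {
            if (node.forbidden) {
                forbiddenTaskMap.put(node.code, node);
            }
        }
        System.out.println("forbidden codes: " + forbiddenTaskMap.keySet());
    }
}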
/**
* init task queue
*/
private void initTaskQueue() throws StateEventHandleException, CronParseException {
taskFailedSubmit = false;
activeTaskProcessorMaps.clear();
dependFailedTaskSet.clear();
completeTaskMap.clear();
errorTaskMap.clear();
if (!isNewProcessInstance()) {
logger.info("The workflowInstance is not a newly running instance, runtimes: {}, recover flag: {}",
processInstance.getRunTimes(),
processInstance.getRecovery());
List<TaskInstance> validTaskInstanceList =
processService.findValidTaskListByProcessId(processInstance.getId(), processInstance.getTestFlag());
for (TaskInstance task : validTaskInstanceList) {
try {
LoggerUtils.setWorkflowAndTaskInstanceIDMDC(task.getProcessInstanceId(), task.getId());
logger.info(
"Check the taskInstance from a exist workflowInstance, existTaskInstanceCode: {}, taskInstanceStatus: {}",
task.getTaskCode(),
task.getState());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
if (validTaskMap.containsKey(task.getTaskCode())) {
logger.warn("Have same taskCode taskInstance when init task queue, need to check taskExecutionStatus, taskCode:{}",
task.getTaskCode());
int oldTaskInstanceId = validTaskMap.get(task.getTaskCode());
TaskInstance oldTaskInstance = taskInstanceMap.get(oldTaskInstanceId);
if (!oldTaskInstance.getState().isFinished() && task.getState().isFinished()) {
task.setFlag(Flag.NO);
processService.updateTaskInstance(task);
continue;
}
}
validTaskMap.put(task.getTaskCode(), task.getId());
taskInstanceMap.put(task.getId(), task);
if (task.isTaskComplete()) {
logger.info("TaskInstance is already complete.");
completeTaskMap.put(task.getTaskCode(), task.getId());
continue;
}
if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()),
dag)) {
continue;
}
if (task.taskCanRetry()) {
if (task.getState().isNeedFaultTolerance()) {
logger.info("TaskInstance needs fault tolerance, will be added to standby list.");
task.setFlag(Flag.NO);
processService.updateTaskInstance(task);
TaskInstance tolerantTaskInstance = cloneTolerantTaskInstance(task);
addTaskToStandByList(tolerantTaskInstance);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
} else {
logger.info("Retry taskInstance, taskState: {}", task.getState());
retryTaskInstance(task);
}
continue;
}
if (task.getState().isFailure()) {
errorTaskMap.put(task.getTaskCode(), task.getId());
}
} finally {
LoggerUtils.removeWorkflowAndTaskInstanceIdMDC();
}
}
} else {
logger.info("The current workflowInstance is a newly running workflowInstance");
}
if (processInstance.isComplementData() && complementListDate.isEmpty()) {
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam != null) {
setGlobalParamIfCommanded(processDefinition, cmdParam);
Date start = null;
Date end = null;
if (cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)
&& cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_END_DATE)) {
start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
}
if (complementListDate.isEmpty() && needComplementProcess()) {
if (start != null && end != null) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(
processInstance.getProcessDefinitionCode());
complementListDate = CronUtils.getSelfFireDateList(start, end, schedules);
}
if (cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST)) {
complementListDate = CronUtils.getSelfScheduleDateList(cmdParam);
}
logger.info(" process definition code:{} complement data: {}",
processInstance.getProcessDefinitionCode(), complementListDate);
if (!complementListDate.isEmpty() && Flag.NO == processInstance.getIsSubProcess()) {
processInstance.setScheduleTime(complementListDate.get(0));
String globalParams = curingParamsService.curingGlobalParams(processInstance.getId(),
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA,
processInstance.getScheduleTime(),
cmdParam.get(Constants.SCHEDULE_TIMEZONE));
processInstance.setGlobalParams(globalParams);
processInstanceDao.updateProcessInstance(processInstance);
}
}
}
}
logger.info("Initialize task queue, dependFailedTaskSet: {}, completeTaskMap: {}, errorTaskMap: {}",
dependFailedTaskSet,
completeTaskMap,
errorTaskMap);
}
/**
* submit task to execute
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
*
* @param taskInstance task instance
* @return TaskInstance
*/
private Optional<TaskInstance> submitTaskExec(TaskInstance taskInstance) {
try {
processService.packageTaskInstance(taskInstance, processInstance);
ITaskProcessor taskProcessor = TaskProcessorFactory.getTaskProcessor(taskInstance.getTaskType());
taskProcessor.init(taskInstance, processInstance);
if (taskInstance.getState().isRunning()
&& taskProcessor.getType().equalsIgnoreCase(Constants.COMMON_TASK_TYPE)) {
notifyProcessHostUpdate(taskInstance);
}
boolean submit = taskProcessor.action(TaskAction.SUBMIT);
if (!submit) {
logger.error("Submit standby task failed!, taskCode: {}, taskName: {}",
taskInstance.getTaskCode(),
taskInstance.getName());
return Optional.empty();
}
LoggerUtils.setWorkflowAndTaskInstanceIDMDC(taskInstance.getProcessInstanceId(), taskInstance.getId());
if (validTaskMap.containsKey(taskInstance.getTaskCode())) {
int oldTaskInstanceId = validTaskMap.get(taskInstance.getTaskCode());
if (taskInstance.getId() != oldTaskInstanceId) {
TaskInstance oldTaskInstance = taskInstanceMap.get(oldTaskInstanceId);
oldTaskInstance.setFlag(Flag.NO);
processService.updateTaskInstance(oldTaskInstance);
validTaskMap.remove(taskInstance.getTaskCode());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
activeTaskProcessorMaps.remove(taskInstance.getTaskCode());
}
}
validTaskMap.put(taskInstance.getTaskCode(), taskInstance.getId());
taskInstanceMap.put(taskInstance.getId(), taskInstance);
activeTaskProcessorMaps.put(taskInstance.getTaskCode(), taskProcessor);
int taskGroupId = taskInstance.getTaskGroupId();
if (taskGroupId > 0) {
boolean acquireTaskGroup = processService.acquireTaskGroup(taskInstance.getId(),
taskInstance.getName(),
taskGroupId,
taskInstance.getProcessInstanceId(),
taskInstance.getTaskGroupPriority());
if (!acquireTaskGroup) {
logger.info("Submitted task will not be dispatch right now because the first time to try to acquire" +
" task group failed, taskInstanceName: {}, taskGroupId: {}",
taskInstance.getName(), taskGroupId);
return Optional.of(taskInstance);
}
}
boolean dispatchSuccess = taskProcessor.action(TaskAction.DISPATCH);
if (!dispatchSuccess) {
logger.error("Dispatch standby process {} task {} failed", processInstance.getName(), taskInstance.getName());
return Optional.empty();
}
taskProcessor.action(TaskAction.RUN);
stateWheelExecuteThread.addTask4TimeoutCheck(processInstance, taskInstance);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
stateWheelExecuteThread.addTask4StateCheck(processInstance, taskInstance);
if (taskProcessor.taskInstance().getState().isFinished()) {
if (processInstance.isBlocked()) {
TaskStateEvent processBlockEvent = TaskStateEvent.builder()
.processInstanceId(processInstance.getId())
.taskInstanceId(taskInstance.getId())
.status(taskProcessor.taskInstance().getState())
.type(StateEventType.PROCESS_BLOCKED)
.build();
this.stateEvents.add(processBlockEvent);
}
TaskStateEvent taskStateChangeEvent = TaskStateEvent.builder()
.processInstanceId(processInstance.getId())
.taskInstanceId(taskInstance.getId())
.status(taskProcessor.taskInstance().getState())
.type(StateEventType.TASK_STATE_CHANGE)
.build();
this.stateEvents.add(taskStateChangeEvent);
}
return Optional.of(taskInstance);
} catch (Exception e) {
logger.error("Submit standby task {} error, taskCode: {}", taskInstance.getName(),
taskInstance.getTaskCode(), e);
return Optional.empty();
} finally {
LoggerUtils.removeWorkflowAndTaskInstanceIdMDC();
}
}
private void notifyProcessHostUpdate(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getHost())) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return;
}
try {
HostUpdateCommand hostUpdateCommand = new HostUpdateCommand();
hostUpdateCommand.setProcessHost(masterAddress);
hostUpdateCommand.setTaskInstanceId(taskInstance.getId());
Host host = new Host(taskInstance.getHost());
nettyExecutorManager.doExecute(host, hostUpdateCommand.convert2Command());
} catch (Exception e) {
logger.error("notify process host update", e);
}
}
/**
* find task instance in db.
* in case submit more than one same name task in the same time.
*
* @param taskCode task code
* @param taskVersion task version
* @return TaskInstance
*/
private TaskInstance findTaskIfExists(Long taskCode, int taskVersion) {
List<TaskInstance> validTaskInstanceList = getValidTaskList();
for (TaskInstance taskInstance : validTaskInstanceList) {
if (taskInstance.getTaskCode() == taskCode && taskInstance.getTaskDefinitionVersion() == taskVersion) {
return taskInstance;
}
}
return null;
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
/**
* encapsulation task, this method will only create a new task instance, the return task instance will not contain id.
*
* @param processInstance process instance
* @param taskNode taskNode
* @return TaskInstance
*/
private TaskInstance createTaskInstance(ProcessInstance processInstance, TaskNode taskNode) {
TaskInstance taskInstance = findTaskIfExists(taskNode.getCode(), taskNode.getVersion());
if (taskInstance != null) {
return taskInstance;
}
return newTaskInstance(processInstance, taskNode);
}
/**
* clone a new taskInstance for retry and reset some logic fields
*
* @return
*/
public TaskInstance cloneRetryTaskInstance(TaskInstance taskInstance) {
TaskNode taskNode = dag.getNode(Long.toString(taskInstance.getTaskCode()));
if (taskNode == null) {
logger.error("Clone retry taskInstance error because taskNode is null, taskCode:{}",
taskInstance.getTaskCode());
return null;
}
TaskInstance newTaskInstance = newTaskInstance(processInstance, taskNode);
newTaskInstance.setTaskDefine(taskInstance.getTaskDefine());
newTaskInstance.setProcessDefine(taskInstance.getProcessDefine());
newTaskInstance.setProcessInstance(processInstance);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
newTaskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1);
newTaskInstance.setState(taskInstance.getState());
newTaskInstance.setEndTime(taskInstance.getEndTime());
if (taskInstance.getState() == TaskExecutionStatus.NEED_FAULT_TOLERANCE) {
newTaskInstance.setAppLink(taskInstance.getAppLink());
}
return newTaskInstance;
}
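cloneRetryTaskInstance above bumps the retry counter and carries the application link over only when the previous attempt needs fault tolerance, while cloneTolerantTaskInstance in the next chunk keeps the counter unchanged and always keeps the link. A compact sketch of that difference, using a made-up Attempt record rather than the real TaskInstance:

// Hypothetical attempt record; only the fields needed for the comparison.
public class CloneSketch {

    static class Attempt {
        int retryTimes;
        boolean needFaultTolerance;
        String appLink;
    }

    // Retry clone: one more attempt, keep the app link only when the previous
    // attempt is being fault-tolerated (so the still-running job can be tracked).
    static Attempt cloneForRetry(Attempt old) {
        Attempt next = new Attempt();
        next.retryTimes = old.retryTimes + 1;
        next.appLink = old.needFaultTolerance ? old.appLink : null;
        return next;
    }

    // Tolerant clone: same attempt count, always keep the app link.
    static Attempt cloneForTolerance(Attempt old) {
        Attempt next = new Attempt();
        next.retryTimes = old.retryTimes;
        next.appLink = old.appLink;
        return next;
    }

    public static void main(String[] args) {
        Attempt old = new Attempt();
        old.retryTimes = 1;
        old.needFaultTolerance = false;
        old.appLink = "application_123";
        System.out.println(cloneForRetry(old).retryTimes);      // 2
        System.out.println(cloneForTolerance(old).retryTimes);  // 1
    }
}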
/**
* clone a new taskInstance for tolerant and reset some logic fields
*
* @return
*/
public TaskInstance cloneTolerantTaskInstance(TaskInstance taskInstance) {
TaskNode taskNode = dag.getNode(Long.toString(taskInstance.getTaskCode()));
if (taskNode == null) {
logger.error("Clone tolerant taskInstance error because taskNode is null, taskCode:{}",
taskInstance.getTaskCode());
return null;
}
TaskInstance newTaskInstance = newTaskInstance(processInstance, taskNode);
newTaskInstance.setTaskDefine(taskInstance.getTaskDefine());
newTaskInstance.setProcessDefine(taskInstance.getProcessDefine());
newTaskInstance.setProcessInstance(processInstance);
newTaskInstance.setRetryTimes(taskInstance.getRetryTimes());
newTaskInstance.setState(taskInstance.getState());
newTaskInstance.setAppLink(taskInstance.getAppLink());
return newTaskInstance;
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node ends in FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
/**
* new a taskInstance
*
* @param processInstance
* @param taskNode
* @return
*/
public TaskInstance newTaskInstance(ProcessInstance processInstance, TaskNode taskNode) {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setTaskCode(taskNode.getCode());
taskInstance.setTaskDefinitionVersion(taskNode.getVersion());
taskInstance.setName(taskNode.getName());
taskInstance.setState(TaskExecutionStatus.SUBMITTED_SUCCESS);
taskInstance.setProcessInstanceId(processInstance.getId());
taskInstance.setTaskType(taskNode.getType().toUpperCase());
taskInstance.setAlertFlag(Flag.NO);
taskInstance.setStartTime(null);
taskInstance.setTestFlag(processInstance.getTestFlag());
taskInstance.setFlag(Flag.YES);
taskInstance.setDryRun(processInstance.getDryRun());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
taskInstance.setRetryTimes(0);
taskInstance.setMaxRetryTimes(taskNode.getMaxRetryTimes());
taskInstance.setRetryInterval(taskNode.getRetryInterval());
taskInstance.setTaskParams(taskNode.getTaskParams());
taskInstance.setTaskGroupId(taskNode.getTaskGroupId());
taskInstance.setTaskGroupPriority(taskNode.getTaskGroupPriority());
taskInstance.setCpuQuota(taskNode.getCpuQuota());
taskInstance.setMemoryMax(taskNode.getMemoryMax());
if (taskNode.getTaskInstancePriority() == null) {
taskInstance.setTaskInstancePriority(Priority.MEDIUM);
} else {
taskInstance.setTaskInstancePriority(taskNode.getTaskInstancePriority());
}
String processWorkerGroup = processInstance.getWorkerGroup();
processWorkerGroup = StringUtils.isBlank(processWorkerGroup) ? DEFAULT_WORKER_GROUP : processWorkerGroup;
String taskWorkerGroup =
StringUtils.isBlank(taskNode.getWorkerGroup()) ? processWorkerGroup : taskNode.getWorkerGroup();
Long processEnvironmentCode =
Objects.isNull(processInstance.getEnvironmentCode()) ? -1 : processInstance.getEnvironmentCode();
Long taskEnvironmentCode =
Objects.isNull(taskNode.getEnvironmentCode()) ? processEnvironmentCode : taskNode.getEnvironmentCode();
if (!processWorkerGroup.equals(DEFAULT_WORKER_GROUP) && taskWorkerGroup.equals(DEFAULT_WORKER_GROUP)) {
taskInstance.setWorkerGroup(processWorkerGroup);
taskInstance.setEnvironmentCode(processEnvironmentCode);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
} else {
taskInstance.setWorkerGroup(taskWorkerGroup);
taskInstance.setEnvironmentCode(taskEnvironmentCode);
}
if (!taskInstance.getEnvironmentCode().equals(-1L)) {
Environment environment = processService.findEnvironmentByCode(taskInstance.getEnvironmentCode());
if (Objects.nonNull(environment) && StringUtils.isNotEmpty(environment.getConfig())) {
taskInstance.setEnvironmentConfig(environment.getConfig());
}
}
taskInstance.setDelayTime(taskNode.getDelayTime());
taskInstance.setTaskExecuteType(taskNode.getTaskExecuteType());
return taskInstance;
}
public void getPreVarPool(TaskInstance taskInstance, Set<String> preTask) {
Map<String, Property> allProperty = new HashMap<>();
Map<String, TaskInstance> allTaskInstance = new HashMap<>();
if (CollectionUtils.isNotEmpty(preTask)) {
for (String preTaskCode : preTask) {
Integer taskId = completeTaskMap.get(Long.parseLong(preTaskCode));
if (taskId == null) {
continue;
}
TaskInstance preTaskInstance = taskInstanceMap.get(taskId);
if (preTaskInstance == null) {
continue;
}
String preVarPool = preTaskInstance.getVarPool();
if (StringUtils.isNotEmpty(preVarPool)) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
List<Property> properties = JSONUtils.toList(preVarPool, Property.class);
for (Property info : properties) {
setVarPoolValue(allProperty, allTaskInstance, preTaskInstance, info);
}
}
}
if (allProperty.size() > 0) {
taskInstance.setVarPool(JSONUtils.toJsonString(allProperty.values()));
}
} else {
if (StringUtils.isNotEmpty(processInstance.getVarPool())) {
taskInstance.setVarPool(processInstance.getVarPool());
}
}
}
public Collection<TaskInstance> getAllTaskInstances() {
return taskInstanceMap.values();
}
private void setVarPoolValue(Map<String, Property> allProperty, Map<String, TaskInstance> allTaskInstance,
TaskInstance preTaskInstance, Property thisProperty) {
thisProperty.setDirect(Direct.IN);
String proName = thisProperty.getProp();
if (allProperty.containsKey(proName)) {
Property otherPro = allProperty.get(proName);
if (StringUtils.isEmpty(thisProperty.getValue())) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
allProperty.put(proName, otherPro);
} else if (StringUtils.isNotEmpty(otherPro.getValue())) {
TaskInstance otherTask = allTaskInstance.get(proName);
if (otherTask.getEndTime().getTime() > preTaskInstance.getEndTime().getTime()) {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
} else {
allProperty.put(proName, otherPro);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
}
/**
* get complete task instance map, taskCode as key
*/
private Map<String, TaskInstance> getCompleteTaskInstanceMap() {
Map<String, TaskInstance> completeTaskInstanceMap = new HashMap<>();
for (Map.Entry<Long, Integer> entry : completeTaskMap.entrySet()) {
Long taskConde = entry.getKey();
Integer taskInstanceId = entry.getValue();
TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId);
if (taskInstance == null) {
logger.warn("Cannot find the taskInstance from taskInstanceMap, taskInstanceId: {}, taskConde: {}",
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
taskInstanceId,
taskConde);
continue;
}
completeTaskInstanceMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance);
}
return completeTaskInstanceMap;
}
/**
* get valid task list
*/
private List<TaskInstance> getValidTaskList() {
List<TaskInstance> validTaskInstanceList = new ArrayList<>();
for (Integer taskInstanceId : validTaskMap.values()) {
validTaskInstanceList.add(taskInstanceMap.get(taskInstanceId));
}
return validTaskInstanceList;
}
private void submitPostNode(String parentNodeCode) throws StateEventHandleException {
Set<String> submitTaskNodeList =
DagHelper.parsePostNodes(parentNodeCode, skipTaskNodeMap, dag, getCompleteTaskInstanceMap());
List<TaskInstance> taskInstances = new ArrayList<>();
for (String taskNode : submitTaskNodeList) {
TaskNode taskNodeObject = dag.getNode(taskNode);
Optional<TaskInstance> existTaskInstanceOptional = getTaskInstance(taskNodeObject.getCode());
if (existTaskInstanceOptional.isPresent()) {
taskInstances.add(existTaskInstanceOptional.get());
continue;
}
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
TaskInstance task = createTaskInstance(processInstance, taskNodeObject);
taskInstances.add(task);
}
if (StringUtils.isNotEmpty(parentNodeCode) && dag.getEndNode().contains(parentNodeCode)) {
TaskInstance endTaskInstance = taskInstanceMap.get(completeTaskMap.get(NumberUtils.toLong(parentNodeCode)));
String taskInstanceVarPool = endTaskInstance.getVarPool();
if (StringUtils.isNotEmpty(taskInstanceVarPool)) {
Set<Property> taskProperties = new HashSet<>(JSONUtils.toList(taskInstanceVarPool, Property.class));
String processInstanceVarPool = processInstance.getVarPool();
if (StringUtils.isNotEmpty(processInstanceVarPool)) {
Set<Property> properties = new HashSet<>(JSONUtils.toList(processInstanceVarPool, Property.class));
properties.addAll(taskProperties);
processInstance.setVarPool(JSONUtils.toJsonString(properties));
} else {
processInstance.setVarPool(JSONUtils.toJsonString(taskProperties));
}
}
}
for (TaskInstance task : taskInstances) {
if (readyToSubmitTaskQueue.contains(task)) {
logger.warn("Task is already at submit queue, taskInstanceId: {}", task.getId());
continue;
}
if (task.getId() != null && completeTaskMap.containsKey(task.getTaskCode())) {
logger.info("Task has already run success, taskName: {}", task.getName());
continue;
}
if (task.getState().isKill()) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
logger.info("Task is be stopped, the state is {}, taskInstanceId: {}", task.getState(), task.getId());
continue;
}
addTaskToStandByList(task);
}
submitStandByTask();
updateProcessInstanceState();
}
/**
* determine whether the dependencies of the task node are complete
*
* @return DependResult
*/
private DependResult isTaskDepsComplete(String taskCode) {
Collection<String> startNodes = dag.getBeginNode();
if (startNodes.contains(taskCode)) {
return DependResult.SUCCESS;
}
TaskNode taskNode = dag.getNode(taskCode);
List<String> indirectDepCodeList = new ArrayList<>();
setIndirectDepList(taskCode, indirectDepCodeList);
for (String depsNode : indirectDepCodeList) {
if (dag.containsNode(depsNode) && !skipTaskNodeMap.containsKey(depsNode)) {
Long despNodeTaskCode = Long.parseLong(depsNode);
if (!completeTaskMap.containsKey(despNodeTaskCode)) {
return DependResult.WAITING;
}
Integer depsTaskId = completeTaskMap.get(despNodeTaskCode);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
TaskExecutionStatus depTaskState = taskInstanceMap.get(depsTaskId).getState();
if (depTaskState.isKill()) {
return DependResult.NON_EXEC;
}
if (taskNode.isBlockingTask()) {
continue;
}
if (taskNode.isConditionsTask()) {
continue;
}
if (!dependTaskSuccess(depsNode, taskCode)) {
return DependResult.FAILED;
}
}
}
logger.info("The dependTasks of task all success, currentTaskCode: {}, dependTaskCodes: {}",
taskCode, Arrays.toString(completeTaskMap.keySet().toArray()));
return DependResult.SUCCESS;
}
/**
* This function is specially used to handle the dependency situation where the parent node is a prohibited node.
* When the parent node is a forbidden node, the dependency relationship should continue to be traced
*
* @param taskCode taskCode
* @param indirectDepCodeList All indirectly dependent nodes
*/
private void setIndirectDepList(String taskCode, List<String> indirectDepCodeList) {
TaskNode taskNode = dag.getNode(taskCode);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
List<String> depCodeList = taskNode.getDepList();
for (String depsNode : depCodeList) {
if (forbiddenTaskMap.containsKey(Long.parseLong(depsNode))) {
setIndirectDepList(depsNode, indirectDepCodeList);
} else {
indirectDepCodeList.add(depsNode);
}
}
}
/**
* depend node is completed, but here need check the condition task branch is the next node
*/
private boolean dependTaskSuccess(String dependNodeName, String nextNodeName) {
if (dag.getNode(dependNodeName).isConditionsTask()) {
List<String> nextTaskList =
DagHelper.parseConditionTask(dependNodeName, skipTaskNodeMap, dag, getCompleteTaskInstanceMap());
if (!nextTaskList.contains(nextNodeName)) {
logger.info("DependTask is a condition task, and its next condition branch does not hava current task, " +
"dependTaskCode: {}, currentTaskCode: {}", dependNodeName, nextNodeName
);
return false;
}
} else {
long taskCode = Long.parseLong(dependNodeName);
Integer taskInstanceId = completeTaskMap.get(taskCode);
TaskExecutionStatus depTaskState = taskInstanceMap.get(taskInstanceId).getState();
if (depTaskState.isFailure()) {
return false;
}
|
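The setIndirectDepList logic in the chunk above is the heart of the fix for this issue: when a direct parent is forbidden, the traversal keeps walking upward so that the forbidden node's own parents become the effective dependencies, and a failure there still counts. A minimal, self-contained sketch of that traversal idea, using a hypothetical ForbiddenAwareDag stand-in rather than the project's real DAG classes, might look like this:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy DAG where some nodes are marked forbidden (never executed at runtime).
class ForbiddenAwareDag {
    private final Map<String, List<String>> parents = new HashMap<>();
    private final Map<String, Boolean> forbidden = new HashMap<>();

    void addNode(String code, boolean isForbidden, List<String> parentCodes) {
        parents.put(code, parentCodes);
        forbidden.put(code, isForbidden);
    }

    // Collect the effective dependencies of a node: forbidden parents are
    // treated as transparent, so we recurse through them to their parents.
    void collectIndirectDeps(String code, List<String> result) {
        for (String parent : parents.getOrDefault(code, new ArrayList<>())) {
            if (Boolean.TRUE.equals(forbidden.get(parent))) {
                collectIndirectDeps(parent, result);
            } else {
                result.add(parent);
            }
        }
    }

    public static void main(String[] args) {
        ForbiddenAwareDag dag = new ForbiddenAwareDag();
        dag.addNode("A", false, new ArrayList<>()); // runnable node that may fail
        dag.addNode("B", true, List.of("A"));       // forbidden node
        dag.addNode("C", false, List.of("B"));      // downstream node
        List<String> deps = new ArrayList<>();
        dag.collectIndirectDeps("C", deps);
        System.out.println(deps);                   // prints [A]: A's result still matters
    }
}
```

This is only an illustration of the recursion through forbidden parents; the real implementation works on task codes and the forbiddenTaskMap shown above.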
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
}
return true;
}
/**
* query task instance by complete state
*
* @param state state
* @return task instance list
*/
private List<TaskInstance> getCompleteTaskByState(TaskExecutionStatus state) {
List<TaskInstance> resultList = new ArrayList<>();
for (Integer taskInstanceId : completeTaskMap.values()) {
TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId);
if (taskInstance != null && taskInstance.getState() == state) {
resultList.add(taskInstance);
}
}
return resultList;
}
/**
* where there are ongoing tasks
*
* @param state state
* @return ExecutionStatus
*/
private WorkflowExecutionStatus runningState(WorkflowExecutionStatus state) {
if (state == WorkflowExecutionStatus.READY_STOP || state == WorkflowExecutionStatus.READY_PAUSE
|| state == WorkflowExecutionStatus.READY_BLOCK ||
state == WorkflowExecutionStatus.DELAY_EXECUTION) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return state;
} else {
return WorkflowExecutionStatus.RUNNING_EXECUTION;
}
}
/**
* exists failure task,contains submit failure、dependency failure,execute failure(retry after)
*
* @return Boolean whether has failed task
*/
private boolean hasFailedTask() {
if (this.taskFailedSubmit) {
return true;
}
if (this.errorTaskMap.size() > 0) {
return true;
}
return this.dependFailedTaskSet.size() > 0;
}
/**
* process instance failure
*
* @return Boolean whether process instance failed
*/
private boolean processFailed() {
if (hasFailedTask()) {
logger.info("The current process has failed task, the current process failed");
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
return true;
}
|
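The hasFailedTask/processFailed pair shown above gates the final workflow state on the configured failure strategy: END fails the workflow as soon as any task has failed, while CONTINUE only fails it once nothing is left to submit, run, or retry. A compact sketch of that decision rule, with enum and parameter names chosen for illustration rather than taken from the project:

```java
import java.util.Set;

public class FailurePolicySketch {

    enum FailureStrategy { END, CONTINUE }

    // Returns true when the workflow should be marked FAILURE.
    static boolean processFailed(FailureStrategy strategy,
                                 Set<Long> errorTaskCodes,
                                 int standByQueueSize,
                                 int activeTasks,
                                 int waitToRetryTasks) {
        boolean hasFailedTask = !errorTaskCodes.isEmpty();
        if (!hasFailedTask) {
            return false;
        }
        if (strategy == FailureStrategy.END) {
            return true; // fail fast on the first failed task
        }
        // CONTINUE: only fail once every remaining branch has drained
        return standByQueueSize == 0 && activeTasks == 0 && waitToRetryTasks == 0;
    }

    public static void main(String[] args) {
        // One failed task and nothing left to run: CONTINUE also ends in FAILURE.
        System.out.println(processFailed(FailureStrategy.CONTINUE, Set.of(1L), 0, 0, 0)); // true
        // One failed task but another branch still running: keep going for now.
        System.out.println(processFailed(FailureStrategy.CONTINUE, Set.of(1L), 0, 1, 0)); // false
    }
}
```

Under either strategy, the key point for this bug is that a failed first node must surface as FAILURE even when its only successor is forbidden and therefore never runs.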
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) {
return readyToSubmitTaskQueue.size() == 0 && activeTaskProcessorMaps.size() == 0
&& waitToRetryTaskInstanceMap.size() == 0;
}
}
return false;
}
/**
* prepare for pause
* 1,failed retry task in the preparation queue , returns to failure directly
* 2,exists pause task,complement not completed, pending submission of tasks, return to suspension
* 3,success
*
* @return ExecutionStatus
*/
private WorkflowExecutionStatus processReadyPause() {
if (hasRetryTaskInStandBy()) {
return WorkflowExecutionStatus.FAILURE;
}
List<TaskInstance> pauseList = getCompleteTaskByState(TaskExecutionStatus.PAUSE);
if (CollectionUtils.isNotEmpty(pauseList) || processInstance.isBlocked() || !isComplementEnd()
|| readyToSubmitTaskQueue.size() > 0) {
return WorkflowExecutionStatus.PAUSE;
} else {
return WorkflowExecutionStatus.SUCCESS;
}
}
/**
* prepare for block
* if process has tasks still running, pause them
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
* if readyToSubmitTaskQueue is not empty, kill them
* else return block status directly
*
* @return ExecutionStatus
*/
private WorkflowExecutionStatus processReadyBlock() {
if (activeTaskProcessorMaps.size() > 0) {
for (ITaskProcessor taskProcessor : activeTaskProcessorMaps.values()) {
if (!TASK_TYPE_BLOCKING.equals(taskProcessor.getType())) {
taskProcessor.action(TaskAction.PAUSE);
}
}
}
if (readyToSubmitTaskQueue.size() > 0) {
for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext();) {
iter.next().setState(TaskExecutionStatus.PAUSE);
}
}
return WorkflowExecutionStatus.BLOCK;
}
/**
* generate the latest process instance status by the tasks state
*
* @return process instance execution status
*/
private WorkflowExecutionStatus getProcessInstanceState(ProcessInstance instance) {
WorkflowExecutionStatus state = instance.getState();
if (activeTaskProcessorMaps.size() > 0 || hasRetryTaskInStandBy()) {
// active
WorkflowExecutionStatus executionStatus = runningState(state);
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
logger.info("The workflowInstance has task running, the workflowInstance status is {}", executionStatus);
return executionStatus;
}
// block
if (state == WorkflowExecutionStatus.READY_BLOCK) {
WorkflowExecutionStatus executionStatus = processReadyBlock();
logger.info("The workflowInstance is ready to block, the workflowInstance status is {}", executionStatus);
return executionStatus;
}
// pause
if (state == WorkflowExecutionStatus.READY_PAUSE) {
WorkflowExecutionStatus executionStatus = processReadyPause();
logger.info("The workflowInstance is ready to pause, the workflow status is {}", executionStatus);
return executionStatus;
}
// stop
if (state == WorkflowExecutionStatus.READY_STOP) {
List<TaskInstance> killList = getCompleteTaskByState(TaskExecutionStatus.KILL);
List<TaskInstance> failList = getCompleteTaskByState(TaskExecutionStatus.FAILURE);
WorkflowExecutionStatus executionStatus;
if (CollectionUtils.isNotEmpty(killList) || CollectionUtils.isNotEmpty(failList) || !isComplementEnd()) {
executionStatus = WorkflowExecutionStatus.STOP;
} else {
executionStatus = WorkflowExecutionStatus.SUCCESS;
}
logger.info("The workflowInstance is ready to stop, the workflow status is {}", executionStatus);
return executionStatus;
}
// process failure
if (processFailed()) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
logger.info("The workflowInstance is failed, the workflow status is {}", WorkflowExecutionStatus.FAILURE);
return WorkflowExecutionStatus.FAILURE;
}
// success
if (state == WorkflowExecutionStatus.RUNNING_EXECUTION) {
List<TaskInstance> killTasks = getCompleteTaskByState(TaskExecutionStatus.KILL);
if (readyToSubmitTaskQueue.size() > 0 || waitToRetryTaskInstanceMap.size() > 0) {
// tasks c
return WorkflowExecutionStatus.RUNNING_EXECUTION;
} else if (CollectionUtils.isNotEmpty(killTasks)) {
// tasks m
return WorkflowExecutionStatus.FAILURE;
} else {
// if the
return WorkflowExecutionStatus.SUCCESS;
}
}
return state;
}
/**
* whether complement end
*
* @return Boolean whether is complement end
*/
private boolean isComplementEnd() {
if (!processInstance.isComplementData()) {
return true;
}
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
Date endTime = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return processInstance.getScheduleTime().equals(endTime);
}
/**
* updateProcessInstance process instance state
* after each batch of tasks is executed, the status of the process instance is updated
*/
private void updateProcessInstanceState() throws StateEventHandleException {
WorkflowExecutionStatus state = getProcessInstanceState(processInstance);
if (processInstance.getState() != state) {
logger.info("Update workflowInstance states, origin state: {}, target state: {}",
processInstance.getState(),
state);
updateWorkflowInstanceStatesToDB(state);
WorkflowStateEvent stateEvent = WorkflowStateEvent.builder()
.processInstanceId(processInstance.getId())
.status(processInstance.getState())
.type(StateEventType.PROCESS_STATE_CHANGE)
.build();
// replace
this.stateEvents.add(stateEvent);
} else {
logger.info("There is no need to update the workflow instance state, origin state: {}, target state: {}",
processInstance.getState(),
state);
}
}
/**
* stateEvent's execution status as process instance state
*/
public void updateProcessInstanceState(WorkflowStateEvent stateEvent) throws StateEventHandleException {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
WorkflowExecutionStatus state = stateEvent.getStatus();
updateWorkflowInstanceStatesToDB(state);
}
private void updateWorkflowInstanceStatesToDB(WorkflowExecutionStatus newStates) throws StateEventHandleException {
WorkflowExecutionStatus originStates = processInstance.getState();
if (originStates != newStates) {
logger.info("Begin to update workflow instance state , state will change from {} to {}",
originStates,
newStates);
processInstance.setStateWithDesc(newStates, "update by workflow executor");
if (newStates.isFinished()) {
processInstance.setEndTime(new Date());
}
try {
processInstanceDao.updateProcessInstance(processInstance);
} catch (Exception ex) {
// recover
processInstance.setStateWithDesc(originStates, "recover state by DB error");
processInstance.setEndTime(null);
throw new StateEventHandleException("Update process instance status to DB error", ex);
}
}
}
/**
* get task dependency result
*
* @param taskInstance task instance
* @return DependResult
*/
private DependResult getDependResultForTask(TaskInstance taskInstance) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return isTaskDepsComplete(Long.toString(taskInstance.getTaskCode()));
}
/**
* add task to standby list
*
* @param taskInstance task instance
*/
public void addTaskToStandByList(TaskInstance taskInstance) {
if (readyToSubmitTaskQueue.contains(taskInstance)) {
logger.warn("Task already exists in ready submit queue, no need to add again, task code:{}",
taskInstance.getTaskCode());
return;
}
logger.info("Add task to stand by list, task name:{}, task id:{}, task code:{}",
taskInstance.getName(),
taskInstance.getId(),
taskInstance.getTaskCode());
TaskMetrics.incTaskInstanceByState("submit");
readyToSubmitTaskQueue.put(taskInstance);
}
/**
* remove task from stand by list
*
* @param taskInstance task instance
*/
private boolean removeTaskFromStandbyList(TaskInstance taskInstance) {
return readyToSubmitTaskQueue.remove(taskInstance);
}
/**
* has retry task in standby
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
*
* @return Boolean whether has retry task in standby
*/
private boolean hasRetryTaskInStandBy() {
for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext();) {
if (iter.next().getState().isFailure()) {
return true;
}
}
return false;
}
/**
* close the on going tasks
*/
public void killAllTasks() {
logger.info("kill called on process instance id: {}, num: {}",
processInstance.getId(),
activeTaskProcessorMaps.size());
if (readyToSubmitTaskQueue.size() > 0) {
readyToSubmitTaskQueue.clear();
}
for (long taskCode : activeTaskProcessorMaps.keySet()) {
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskCode);
Integer taskInstanceId = validTaskMap.get(taskCode);
if (taskInstanceId == null || taskInstanceId.equals(0)) {
continue;
}
TaskInstance taskInstance = processService.findTaskInstanceById(taskInstanceId);
if (taskInstance == null || taskInstance.getState().isFinished()) {
continue;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
}
taskProcessor.action(TaskAction.STOP);
if (taskProcessor.taskInstance().getState().isFinished()) {
TaskStateEvent taskStateEvent = TaskStateEvent.builder()
.processInstanceId(processInstance.getId())
.taskInstanceId(taskInstance.getId())
.status(taskProcessor.taskInstance().getState())
.type(StateEventType.TASK_STATE_CHANGE)
.build();
this.addStateEvent(taskStateEvent);
}
}
}
public boolean workFlowFinish() {
return this.processInstance.getState().isFinished();
}
/**
* handling the list of tasks to be submitted
*/
public void submitStandByTask() throws StateEventHandleException {
int length = readyToSubmitTaskQueue.size();
for (int i = 0; i < length; i++) {
TaskInstance task = readyToSubmitTaskQueue.peek();
if (task == null) {
continue;
}
// stop ta
if (task.taskCanRetry()) {
TaskInstance retryTask = processService.findTaskInstanceById(task.getId());
if (retryTask != null && retryTask.getState().isForceSuccess()) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
task.setState(retryTask.getState());
logger.info("Task {} has been forced success, put it into complete task list and stop retrying, taskInstanceId: {}",
task.getName(), task.getId());
removeTaskFromStandbyList(task);
completeTaskMap.put(task.getTaskCode(), task.getId());
taskInstanceMap.put(task.getId(), task);
submitPostNode(Long.toString(task.getTaskCode()));
continue;
}
}
// init va
if (task.isFirstRun()) {
// get pre
Set<String> preTask = dag.getPreviousNodes(Long.toString(task.getTaskCode()));
getPreVarPool(task, preTask);
}
DependResult dependResult = getDependResultForTask(task);
if (DependResult.SUCCESS == dependResult) {
logger.info("The dependResult of task {} is success, so ready to submit to execute", task.getName());
Optional<TaskInstance> taskInstanceOptional = submitTaskExec(task);
if (!taskInstanceOptional.isPresent()) {
this.taskFailedSubmit = true;
// Remove
if (!removeTaskFromStandbyList(task)) {
logger.error(
"Task submit failed, remove from standby list failed, workflowInstanceId: {}, taskCode: {}",
processInstance.getId(),
task.getTaskCode());
}
completeTaskMap.put(task.getTaskCode(), task.getId());
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
taskInstanceMap.put(task.getId(), task);
errorTaskMap.put(task.getTaskCode(), task.getId());
activeTaskProcessorMaps.remove(task.getTaskCode());
logger.error("Task submitted failed, workflowInstanceId: {}, taskInstanceId: {}, taskCode: {}",
task.getProcessInstanceId(),
task.getId(),
task.getTaskCode());
} else {
removeTaskFromStandbyList(task);
}
} else if (DependResult.FAILED == dependResult) {
// if the
dependFailedTaskSet.add(task.getTaskCode());
removeTaskFromStandbyList(task);
logger.info("Task dependent result is failed, taskInstanceId:{} depend result : {}", task.getId(),
dependResult);
} else if (DependResult.NON_EXEC == dependResult) {
// for som
removeTaskFromStandbyList(task);
logger.info("Remove task due to depend result not executed, taskInstanceId:{} depend result : {}",
task.getId(), dependResult);
}
}
}
/**
* Get start task instance list from recover
*
* @param cmdParam command param
* @return task instance list
*/
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
protected List<TaskInstance> getRecoverTaskInstanceList(String cmdParam) {
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
// todo: Can we use a better way to set the recover taskInstanceId list? rather then use the cmdParam
if (paramMap != null && paramMap.containsKey(CMD_PARAM_RECOVERY_START_NODE_STRING)) {
List<Integer> startTaskInstanceIds = Arrays.stream(paramMap.get(CMD_PARAM_RECOVERY_START_NODE_STRING)
.split(COMMA))
.filter(StringUtils::isNotEmpty)
.map(Integer::valueOf)
.collect(Collectors.toList());
if (CollectionUtils.isNotEmpty(startTaskInstanceIds)) {
return processService.findTaskInstanceByIdList(startTaskInstanceIds);
}
}
return Collections.emptyList();
}
/**
* parse "StartNodeNameList" from cmd param
*
* @param cmdParam command param
* @return start node name list
*/
private List<String> parseStartNodeName(String cmdParam) {
List<String> startNodeNameList = new ArrayList<>();
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
if (paramMap == null) {
return startNodeNameList;
}
if (paramMap.containsKey(CMD_PARAM_START_NODES)) {
startNodeNameList = Arrays.asList(paramMap.get(CMD_PARAM_START_NODES).split(Constants.COMMA));
}
|
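getRecoverTaskInstanceList in the chunk above simply splits a comma-separated id string out of the command parameters and then looks the task instances up. The parsing step on its own can be sketched as plain string handling, with no DolphinScheduler APIs involved; the class and method names below are only for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RecoveryIdParsing {

    // Turn a "15,22,,31" style command-param value into a clean list of ids.
    static List<Integer> parseRecoveryIds(String raw) {
        if (raw == null || raw.isEmpty()) {
            return List.of();
        }
        return Arrays.stream(raw.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())   // drop empty segments from double commas
                .map(Integer::valueOf)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(parseRecoveryIds("15,22,,31")); // [15, 22, 31]
    }
}
```

The real method additionally resolves the ids through processService, but the filtering of empty segments mirrors the isNotEmpty filter in the source.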
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



for example, a workflow has two nodes and the last node is forbidden to run; the first node gets an error, but the whole workflow is successful.
### What you expected to happen
the whole workflow state is FAILURE
### How to reproduce
create a workflow with two nodes, set the first node to throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
return startNodeNameList;
}
/**
* generate start node code list from parsing command param;
* if "StartNodeIdList" exists in command param, return StartNodeIdList
*
* @return recovery node code list
*/
private List<String> getRecoveryNodeCodeList(List<TaskInstance> recoverNodeList) {
List<String> recoveryNodeCodeList = new ArrayList<>();
if (CollectionUtils.isNotEmpty(recoverNodeList)) {
for (TaskInstance task : recoverNodeList) {
recoveryNodeCodeList.add(Long.toString(task.getTaskCode()));
}
}
return recoveryNodeCodeList;
}
/**
* generate flow dag
*
* @param totalTaskNodeList total task node list
* @param startNodeNameList start node name list
* @param recoveryNodeCodeList recovery node code list
* @param depNodeType depend node type
* @return ProcessDag process dag
* @throws Exception exception
*/
public ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList,
List<String> startNodeNameList,
List<String> recoveryNodeCodeList,
|
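For the bug in this issue, the relevant question is how the final workflow state is derived when the trailing node is forbidden. The sketch below is purely illustrative and uses no DolphinScheduler types; it only restates the expectation from the report: a failed non-forbidden task should drive the workflow to FAILURE even though the forbidden last node never runs.

```java
import java.util.List;

// Illustrative only; these types do not exist in DolphinScheduler under these names.
class WorkflowStateSketch {
    enum State { SUCCESS, FAILURE }
    record Node(String name, boolean forbidden, boolean failed) {}

    // Expected derivation per the report: forbidden nodes are skipped,
    // but any failure among the nodes that actually ran must surface.
    static State deriveState(List<Node> nodes) {
        boolean anyFailure = nodes.stream()
                .filter(node -> !node.forbidden())
                .anyMatch(Node::failed);
        return anyFailure ? State.FAILURE : State.SUCCESS;
    }

    public static void main(String[] args) {
        List<Node> nodes = List.of(
                new Node("first-node", false, true),   // throws an exception
                new Node("last-node", true, false));   // forbidden, never runs
        System.out.println(deriveState(nodes));        // FAILURE, the state the reporter expects
    }
}
```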
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
TaskDependType depNodeType) throws Exception {
return DagHelper.generateFlowDag(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType);
}
/**
* check task queue
*/
private boolean checkTaskQueue() {
AtomicBoolean result = new AtomicBoolean(false);
taskInstanceMap.forEach((id, taskInstance) -> {
if (taskInstance != null && taskInstance.getTaskGroupId() > 0) {
result.set(true);
}
});
return result.get();
}
/**
* is new process instance
*/
private boolean isNewProcessInstance() {
if (Flag.YES.equals(processInstance.getRecovery())) {
logger.info("This workInstance will be recover by this execution");
return false;
}
if (WorkflowExecutionStatus.RUNNING_EXECUTION == processInstance.getState()
&& processInstance.getRunTimes() == 1) {
return true;
}
logger.info(
"The workflowInstance has been executed before, this execution is to reRun, processInstance status: {}, runTimes: {}",
processInstance.getState(),
|
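isNewProcessInstance() above encodes three cases: a recovered instance is never treated as new, a RUNNING_EXECUTION instance on its first run is new, and everything else is a re-run. A condensed restatement of that decision, with plain booleans standing in for the Flag and WorkflowExecutionStatus enums:

```java
// Condensed restatement of the isNewProcessInstance() decision shown above; illustrative only.
public class NewInstanceCheckSketch {
    static boolean isNewProcessInstance(boolean recovery, boolean runningExecution, int runTimes) {
        if (recovery) {
            return false;                              // a recovered instance resumes, it is not "new"
        }
        return runningExecution && runTimes == 1;      // first run of a running instance is new; re-runs are not
    }

    public static void main(String[] args) {
        System.out.println(isNewProcessInstance(false, true, 1)); // true  -> fresh start
        System.out.println(isNewProcessInstance(false, true, 2)); // false -> re-run
        System.out.println(isNewProcessInstance(true, true, 1));  // false -> recovery
    }
}
```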
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
processInstance.getRunTimes());
return false;
}
public void resubmit(long taskCode) throws Exception {
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskCode);
if (taskProcessor != null) {
taskProcessor.action(TaskAction.RESUBMIT);
logger.debug("RESUBMIT: task code:{}", taskCode);
} else {
throw new Exception("resubmit error, taskProcessor is null, task code: " + taskCode);
}
}
public Map<Long, Integer> getCompleteTaskMap() {
return completeTaskMap;
}
public Map<Long, ITaskProcessor> getActiveTaskProcessMap() {
return activeTaskProcessorMaps;
}
public Map<Long, TaskInstance> getWaitToRetryTaskInstanceMap() {
return waitToRetryTaskInstanceMap;
}
private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) {
// get start params from command param
Map<String, String> startParamMap = new HashMap<>();
if (cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) {
String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS);
startParamMap = JSONUtils.toMap(startParamJson);
}
Map<String, String> fatherParamMap = new HashMap<>();
if (cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,325 |
[Bug] workflow state is FAILURE when last task node is forbidden
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened



For example, a workflow has two nodes and the last node is forbidden to run; the first node fails, but the whole workflow is marked successful.
### What you expected to happen
The whole workflow state should be FAILURE.
### How to reproduce
Create a workflow with two nodes, make the first node throw an exception, set the last node to forbidden, then start the workflow.
The first node gets FAILURE, but the whole workflow is marked successful.
### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12325
|
https://github.com/apache/dolphinscheduler/pull/12424
|
ba538067f291c4fdb378ca84c02bb31e2fb2d295
|
38b643f69b65f4de9dd43809404470934bfadc7b
| 2022-10-12T01:25:02Z |
java
| 2022-10-19T01:36:47Z |
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteRunnable.java
|
String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS);
fatherParamMap = JSONUtils.toMap(fatherParamJson);
}
startParamMap.putAll(fatherParamMap);
// set start param into global params
Map<String, String> globalMap = processDefinition.getGlobalParamMap();
List<Property> globalParamList = processDefinition.getGlobalParamList();
if (startParamMap.size() > 0 && globalMap != null) {
// start p
for (Map.Entry<String, String> param : globalMap.entrySet()) {
String val = startParamMap.get(param.getKey());
if (val != null) {
param.setValue(val);
}
}
// start p
for (Map.Entry<String, String> startParam : startParamMap.entrySet()) {
if (!globalMap.containsKey(startParam.getKey())) {
globalMap.put(startParam.getKey(), startParam.getValue());
globalParamList.add(new Property(startParam.getKey(), IN, VARCHAR, startParam.getValue()));
}
}
}
}
private enum WorkflowRunnableStatus {
CREATED, INITIALIZE_DAG, INITIALIZE_QUEUE, STARTED,
;
}
}
|
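setGlobalParamIfCommanded() above merges command-time start parameters into the process definition's global parameters: a start parameter overrides an existing global key, and a start parameter with no matching global key is appended as a new IN/VARCHAR property. A small map-only sketch of that merge (the Property list is reduced to the map for brevity; all values are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Map-only sketch of the start-param merge shown above; not the actual implementation.
public class StartParamMergeSketch {
    public static void main(String[] args) {
        Map<String, String> globalMap = new HashMap<>(Map.of("bizdate", "${system.biz.date}", "env", "prod"));
        Map<String, String> startParamMap = Map.of("bizdate", "20221018", "owner", "alice");

        // 1) start params override globals that already exist
        globalMap.replaceAll((key, value) -> startParamMap.getOrDefault(key, value));
        // 2) start params unknown to the definition are appended
        startParamMap.forEach(globalMap::putIfAbsent);

        System.out.println(globalMap); // e.g. {owner=alice, env=prod, bizdate=20221018} (HashMap order varies)
    }
}
```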
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-scheduler-plugin/dolphinscheduler-scheduler-quartz/src/main/java/org/apache/dolphinscheduler/scheduler/quartz/ProcessScheduleTask.java
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-scheduler-plugin/dolphinscheduler-scheduler-quartz/src/main/java/org/apache/dolphinscheduler/scheduler/quartz/ProcessScheduleTask.java
|
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.scheduler.quartz;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.scheduler.quartz.utils.QuartzTaskUtils;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.util.Date;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.quartz.QuartzJobBean;
import org.springframework.util.StringUtils;
import io.micrometer.core.annotation.Counted;
import io.micrometer.core.annotation.Timed;
public class ProcessScheduleTask extends QuartzJobBean {
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-scheduler-plugin/dolphinscheduler-scheduler-quartz/src/main/java/org/apache/dolphinscheduler/scheduler/quartz/ProcessScheduleTask.java
|
private static final Logger logger = LoggerFactory.getLogger(ProcessScheduleTask.class);
@Autowired
private ProcessService processService;
@Counted(value = "ds.master.quartz.job.executed")
@Timed(value = "ds.master.quartz.job.execution.time", percentiles = {0.5, 0.75, 0.95, 0.99}, histogram = true)
@Override
protected void executeInternal(JobExecutionContext context) {
JobDataMap dataMap = context.getJobDetail().getJobDataMap();
int projectId = dataMap.getInt(QuartzTaskUtils.PROJECT_ID);
int scheduleId = dataMap.getInt(QuartzTaskUtils.SCHEDULE_ID);
Date scheduledFireTime = context.getScheduledFireTime();
Date fireTime = context.getFireTime();
logger.info("scheduled fire time :{}, fire time :{}, scheduleId :{}", scheduledFireTime, fireTime, scheduleId);
Schedule schedule = processService.querySchedule(scheduleId);
if (schedule == null || ReleaseState.OFFLINE == schedule.getReleaseState()) {
logger.warn("process schedule does not exist in db or process schedule offline,delete schedule job in quartz, projectId:{}, scheduleId:{}", projectId, scheduleId);
deleteJob(context, projectId, scheduleId);
return;
}
ProcessDefinition processDefinition = processService.findProcessDefinitionByCode(schedule.getProcessDefinitionCode());
//
ReleaseState releaseState = processDefinition.getReleaseState();
if (releaseState == ReleaseState.OFFLINE) {
logger.warn("process definition does not exist in db or offline,need not to create command, projectId:{}, processDefinitionId:{}", projectId, processDefinition.getId());
return;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-scheduler-plugin/dolphinscheduler-scheduler-quartz/src/main/java/org/apache/dolphinscheduler/scheduler/quartz/ProcessScheduleTask.java
|
}
Command command = new Command();
command.setCommandType(CommandType.SCHEDULER);
command.setExecutorId(schedule.getUserId());
command.setFailureStrategy(schedule.getFailureStrategy());
command.setProcessDefinitionCode(schedule.getProcessDefinitionCode());
command.setScheduleTime(scheduledFireTime);
command.setStartTime(fireTime);
command.setWarningGroupId(schedule.getWarningGroupId());
String workerGroup = StringUtils.isEmpty(schedule.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : schedule.getWorkerGroup();
command.setWorkerGroup(workerGroup);
command.setWarningType(schedule.getWarningType());
command.setProcessInstancePriority(schedule.getProcessInstancePriority());
command.setProcessDefinitionVersion(processDefinition.getVersion());
processService.createCommand(command);
}
private void deleteJob(JobExecutionContext context, int projectId, int scheduleId) {
final Scheduler scheduler = context.getScheduler();
JobKey jobKey = QuartzTaskUtils.getJobKey(scheduleId, projectId);
try {
if (scheduler.checkExists(jobKey)) {
logger.info("Try to delete job: {}, projectId: {}, schedulerId", projectId, scheduleId);
scheduler.deleteJob(jobKey);
}
} catch (Exception e) {
logger.error("Failed to delete job: {}", jobKey);
}
}
}
|
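Issue 12408 concerns $[yyyyMMdd]-style schedule parameters failing. The chunk above shows executeInternal() copying the Quartz scheduledFireTime into command.setScheduleTime(...), which is the timestamp such placeholders are normally expanded against. The snippet below is only an illustrative expansion of a plain $[yyyyMMdd] token for a given schedule time; it is not DolphinScheduler's real parameter engine and ignores offset forms such as $[yyyyMMdd-1].

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative expansion of $[<date-format>] tokens against a schedule time; not project code.
public class SchedulePlaceholderSketch {
    private static final Pattern TOKEN = Pattern.compile("\\$\\[([^\\]]+)]");

    static String expand(String text, Date scheduleTime) {
        Matcher matcher = TOKEN.matcher(text);
        StringBuffer out = new StringBuffer();
        while (matcher.find()) {
            String formatted = new SimpleDateFormat(matcher.group(1)).format(scheduleTime);
            matcher.appendReplacement(out, Matcher.quoteReplacement(formatted));
        }
        matcher.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // In the scheduler the time comes from context.getScheduledFireTime(); "new Date()" is a stand-in.
        System.out.println(expand("sh etl.sh $[yyyyMMdd]", new Date()));
    }
}
```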
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessServiceImpl.java
|
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.service.process;
import static java.util.stream.Collectors.toSet;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_SCHEDULE_DATE_LIST;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessServiceImpl.java
|
import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS;
import static org.apache.dolphinscheduler.plugin.task.api.enums.DataType.VARCHAR;
import static org.apache.dolphinscheduler.plugin.task.api.enums.Direct.IN;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.TASK_INSTANCE_ID;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.AuthorizationType;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.enums.WorkflowExecutionStatus;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils;
import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils.CodeGenerateException;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.DagData;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.DependentProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.DqComparisonType;
import org.apache.dolphinscheduler.dao.entity.DqExecuteResult;
import org.apache.dolphinscheduler.dao.entity.DqRule;
import org.apache.dolphinscheduler.dao.entity.DqRuleExecuteSql;
import org.apache.dolphinscheduler.dao.entity.DqRuleInputEntry;
|
closed
|
apache/dolphinscheduler
|
https://github.com/apache/dolphinscheduler
| 12,408 |
[Bug] [Failed to schedule schedule with parameters] Schedule schedule
|
### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Scheduled runs that use the $[yyyyMMdd] parameter fail.
### What you expected to happen
The shell task with the parameter should execute successfully.
### How to reproduce





### Anything else
_No response_
### Version
3.1.x
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
|
https://github.com/apache/dolphinscheduler/issues/12408
|
https://github.com/apache/dolphinscheduler/pull/12419
|
38b643f69b65f4de9dd43809404470934bfadc7b
|
a8e23008acdebd99811271611a15e83a9a7d8d92
| 2022-10-18T02:17:53Z |
java
| 2022-10-19T01:43:36Z |
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessServiceImpl.java
|
import org.apache.dolphinscheduler.dao.entity.DqTaskStatisticsValue;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ErrorCommand;
import org.apache.dolphinscheduler.dao.entity.K8s;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskGroup;
import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.UdfFunc;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.CommandMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.DqComparisonTypeMapper;
import org.apache.dolphinscheduler.dao.mapper.DqExecuteResultMapper;
import org.apache.dolphinscheduler.dao.mapper.DqRuleExecuteSqlMapper;
import org.apache.dolphinscheduler.dao.mapper.DqRuleInputEntryMapper;
import org.apache.dolphinscheduler.dao.mapper.DqRuleMapper;
import org.apache.dolphinscheduler.dao.mapper.DqTaskStatisticsValueMapper;
|