status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1-104k) | title (stringlengths, 4-233) | body (stringlengths, 0-186k, nullable) | issue_url (stringlengths, 38-56) | pull_url (stringlengths, 37-54) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (unknown) | language (stringclasses, 5 values) | commit_datetime (unknown) | updated_file (stringlengths, 7-188) | chunk_content (stringlengths, 1-1.03M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | **Describe the question**
After the master submits a task, it waits for the task execution to end, looping to query the task status from the database.
https://github.com/apache/dolphinscheduler/blob/8a1d849701671544327a1d4e7852575af6872017/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterTaskExecThread.java#L123-L164
Why doesn't it query the status from the taskInstanceCacheManager?
When the master receives the response from the worker, it also updates the cache.
https://github.com/apache/dolphinscheduler/blob/8a1d849701671544327a1d4e7852575af6872017/dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/processor/TaskResponseProcessor.java#L68-L87
I think that if we query the status from the cache, we can reduce the pressure on the database.
The main risk is that after a worker crashes, we need to send a response to the master when performing worker fault tolerance.
So, as a compromise, could we query the cache nine times and then query the database once? Or we could get the task status from the cache and have the cache query the task status from the database periodically (the schedule interval can be longer).
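A minimal sketch of what this compromise could look like in `MasterTaskExecThread` (assuming its existing `taskInstance`, `processService`, and `taskInstanceCacheManager` fields; the method name and the 10-iteration fallback are only illustrative, not the change made in the linked PR):
```java
// Hypothetical sketch only: a cache-first version of the master's wait loop.
// It reads the cached status on most iterations and falls back to a database
// read on every CACHE_QUERY_TIMES-th iteration, so a status update lost with a
// crashed worker (the fault tolerance path) is still observed eventually.
public Boolean waitTaskQuitFromCache() {
    final int CACHE_QUERY_TIMES = 10; // 9 cache reads, then 1 database read
    int queryCount = 0;
    while (Stopper.isRunning()) {
        TaskInstance latest;
        if (queryCount % CACHE_QUERY_TIMES == CACHE_QUERY_TIMES - 1) {
            // periodic database read covers worker fault tolerance,
            // where no response ever reaches the master's cache
            latest = processService.findTaskInstanceById(taskInstance.getId());
        } else {
            latest = taskInstanceCacheManager.getByTaskInstanceId(taskInstance.getId());
        }
        if (latest != null && latest.getState() != null && latest.getState().typeIsFinished()) {
            taskInstanceCacheManager.removeByTaskInstanceId(taskInstance.getId());
            return true;
        }
        queryCount++;
        try {
            Thread.sleep(Constants.SLEEP_TIME_MILLIS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
    return true;
}
```
The periodic database read keeps the worker-fault-tolerance case working while cutting the steady-state database load to roughly one query in ten.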
**Which version of DolphinScheduler:**
-[dev]
| https://github.com/apache/dolphinscheduler/issues/5539 | https://github.com/apache/dolphinscheduler/pull/5572 | e2243d63bee789b96d8ceeb302261564c5a28ce7 | 79eb2e85d78f380bb9b8f812d874f1143b661e76 | "2021-05-22T07:08:34Z" | java | "2021-06-10T01:39:12Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final int EXIT_CODE_KILL = 137;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* process or task definition failure
*/
public static final int DEFINITION_FAILURE = -1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | * ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prifix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prifix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* processId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
public static final String PID = OSUtils.isWindows() ? "handle" : "pid"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')'; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String LOCAL_PARAMS_LIST = "localParamsList";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String DEPENDENCE = "dependence";
public static final String TASK_TYPE = "taskType";
public static final String TASK_LIST = "taskList";
public static final String RWXR_XR_X = "rwxr-xr-x";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/**
* master/worker server use for zk
*/
public static final String MASTER_TYPE = "master";
public static final String WORKER_TYPE = "worker";
public static final String DELETE_OP = "delete";
public static final String ADD_OP = "add";
public static final String ALIAS = "alias"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final String DEPENDENT_ALL = "ALL";
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* com.amazonaws.services.s3.enableV4
*/
public static final String AWS_S3_V4 = "com.amazonaws.services.s3.enableV4";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* hive conf
*/
public static final String HIVE_CONF = "hiveconf:";
/**
* flink
*/
public static final String FLINK_YARN_CLUSTER = "yarn-cluster";
public static final String FLINK_RUN_MODE = "-m";
public static final String FLINK_YARN_SLOT = "-ys";
public static final String FLINK_APP_NAME = "-ynm";
public static final String FLINK_QUEUE = "-yqu";
public static final String FLINK_TASK_MANAGE = "-yn"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String FLINK_JOB_MANAGE_MEM = "-yjm";
public static final String FLINK_TASK_MANAGE_MEM = "-ytm";
public static final String FLINK_MAIN_CLASS = "-c";
public static final String FLINK_PARALLELISM = "-p";
public static final String FLINK_SHUTDOWN_ON_ATTACHED_EXIT = "-sae";
public static final int[] NOT_TERMINATED_STATES = new int[] {
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITTING_THREAD.ordinal(),
ExecutionStatus.WAITTING_DEPEND.ordinal()
};
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/**
*
*/
public static final String DATA_LIST = "data";
public static final String TOTAL_LIST = "totalList";
public static final String CURRENT_PAGE = "currentPage";
public static final String TOTAL_PAGE = "totalPage";
public static final String TOTAL = "total";
/**
* workflow
*/
public static final String WORKFLOW_LIST = "workFlowList";
public static final String WORKFLOW_RELATION_LIST = "workFlowRelationList";
/**
* session user
*/
public static final String SESSION_USER = "session.user";
public static final String SESSION_ID = "sessionId";
public static final String PASSWORD_DEFAULT = "******";
/**
* locale
*/
public static final String LOCALE_LANGUAGE = "language";
/**
* driver |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | */
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* database type
*/
public static final String MYSQL = "MYSQL";
public static final String POSTGRESQL = "POSTGRESQL";
public static final String HIVE = "HIVE";
public static final String SPARK = "SPARK";
public static final String CLICKHOUSE = "CLICKHOUSE";
public static final String ORACLE = "ORACLE";
public static final String SQLSERVER = "SQLSERVER";
public static final String DB2 = "DB2";
public static final String PRESTO = "PRESTO";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String JDBC_URL = "jdbcUrl";
public static final String PRINCIPAL = "principal";
public static final String OTHER = "other";
public static final String ORACLE_DB_CONNECT_TYPE = "connectType";
public static final String KERBEROS_KRB5_CONF_PATH = "javaSecurityKrb5Conf";
public static final String KERBEROS_KEY_TAB_USERNAME = "loginUserKeytabUsername";
public static final String KERBEROS_KEY_TAB_PATH = "loginUserKeytabPath";
/**
* session timeout
*/
public static final int SESSION_TIME_OUT = 7200;
public static final int MAX_FILE_SIZE = 1024 * 1024 * 1024;
public static final String UDF = "UDF";
public static final String CLASS = "class";
public static final String RECEIVERS = "receivers";
public static final String RECEIVERS_CC = "receiversCc";
/**
* dataSource sensitive param
*/
public static final String DATASOURCE_PASSWORD_REGEX = "(?<=(\"password\":\")).*?(?=(\"))";
/**
* default worker group
*/
public static final String DEFAULT_WORKER_GROUP = "default";
public static final Integer TASK_INFO_LENGTH = 5; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /**
* new
* schedule time
*/
public static final String PARAMETER_SHECDULE_TIME = "schedule.time";
/**
* authorize writable perm
*/
public static final int AUTHORIZE_WRITABLE_PERM = 7;
/**
* authorize readable perm
*/
public static final int AUTHORIZE_READABLE_PERM = 4;
/**
* plugin configurations
*/
public static final String PLUGIN_JAR_SUFFIX = ".jar";
public static final int NORMAL_NODE_STATUS = 0;
public static final int ABNORMAL_NODE_STATUS = 1;
public static final String START_TIME = "start time";
public static final String END_TIME = "end time";
public static final String START_END_DATE = "startDate,endDate";
/**
* system line separator
*/
public static final String SYSTEM_LINE_SEPARATOR = System.getProperty("line.separator");
public static final String EXCEL_SUFFIX_XLS = ".xls";
/**
* datasource encryption salt
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* network interface preferred
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_INTERFACE_PREFERRED = "dolphin.scheduler.network.interface.preferred";
/**
* network IP gets priority, default inner outer
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_PRIORITY_STRATEGY = "dolphin.scheduler.network.priority.strategy";
/**
* exec shell scripts
*/
public static final String SH = "sh";
/**
* pstree, get pud and sub pid
*/
public static final String PSTREE = "pstree";
/**
* snow flake, data center id, this id must be greater than 0 and less than 32
*/
public static final String SNOW_FLAKE_DATA_CENTER_ID = "data.center.id";
/**
* docker & kubernetes
*/
public static final boolean DOCKER_MODE = StringUtils.isNotEmpty(System.getenv("DOCKER"));
public static final boolean KUBERNETES_MODE = StringUtils.isNotEmpty(System.getenv("KUBERNETES_SERVICE_HOST")) && StringUtils.isNotEmpty(System.getenv("KUBERNETES_SERVICE_PORT"));
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/cache/impl/TaskInstanceCacheManagerImpl.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.cache.impl;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/cache/impl/TaskInstanceCacheManagerImpl.java | import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.command.TaskExecuteAckCommand;
import org.apache.dolphinscheduler.remote.command.TaskExecuteResponseCommand;
import org.apache.dolphinscheduler.server.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.server.master.cache.TaskInstanceCacheManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
/**
* taskInstance state manager
*/
@Component
public class TaskInstanceCacheManagerImpl implements TaskInstanceCacheManager {
/**
* taskInstance cache
*/
private Map<Integer,TaskInstance> taskInstanceCache = new ConcurrentHashMap<>();
/**
* process service
*/
@Autowired
private ProcessService processService;
/**
* get taskInstance by taskInstance id
*
* @param taskInstanceId taskInstanceId
* @return taskInstance
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/cache/impl/TaskInstanceCacheManagerImpl.java | @Override
public TaskInstance getByTaskInstanceId(Integer taskInstanceId) {
TaskInstance taskInstance = taskInstanceCache.get(taskInstanceId);
if (taskInstance == null){
taskInstance = processService.findTaskInstanceById(taskInstanceId);
taskInstanceCache.put(taskInstanceId,taskInstance);
}
return taskInstance;
}
/**
* cache taskInstance
*
* @param taskExecutionContext taskExecutionContext
*/
@Override
public void cacheTaskInstance(TaskExecutionContext taskExecutionContext) {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(taskExecutionContext.getTaskInstanceId());
taskInstance.setName(taskExecutionContext.getTaskName());
taskInstance.setStartTime(taskExecutionContext.getStartTime());
taskInstance.setTaskType(taskExecutionContext.getTaskType());
taskInstance.setExecutePath(taskExecutionContext.getExecutePath());
taskInstanceCache.put(taskExecutionContext.getTaskInstanceId(), taskInstance);
}
/**
* cache taskInstance
*
* @param taskAckCommand taskAckCommand
*/
@Override |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/cache/impl/TaskInstanceCacheManagerImpl.java | public void cacheTaskInstance(TaskExecuteAckCommand taskAckCommand) {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setState(ExecutionStatus.of(taskAckCommand.getStatus()));
taskInstance.setStartTime(taskAckCommand.getStartTime());
taskInstance.setHost(taskAckCommand.getHost());
taskInstance.setExecutePath(taskAckCommand.getExecutePath());
taskInstance.setLogPath(taskAckCommand.getLogPath());
taskInstanceCache.put(taskAckCommand.getTaskInstanceId(), taskInstance);
}
/**
* cache taskInstance
*
* @param taskExecuteResponseCommand taskExecuteResponseCommand
*/
@Override
public void cacheTaskInstance(TaskExecuteResponseCommand taskExecuteResponseCommand) {
TaskInstance taskInstance = getByTaskInstanceId(taskExecuteResponseCommand.getTaskInstanceId());
taskInstance.setState(ExecutionStatus.of(taskExecuteResponseCommand.getStatus()));
taskInstance.setEndTime(taskExecuteResponseCommand.getEndTime());
}
/**
* remove taskInstance by taskInstanceId
* @param taskInstanceId taskInstanceId
*/
@Override
public void removeByTaskInstanceId(Integer taskInstanceId) {
taskInstanceCache.remove(taskInstanceId);
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterTaskExecThread.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.utils.CollectionUtils;
import org.apache.dolphinscheduler.common.utils.StringUtils;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.command.TaskKillRequestCommand;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.server.master.cache.TaskInstanceCacheManager; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterTaskExecThread.java | import org.apache.dolphinscheduler.server.master.cache.impl.TaskInstanceCacheManagerImpl;
import org.apache.dolphinscheduler.server.master.dispatch.context.ExecutionContext;
import org.apache.dolphinscheduler.server.master.dispatch.enums.ExecutorType;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.registry.RegistryClient;
import java.util.Date;
import java.util.Set;
/**
* master task exec thread
*/
public class MasterTaskExecThread extends MasterBaseTaskExecThread {
/**
* taskInstance state manager
*/
private TaskInstanceCacheManager taskInstanceCacheManager;
/**
* netty executor manager
*/
private NettyExecutorManager nettyExecutorManager;
/**
* zookeeper register center
*/
private RegistryClient registryClient;
/**
* constructor of MasterTaskExecThread
*
* @param taskInstance task instance
*/
public MasterTaskExecThread(TaskInstance taskInstance) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterTaskExecThread.java | super(taskInstance);
this.taskInstanceCacheManager = SpringApplicationContext.getBean(TaskInstanceCacheManagerImpl.class);
this.nettyExecutorManager = SpringApplicationContext.getBean(NettyExecutorManager.class);
this.registryClient = SpringApplicationContext.getBean(RegistryClient.class);
}
/**
* get task instance
*
* @return TaskInstance
*/
@Override
public TaskInstance getTaskInstance() {
return this.taskInstance;
}
/**
* whether already Killed,default false
*/
private boolean alreadyKilled = false;
/**
* submit task instance and wait complete
*
* @return true is task quit is true
*/
@Override
public Boolean submitWaitComplete() {
Boolean result = false;
this.taskInstance = submit();
if (this.taskInstance == null) {
logger.error("submit task instance to mysql and queue failed , please check and fix it");
return result; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,539 | [Improvement][Master] Check status of taskInstance from cache | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterTaskExecThread.java | }
if (!this.taskInstance.getState().typeIsFinished()) {
result = waitTaskQuit();
}
taskInstance.setEndTime(new Date());
processService.updateTaskInstance(taskInstance);
logger.info("task :{} id:{}, process id:{}, exec thread completed ",
this.taskInstance.getName(), taskInstance.getId(), processInstance.getId());
return result;
}
/**
* poll the database and wait until the task quits
*
* @return true if task quit success
*/
public Boolean waitTaskQuit() {
taskInstance = processService.findTaskInstanceById(taskInstance.getId());
logger.info("wait task: process id: {}, task id:{}, task name:{} complete",
this.taskInstance.getProcessInstanceId(), this.taskInstance.getId(), this.taskInstance.getName());
while (Stopper.isRunning()) {
try {
if (this.processInstance == null) {
logger.error("process instance not exists , master task exec thread exit");
return true;
}
if (this.cancel || this.processInstance.getState() == ExecutionStatus.READY_STOP) { |
cancelTaskInstance();
}
if (processInstance.getState() == ExecutionStatus.READY_PAUSE) {
pauseTask();
}
if (taskInstance.getState().typeIsFinished()) {
taskInstanceCacheManager.removeByTaskInstanceId(taskInstance.getId());
break;
}
if (checkTaskTimeout()) {
this.checkTimeoutFlag = !alertTimeout();
}
taskInstance = processService.findTaskInstanceById(taskInstance.getId());
processInstance = processService.findProcessInstanceById(processInstance.getId());
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
} catch (Exception e) {
logger.error("exception", e);
if (processInstance != null) {
logger.error("wait task quit failed, instance id:{}, task id:{}",
processInstance.getId(), taskInstance.getId());
}
}
}
return true;
}
/**
* pause the task: if it has not been dispatched to a worker yet, do not dispatch it anymore.
*/
public void pauseTask() {
taskInstance = processService.findTaskInstanceById(taskInstance.getId());
if (taskInstance == null) {
return;
}
if (StringUtils.isBlank(taskInstance.getHost())) {
taskInstance.setState(ExecutionStatus.PAUSE);
taskInstance.setEndTime(new Date());
processService.updateTaskInstance(taskInstance);
}
}
/**
* cancel the task instance: mark it killed if it has not been dispatched yet, otherwise send a kill request to the worker
*/
private void cancelTaskInstance() throws Exception {
if (alreadyKilled) {
return;
}
alreadyKilled = true;
taskInstance = processService.findTaskInstanceById(taskInstance.getId());
if (StringUtils.isBlank(taskInstance.getHost())) {
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setEndTime(new Date());
processService.updateTaskInstance(taskInstance);
return;
}
TaskKillRequestCommand killCommand = new TaskKillRequestCommand();
killCommand.setTaskInstanceId(taskInstance.getId());
ExecutionContext executionContext = new ExecutionContext(killCommand.convert2Command(), ExecutorType.WORKER); |
Host host = Host.of(taskInstance.getHost());
executionContext.setHost(host);
nettyExecutorManager.executeDirectly(executionContext);
logger.info("master kill taskInstance name :{} taskInstance id:{}",
taskInstance.getName(), taskInstance.getId());
}
/**
* whether exists valid worker group
*
* @param taskInstanceWorkerGroup taskInstanceWorkerGroup
* @return whether exists
*/
public Boolean existsValidWorkerGroup(String taskInstanceWorkerGroup) {
Set<String> workerGroups = registryClient.getWorkerGroupDirectly();
if (CollectionUtils.isEmpty(workerGroups)) {
return false;
}
if (!workerGroups.contains(taskInstanceWorkerGroup)) {
return false;
}
Set<String> workers = registryClient.getWorkerGroupNodesDirectly(taskInstanceWorkerGroup);
if (CollectionUtils.isEmpty(workers)) {
return false;
}
return true;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
In the dolphinscheduler_env.sh configuration:
To run DataX, PYTHON_HOME needs to point to the root directory of the Python installation.
To execute a Python script task, PYTHON_HOME needs to point to the python executable file inside the Python directory, so the two requirements conflict.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/PythonCommandExecutor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker.task;
import org.apache.dolphinscheduler.common.Constants; |
import org.apache.dolphinscheduler.common.utils.FileUtils;
import org.apache.dolphinscheduler.common.utils.StringUtils;
import org.apache.dolphinscheduler.server.entity.TaskExecutionContext;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import java.util.function.Consumer;
import java.util.regex.Pattern;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* python command executor
*/
public class PythonCommandExecutor extends AbstractCommandExecutor {
/**
* logger
*/
private static final Logger logger = LoggerFactory.getLogger(PythonCommandExecutor.class);
/**
* python
*/
public static final String PYTHON = "python";
private static final Pattern PYTHON_PATH_PATTERN = Pattern.compile("/bin/python[\\d.]*$"); |
/**
* constructor
* @param logHandler log handler
* @param taskExecutionContext taskExecutionContext
* @param logger logger
*/
public PythonCommandExecutor(Consumer<List<String>> logHandler,
TaskExecutionContext taskExecutionContext,
Logger logger) {
super(logHandler,taskExecutionContext,logger);
}
/**
* build command file path
*
* @return command file path
*/
@Override
protected String buildCommandFilePath() {
return String.format("%s/py_%s.command", taskExecutionContext.getExecutePath(), taskExecutionContext.getTaskAppId());
}
/**
* create command file if not exists
* @param execCommand exec command
* @param commandFile command file
* @throws IOException io exception
*/
@Override
protected void createCommandFileIfNotExists(String execCommand, String commandFile) throws IOException {
logger.info("tenantCode :{}, task dir:{}", taskExecutionContext.getTenantCode(), taskExecutionContext.getExecutePath());
if (!Files.exists(Paths.get(commandFile))) { |
logger.info("generate command file:{}", commandFile);
StringBuilder sb = new StringBuilder();
sb.append("#-*- encoding=utf8 -*-\n");
sb.append("\n\n");
sb.append(execCommand);
logger.info(sb.toString());
FileUtils.writeStringToFile(new File(commandFile),
sb.toString(),
StandardCharsets.UTF_8);
}
}
/**
* get command options
* @return command options list
*/
@Override
protected List<String> commandOptions() {
return Collections.singletonList("-u");
}
/**
* Gets the command path with which Python can be executed
* @return python command path
*/
@Override
protected String commandInterpreter() {
String pythonHome = getPythonHome(taskExecutionContext.getEnvFile());
return getPythonCommand(pythonHome);
} |
/**
* get python command
*
* @param pythonHome python home
* @return python command
*/
public static String getPythonCommand(String pythonHome) {
if (StringUtils.isEmpty(pythonHome)) {
return PYTHON;
}
File file = new File(pythonHome);
if (file.exists() && file.isFile()) {
return pythonHome;
}
if (PYTHON_PATH_PATTERN.matcher(pythonHome).find()) {
return pythonHome;
}
return pythonHome + "/bin/python";
}
/**
* get python home
*
* @param envPath env path
* @return python home
*/
public static String getPythonHome(String envPath) {
BufferedReader br = null;
StringBuilder sb = new StringBuilder();
try {
br = new BufferedReader(new InputStreamReader(new FileInputStream(envPath))); |
String line;
while ((line = br.readLine()) != null) {
if (line.contains(Constants.PYTHON_HOME)) {
sb.append(line);
break;
}
}
String result = sb.toString();
if (StringUtils.isEmpty(result)) {
return null;
}
String[] arrs = result.split(Constants.EQUAL_SIGN);
if (arrs.length == 2) {
return arrs[1];
}
} catch (IOException e) {
logger.error("read file failure", e);
} finally {
try {
if (br != null) {
br.close();
}
} catch (IOException e) {
logger.error(e.getMessage(), e);
}
}
return null;
}
} |
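To make the configuration conflict from this issue concrete, here is a small usage sketch of the `getPythonCommand` method above (assuming the class is on the classpath; both `PYTHON_HOME` values are made-up example paths, not taken from the issue):

```java
public class PythonHomeExample {
    public static void main(String[] args) {
        // PYTHON_HOME pointing at the Python root directory, the style DataX needs:
        // assuming /opt/soft/python is a directory (or absent), "/bin/python" is appended.
        System.out.println(PythonCommandExecutor.getPythonCommand("/opt/soft/python"));
        // -> /opt/soft/python/bin/python

        // PYTHON_HOME pointing at a concrete interpreter, the style a Python script task may use:
        // it matches the /bin/python[\d.]*$ pattern and is returned unchanged.
        System.out.println(PythonCommandExecutor.getPythonCommand("/opt/soft/python/bin/python3.7"));
        // -> /opt/soft/python/bin/python3.7
    }
}
```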
| https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTask.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/ |
package org.apache.dolphinscheduler.server.worker.task.datax;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.common.datasource.DatasourceUtil;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.DbType;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.task.AbstractParameters;
import org.apache.dolphinscheduler.common.task.datax.DataxParameters;
import org.apache.dolphinscheduler.common.utils.CollectionUtils;
import org.apache.dolphinscheduler.common.utils.CommonUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.server.entity.DataxTaskExecutionContext;
import org.apache.dolphinscheduler.server.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.server.utils.DataxUtils;
import org.apache.dolphinscheduler.server.utils.ParamUtils;
import org.apache.dolphinscheduler.server.worker.task.AbstractTask;
import org.apache.dolphinscheduler.server.worker.task.CommandExecuteResult;
import org.apache.dolphinscheduler.server.worker.task.ShellCommandExecutor;
import org.apache.commons.io.FileUtils;
import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission; |
import java.nio.file.attribute.PosixFilePermissions;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.slf4j.Logger;
import com.alibaba.druid.sql.ast.SQLStatement;
import com.alibaba.druid.sql.ast.expr.SQLIdentifierExpr;
import com.alibaba.druid.sql.ast.expr.SQLPropertyExpr;
import com.alibaba.druid.sql.ast.statement.SQLSelect;
import com.alibaba.druid.sql.ast.statement.SQLSelectItem;
import com.alibaba.druid.sql.ast.statement.SQLSelectQueryBlock;
import com.alibaba.druid.sql.ast.statement.SQLSelectStatement;
import com.alibaba.druid.sql.ast.statement.SQLUnionQuery;
import com.alibaba.druid.sql.parser.SQLStatementParser;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
/**
* DataX task
*/
public class DataxTask extends AbstractTask {
/**
* jvm parameters
*/
public static final String JVM_PARAM = " --jvm=\"-Xms%sG -Xmx%sG\" "; |
/**
* python process(datax only supports version 2.7 by default)
*/
private static final String DATAX_PYTHON = "python2.7";
/**
* datax path
*/
private static final String DATAX_PATH = "${DATAX_HOME}/bin/datax.py";
/**
* datax channel count
*/
private static final int DATAX_CHANNEL_COUNT = 1;
/**
* datax parameters
*/
private DataxParameters dataXParameters;
/**
* shell command executor
*/
private ShellCommandExecutor shellCommandExecutor;
/**
* taskExecutionContext
*/
private TaskExecutionContext taskExecutionContext;
/**
* constructor
*
* @param taskExecutionContext taskExecutionContext
* @param logger logger
*/ |
public DataxTask(TaskExecutionContext taskExecutionContext, Logger logger) {
super(taskExecutionContext, logger);
this.taskExecutionContext = taskExecutionContext;
this.shellCommandExecutor = new ShellCommandExecutor(this::logHandle,
taskExecutionContext, logger);
}
/**
* init DataX config
*/
@Override
public void init() {
logger.info("datax task params {}", taskExecutionContext.getTaskParams());
dataXParameters = JSONUtils.parseObject(taskExecutionContext.getTaskParams(), DataxParameters.class);
if (!dataXParameters.checkParameters()) {
throw new RuntimeException("datax task params is not valid");
}
}
/**
* run DataX process
*
* @throws Exception if error throws Exception
*/
@Override
public void handle() throws Exception {
try {
String threadLoggerInfoName = String.format("TaskLogInfo-%s", taskExecutionContext.getTaskAppId());
Thread.currentThread().setName(threadLoggerInfoName);
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()), |
taskExecutionContext.getDefinedParams(),
dataXParameters.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
String jsonFilePath = buildDataxJsonFile(paramsMap);
String shellCommandFilePath = buildShellCommandFile(jsonFilePath, paramsMap);
CommandExecuteResult commandExecuteResult = shellCommandExecutor.run(shellCommandFilePath);
setExitStatusCode(commandExecuteResult.getExitStatusCode());
setAppIds(commandExecuteResult.getAppIds());
setProcessId(commandExecuteResult.getProcessId());
} catch (Exception e) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
throw e;
}
}
/**
* cancel DataX process
*
* @param cancelApplication cancelApplication
* @throws Exception if error throws Exception
*/
@Override
public void cancelApplication(boolean cancelApplication)
throws Exception {
shellCommandExecutor.cancelApplication();
}
/**
* build datax configuration file |
*
* @return datax json file name
* @throws Exception if error throws Exception
*/
private String buildDataxJsonFile(Map<String, Property> paramsMap)
throws Exception {
String fileName = String.format("%s/%s_job.json",
taskExecutionContext.getExecutePath(),
taskExecutionContext.getTaskAppId());
String json;
Path path = new File(fileName).toPath();
if (Files.exists(path)) {
return fileName;
}
if (dataXParameters.getCustomConfig() == Flag.YES.ordinal()) {
json = dataXParameters.getJson().replaceAll("\\r\\n", "\n");
} else {
ObjectNode job = JSONUtils.createObjectNode();
job.putArray("content").addAll(buildDataxJobContentJson());
job.set("setting", buildDataxJobSettingJson());
ObjectNode root = JSONUtils.createObjectNode();
root.set("job", job);
root.set("core", buildDataxCoreJson());
json = root.toString();
}
json = ParameterUtils.convertParameterPlaceholders(json, ParamUtils.convert(paramsMap));
logger.debug("datax job json : {}", json); |
FileUtils.writeStringToFile(new File(fileName), json, StandardCharsets.UTF_8);
return fileName;
}
/**
* build datax job config
*
* @return collection of datax job config JSONObject
* @throws SQLException if error throws SQLException
*/
private List<ObjectNode> buildDataxJobContentJson() {
DataxTaskExecutionContext dataxTaskExecutionContext = taskExecutionContext.getDataxTaskExecutionContext();
BaseConnectionParam dataSourceCfg = (BaseConnectionParam) DatasourceUtil.buildConnectionParams(
DbType.of(dataxTaskExecutionContext.getSourcetype()),
dataxTaskExecutionContext.getSourceConnectionParams());
BaseConnectionParam dataTargetCfg = (BaseConnectionParam) DatasourceUtil.buildConnectionParams(
DbType.of(dataxTaskExecutionContext.getTargetType()),
dataxTaskExecutionContext.getTargetConnectionParams());
List<ObjectNode> readerConnArr = new ArrayList<>();
ObjectNode readerConn = JSONUtils.createObjectNode();
ArrayNode sqlArr = readerConn.putArray("querySql");
for (String sql : new String[]{dataXParameters.getSql()}) {
sqlArr.add(sql);
}
ArrayNode urlArr = readerConn.putArray("jdbcUrl");
urlArr.add(DatasourceUtil.getJdbcUrl(DbType.valueOf(dataXParameters.getDtType()), dataSourceCfg));
readerConnArr.add(readerConn);
ObjectNode readerParam = JSONUtils.createObjectNode();
readerParam.put("username", dataSourceCfg.getUser());
readerParam.put("password", CommonUtils.decodePassword(dataSourceCfg.getPassword()));
readerParam.putArray("connection").addAll(readerConnArr); |
ObjectNode reader = JSONUtils.createObjectNode();
reader.put("name", DataxUtils.getReaderPluginName(DbType.of(dataxTaskExecutionContext.getSourcetype())));
reader.set("parameter", readerParam);
List<ObjectNode> writerConnArr = new ArrayList<>();
ObjectNode writerConn = JSONUtils.createObjectNode();
ArrayNode tableArr = writerConn.putArray("table");
tableArr.add(dataXParameters.getTargetTable());
writerConn.put("jdbcUrl", DatasourceUtil.getJdbcUrl(DbType.valueOf(dataXParameters.getDsType()), dataTargetCfg));
writerConnArr.add(writerConn);
ObjectNode writerParam = JSONUtils.createObjectNode();
writerParam.put("username", dataTargetCfg.getUser());
writerParam.put("password", CommonUtils.decodePassword(dataTargetCfg.getPassword()));
String[] columns = parsingSqlColumnNames(DbType.of(dataxTaskExecutionContext.getSourcetype()),
DbType.of(dataxTaskExecutionContext.getTargetType()),
dataSourceCfg, dataXParameters.getSql());
ArrayNode columnArr = writerParam.putArray("column");
for (String column : columns) {
columnArr.add(column);
}
writerParam.putArray("connection").addAll(writerConnArr);
if (CollectionUtils.isNotEmpty(dataXParameters.getPreStatements())) {
ArrayNode preSqlArr = writerParam.putArray("preSql");
for (String preSql : dataXParameters.getPreStatements()) {
preSqlArr.add(preSql);
}
}
if (CollectionUtils.isNotEmpty(dataXParameters.getPostStatements())) {
ArrayNode postSqlArr = writerParam.putArray("postSql");
for (String postSql : dataXParameters.getPostStatements()) {
postSqlArr.add(postSql); |
}
}
ObjectNode writer = JSONUtils.createObjectNode();
writer.put("name", DataxUtils.getWriterPluginName(DbType.of(dataxTaskExecutionContext.getTargetType())));
writer.set("parameter", writerParam);
List<ObjectNode> contentList = new ArrayList<>();
ObjectNode content = JSONUtils.createObjectNode();
content.set("reader", reader);
content.set("writer", writer);
contentList.add(content);
return contentList;
}
/**
* build datax setting config
*
* @return datax setting config JSONObject
*/
private ObjectNode buildDataxJobSettingJson() {
ObjectNode speed = JSONUtils.createObjectNode();
speed.put("channel", DATAX_CHANNEL_COUNT);
if (dataXParameters.getJobSpeedByte() > 0) {
speed.put("byte", dataXParameters.getJobSpeedByte());
}
if (dataXParameters.getJobSpeedRecord() > 0) {
speed.put("record", dataXParameters.getJobSpeedRecord());
}
ObjectNode errorLimit = JSONUtils.createObjectNode();
errorLimit.put("record", 0);
errorLimit.put("percentage", 0);
ObjectNode setting = JSONUtils.createObjectNode(); |
setting.set("speed", speed);
setting.set("errorLimit", errorLimit);
return setting;
}
private ObjectNode buildDataxCoreJson() {
ObjectNode speed = JSONUtils.createObjectNode();
speed.put("channel", DATAX_CHANNEL_COUNT);
if (dataXParameters.getJobSpeedByte() > 0) {
speed.put("byte", dataXParameters.getJobSpeedByte());
}
if (dataXParameters.getJobSpeedRecord() > 0) {
speed.put("record", dataXParameters.getJobSpeedRecord());
}
ObjectNode channel = JSONUtils.createObjectNode();
channel.set("speed", speed);
ObjectNode transport = JSONUtils.createObjectNode();
transport.set("channel", channel);
ObjectNode core = JSONUtils.createObjectNode();
core.set("transport", transport);
return core;
}
/**
* create command
*
* @return shell command file name
* @throws Exception if error throws Exception
*/
private String buildShellCommandFile(String jobConfigFilePath, Map<String, Property> paramsMap)
throws Exception { |
String fileName = String.format("%s/%s_node.%s",
taskExecutionContext.getExecutePath(),
taskExecutionContext.getTaskAppId(),
OSUtils.isWindows() ? "bat" : "sh");
Path path = new File(fileName).toPath();
if (Files.exists(path)) {
return fileName;
}
StringBuilder sbr = new StringBuilder();
sbr.append(DATAX_PYTHON);
sbr.append(" ");
sbr.append(DATAX_PATH);
sbr.append(" ");
sbr.append(loadJvmEnv(dataXParameters));
sbr.append(jobConfigFilePath);
String dataxCommand = ParameterUtils.convertParameterPlaceholders(sbr.toString(), ParamUtils.convert(paramsMap));
logger.debug("raw script : {}", dataxCommand);
Set<PosixFilePermission> perms = PosixFilePermissions.fromString(Constants.RWXR_XR_X);
FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(perms);
if (OSUtils.isWindows()) {
Files.createFile(path);
} else {
Files.createFile(path, attr);
}
Files.write(path, dataxCommand.getBytes(), StandardOpenOption.APPEND);
return fileName;
} |
public String loadJvmEnv(DataxParameters dataXParameters) {
int xms = dataXParameters.getXms() < 1 ? 1 : dataXParameters.getXms();
int xmx = dataXParameters.getXmx() < 1 ? 1 : dataXParameters.getXmx();
return String.format(JVM_PARAM, xms, xmx);
}
/**
* parsing synchronized column names in SQL statements
*
* @param dsType the database type of the data source
* @param dtType the database type of the data target
* @param dataSourceCfg the database connection parameters of the data source
* @param sql sql for data synchronization
* @return Keyword converted column names
*/
private String[] parsingSqlColumnNames(DbType dsType, DbType dtType, BaseConnectionParam dataSourceCfg, String sql) {
String[] columnNames = tryGrammaticalAnalysisSqlColumnNames(dsType, sql);
if (columnNames == null || columnNames.length == 0) {
logger.info("try to execute sql analysis query column name");
columnNames = tryExecuteSqlResolveColumnNames(dataSourceCfg, sql);
}
notNull(columnNames, String.format("parsing sql columns failed : %s", sql));
return DataxUtils.convertKeywordsColumns(dtType, columnNames);
}
/**
* try grammatical parsing column
*
* @param dbType database type
* @param sql sql for data synchronization
* @return column name array
* @throws RuntimeException if error throws RuntimeException |
*/
private String[] tryGrammaticalAnalysisSqlColumnNames(DbType dbType, String sql) {
String[] columnNames;
try {
SQLStatementParser parser = DataxUtils.getSqlStatementParser(dbType, sql);
if (parser == null) {
logger.warn("database driver [{}] is not support grammatical analysis sql", dbType);
return new String[0];
}
SQLStatement sqlStatement = parser.parseStatement();
SQLSelectStatement sqlSelectStatement = (SQLSelectStatement) sqlStatement;
SQLSelect sqlSelect = sqlSelectStatement.getSelect();
List<SQLSelectItem> selectItemList = null;
if (sqlSelect.getQuery() instanceof SQLSelectQueryBlock) {
SQLSelectQueryBlock block = (SQLSelectQueryBlock) sqlSelect.getQuery();
selectItemList = block.getSelectList();
} else if (sqlSelect.getQuery() instanceof SQLUnionQuery) {
SQLUnionQuery unionQuery = (SQLUnionQuery) sqlSelect.getQuery();
SQLSelectQueryBlock block = (SQLSelectQueryBlock) unionQuery.getRight();
selectItemList = block.getSelectList();
}
notNull(selectItemList,
String.format("select query type [%s] is not support", sqlSelect.getQuery().toString()));
columnNames = new String[selectItemList.size()];
for (int i = 0; i < selectItemList.size(); i++) {
SQLSelectItem item = selectItemList.get(i);
String columnName = null;
if (item.getAlias() != null) {
columnName = item.getAlias();
} else if (item.getExpr() != null) { |
if (item.getExpr() instanceof SQLPropertyExpr) {
SQLPropertyExpr expr = (SQLPropertyExpr) item.getExpr();
columnName = expr.getName();
} else if (item.getExpr() instanceof SQLIdentifierExpr) {
SQLIdentifierExpr expr = (SQLIdentifierExpr) item.getExpr();
columnName = expr.getName();
}
} else {
throw new RuntimeException(
String.format("grammatical analysis sql column [ %s ] failed", item.toString()));
}
if (columnName == null) {
throw new RuntimeException(
String.format("grammatical analysis sql column [ %s ] failed", item.toString()));
}
columnNames[i] = columnName;
}
} catch (Exception e) {
logger.warn(e.getMessage(), e);
return new String[0];
}
return columnNames;
}
/**
* try to execute sql to resolve column names
*
* @param baseDataSource the database connection parameters
* @param sql sql for data synchronization
* @return column name array
*/ |
public String[] tryExecuteSqlResolveColumnNames(BaseConnectionParam baseDataSource, String sql) {
String[] columnNames;
sql = String.format("SELECT t.* FROM ( %s ) t WHERE 0 = 1", sql);
sql = sql.replace(";", "");
try (
Connection connection = DatasourceUtil.getConnection(DbType.valueOf(dataXParameters.getDtType()), baseDataSource);
PreparedStatement stmt = connection.prepareStatement(sql);
ResultSet resultSet = stmt.executeQuery()) {
ResultSetMetaData md = resultSet.getMetaData();
int num = md.getColumnCount();
columnNames = new String[num];
for (int i = 1; i <= num; i++) {
columnNames[i - 1] = md.getColumnName(i);
}
} catch (SQLException e) {
logger.warn(e.getMessage(), e);
return null;
}
return columnNames;
}
@Override
public AbstractParameters getParameters() {
return dataXParameters;
}
private void notNull(Object obj, String message) {
if (obj == null) {
throw new RuntimeException(message);
}
}
} |
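For reference, a minimal sketch of the command string that `buildShellCommandFile` assembles from the constants above; the job file path and memory sizes are illustrative values, and the real task writes this string into a `*_node.sh`/`*_node.bat` file instead of printing it:

```java
public class DataxCommandExample {
    // Constants mirrored from DataxTask above.
    private static final String DATAX_PYTHON = "python2.7";
    private static final String DATAX_PATH = "${DATAX_HOME}/bin/datax.py";
    private static final String JVM_PARAM = " --jvm=\"-Xms%sG -Xmx%sG\" ";

    public static void main(String[] args) {
        String jobConfigFilePath = "/tmp/1_job.json"; // example value only
        int xms = 1; // loadJvmEnv() clamps values below 1 to 1
        int xmx = 1;
        String dataxCommand = DATAX_PYTHON + " " + DATAX_PATH + " "
                + String.format(JVM_PARAM, xms, xmx) + jobConfigFilePath;
        // Prints: python2.7 ${DATAX_HOME}/bin/datax.py  --jvm="-Xms1G -Xmx1G" /tmp/1_job.json
        System.out.println(dataxCommand);
    }
}
```

Note that in this version the DataX job is driven by a `python2.7` interpreter resolved from the PATH, separate from the `PYTHON_HOME` handling in `PythonCommandExecutor` above.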
| https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker.task.datax;
import static org.apache.dolphinscheduler.common.enums.CommandType.START_PROCESS;
import org.apache.dolphinscheduler.common.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.common.datasource.DatasourceUtil;
import org.apache.dolphinscheduler.common.enums.DbType;
import org.apache.dolphinscheduler.common.task.datax.DataxParameters;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.server.entity.DataxTaskExecutionContext; |
import org.apache.dolphinscheduler.server.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.server.utils.DataxUtils;
import org.apache.dolphinscheduler.server.worker.task.ShellCommandExecutor;
import org.apache.dolphinscheduler.server.worker.task.TaskProps;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Date;
import java.util.List;
import java.util.UUID;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
/**
* DataxTask Tester.
*/
public class DataxTaskTest {
private static final Logger logger = LoggerFactory.getLogger(DataxTaskTest.class);
private static final String CONNECTION_PARAMS = " {\n"
+ " \"user\":\"root\",\n" |
+ " \"password\":\"123456\",\n"
+ " \"address\":\"jdbc:mysql://127.0.0.1:3306\",\n"
+ " \"database\":\"test\",\n"
+ " \"jdbcUrl\":\"jdbc:mysql://127.0.0.1:3306/test\"\n"
+ "}";
private DataxTask dataxTask;
private ProcessService processService;
private ShellCommandExecutor shellCommandExecutor;
private ApplicationContext applicationContext;
private TaskExecutionContext taskExecutionContext;
private final TaskProps props = new TaskProps();
@Before
public void before()
throws Exception {
setTaskParems(0);
}
private void setTaskParems(Integer customConfig) {
processService = Mockito.mock(ProcessService.class);
shellCommandExecutor = Mockito.mock(ShellCommandExecutor.class);
applicationContext = Mockito.mock(ApplicationContext.class);
SpringApplicationContext springApplicationContext = new SpringApplicationContext();
springApplicationContext.setApplicationContext(applicationContext);
Mockito.when(applicationContext.getBean(ProcessService.class)).thenReturn(processService);
TaskProps props = new TaskProps();
props.setExecutePath("/tmp");
props.setTaskAppId(String.valueOf(System.currentTimeMillis()));
props.setTaskInstanceId(1);
props.setTenantCode("1");
props.setEnvFile(".dolphinscheduler_env.sh");
props.setTaskStartTime(new Date()); |
props.setTaskTimeout(0);
if (customConfig == 1) {
props.setTaskParams(
"{\n"
+ " \"customConfig\":1,\n"
+ " \"localParams\":[\n"
+ " {\n"
+ " \"prop\":\"test\",\n"
+ " \"value\":\"38294729\"\n"
+ " }\n"
+ " ],\n"
+ " \"json\":\""
+ "{\"job\":{\"setting\":{\"speed\":{\"byte\":1048576},\"errorLimit\":{\"record\":0,\"percentage\":0.02}},\"content\":["
+ "{\"reader\":{\"name\":\"rdbmsreader\",\"parameter\":{\"username\":\"xxx\",\"password\":\"${test}\",\"column\":[\"id\",\"name\"],\"splitPk\":\"pk\",\""
+ "connection\":[{\"querySql\":[\"SELECT * from dual\"],\"jdbcUrl\":[\"jdbc:dm://ip:port/database\"]}],\"fetchSize\":1024,\"where\":\"1 = 1\"}},\""
+ "writer\":{\"name\":\"streamwriter\",\"parameter\":{\"print\":true}}}]}}\"\n"
+ "}");
} else {
props.setTaskParams(
"{\n"
+ " \"customConfig\":0,\n"
+ " \"targetTable\":\"test\",\n"
+ " \"postStatements\":[\n"
+ " \"delete from test\"\n"
+ " ],\n"
+ " \"jobSpeedByte\":0,\n"
+ " \"jobSpeedRecord\":1000,\n"
+ " \"dtType\":\"MYSQL\",\n"
+ " \"dataSource\":1,\n"
+ " \"dsType\":\"MYSQL\",\n" |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | + " \"dataTarget\":2,\n"
+ " \"sql\":\"select 1 as test from dual\",\n"
+ " \"preStatements\":[\n"
+ " \"delete from test\"\n"
+ " ]\n"
+ "}");
}
taskExecutionContext = Mockito.mock(TaskExecutionContext.class);
Mockito.when(taskExecutionContext.getTaskParams()).thenReturn(props.getTaskParams());
Mockito.when(taskExecutionContext.getExecutePath()).thenReturn("/tmp");
Mockito.when(taskExecutionContext.getTaskAppId()).thenReturn(UUID.randomUUID().toString());
Mockito.when(taskExecutionContext.getTenantCode()).thenReturn("root");
Mockito.when(taskExecutionContext.getStartTime()).thenReturn(new Date());
Mockito.when(taskExecutionContext.getTaskTimeout()).thenReturn(10000);
Mockito.when(taskExecutionContext.getLogPath()).thenReturn("/tmp/dx");
DataxTaskExecutionContext dataxTaskExecutionContext = new DataxTaskExecutionContext();
dataxTaskExecutionContext.setSourcetype(0);
dataxTaskExecutionContext.setTargetType(0);
dataxTaskExecutionContext.setSourceConnectionParams(CONNECTION_PARAMS);
dataxTaskExecutionContext.setTargetConnectionParams(CONNECTION_PARAMS);
Mockito.when(taskExecutionContext.getDataxTaskExecutionContext()).thenReturn(dataxTaskExecutionContext);
dataxTask = PowerMockito.spy(new DataxTask(taskExecutionContext, logger));
dataxTask.init();
props.setCmdTypeIfComplement(START_PROCESS);
Mockito.when(processService.findDataSourceById(1)).thenReturn(getDataSource());
Mockito.when(processService.findDataSourceById(2)).thenReturn(getDataSource());
Mockito.when(processService.findProcessInstanceByTaskId(1)).thenReturn(getProcessInstance());
String fileName = String.format("%s/%s_node.sh", props.getExecutePath(), props.getTaskAppId());
try {
Mockito.when(shellCommandExecutor.run(fileName)).thenReturn(null); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | } catch (Exception e) {
e.printStackTrace();
}
dataxTask = PowerMockito.spy(new DataxTask(taskExecutionContext, logger));
dataxTask.init();
}
private DataSource getDataSource() {
DataSource dataSource = new DataSource();
dataSource.setType(DbType.MYSQL);
dataSource.setConnectionParams(CONNECTION_PARAMS);
dataSource.setUserId(1);
return dataSource;
}
private ProcessInstance getProcessInstance() {
ProcessInstance processInstance = new ProcessInstance();
processInstance.setCommandType(START_PROCESS);
processInstance.setScheduleTime(new Date());
return processInstance;
}
@After
public void after()
throws Exception {
}
/**
* Method: DataxTask()
*/
@Test
public void testDataxTask()
throws Exception {
TaskProps props = new TaskProps(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | props.setExecutePath("/tmp");
props.setTaskAppId(String.valueOf(System.currentTimeMillis()));
props.setTaskInstanceId(1);
props.setTenantCode("1");
Assert.assertNotNull(new DataxTask(null, logger));
}
/**
* Method: init
*/
@Test
public void testInit()
throws Exception {
try {
dataxTask.init();
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: handle()
*/
@Test
public void testHandle()
throws Exception {
}
/**
* Method: cancelApplication()
*/
@Test
public void testCancelApplication() |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | throws Exception {
try {
dataxTask.cancelApplication(true);
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: parsingSqlColumnNames(DbType dsType, DbType dtType, BaseDataSource
* dataSourceCfg, String sql)
*/
@Test
public void testParsingSqlColumnNames()
throws Exception {
try {
BaseConnectionParam dataSource = (BaseConnectionParam) DatasourceUtil.buildConnectionParams(
getDataSource().getType(),
getDataSource().getConnectionParams());
Method method = DataxTask.class.getDeclaredMethod("parsingSqlColumnNames", DbType.class, DbType.class, BaseConnectionParam.class, String.class);
method.setAccessible(true);
String[] columns = (String[]) method.invoke(dataxTask, DbType.MYSQL, DbType.MYSQL, dataSource, "select 1 as a, 2 as `table` from dual");
Assert.assertNotNull(columns);
Assert.assertEquals(2, columns.length);
Assert.assertEquals("[`a`, `table`]", Arrays.toString(columns));
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: tryGrammaticalParsingSqlColumnNames(DbType dbType, String sql) |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | */
@Test
public void testTryGrammaticalAnalysisSqlColumnNames()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("tryGrammaticalAnalysisSqlColumnNames", DbType.class, String.class);
method.setAccessible(true);
String[] columns = (String[]) method.invoke(dataxTask, DbType.MYSQL, "select t1.a, t1.b from test t1 union all select a, t2.b from (select a, b from test) t2");
Assert.assertNotNull(columns);
Assert.assertEquals(2, columns.length);
Assert.assertEquals("[a, b]", Arrays.toString(columns));
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: tryExecuteSqlResolveColumnNames(BaseDataSource baseDataSource,
* String sql)
*/
@Test
public void testTryExecuteSqlResolveColumnNames()
throws Exception {
}
/**
* Method: buildDataxJsonFile()
*/
@Test
@Ignore("method not found")
public void testBuildDataxJsonFile() |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | throws Exception {
try {
setTaskParems(1);
Method method = DataxTask.class.getDeclaredMethod("buildDataxJsonFile");
method.setAccessible(true);
String filePath = (String) method.invoke(dataxTask, null);
Assert.assertNotNull(filePath);
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: buildDataxJsonFile()
*/
@Test
@Ignore("method not found")
public void testBuildDataxJsonFile0()
throws Exception {
try {
setTaskParems(0);
Method method = DataxTask.class.getDeclaredMethod("buildDataxJsonFile");
method.setAccessible(true);
String filePath = (String) method.invoke(dataxTask, null);
Assert.assertNotNull(filePath);
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: buildDataxJobContentJson() |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | */
@Test
public void testBuildDataxJobContentJson()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("buildDataxJobContentJson");
method.setAccessible(true);
List<ObjectNode> contentList = (List<ObjectNode>) method.invoke(dataxTask, null);
Assert.assertNotNull(contentList);
ObjectNode content = contentList.get(0);
JsonNode reader = JSONUtils.parseObject(content.path("reader").toString());
Assert.assertNotNull(reader);
Assert.assertEquals("{\"name\":\"mysqlreader\",\"parameter\":{\"username\":\"root\","
+ "\"password\":\"123456\",\"connection\":[{\"querySql\":[\"select 1 as test from dual\"],"
+ "\"jdbcUrl\":[\"jdbc:mysql://127.0.0.1:3306/test?allowLoadLocalInfile=false"
+ "&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false\"]}]}}",
reader.toString());
String readerPluginName = reader.path("name").asText();
Assert.assertEquals(DataxUtils.DATAX_READER_PLUGIN_MYSQL, readerPluginName);
JsonNode writer = JSONUtils.parseObject(content.path("writer").toString());
Assert.assertNotNull(writer);
Assert.assertEquals("{\"name\":\"mysqlwriter\",\"parameter\":{\"username\":\"root\","
+ "\"password\":\"123456\",\"column\":[\"`test`\"],\"connection\":[{\"table\":[\"test\"],"
+ "\"jdbcUrl\":\"jdbc:mysql://127.0.0.1:3306/test?allowLoadLocalInfile=false&"
+ "autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false\"}],"
+ "\"preSql\":[\"delete from test\"],\"postSql\":[\"delete from test\"]}}",
writer.toString());
String writerPluginName = writer.path("name").asText();
Assert.assertEquals(DataxUtils.DATAX_WRITER_PLUGIN_MYSQL, writerPluginName);
} catch (Exception e) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | Assert.fail(e.getMessage());
}
}
/**
* Method: buildDataxJobSettingJson()
*/
@Test
public void testBuildDataxJobSettingJson()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("buildDataxJobSettingJson");
method.setAccessible(true);
JsonNode setting = (JsonNode) method.invoke(dataxTask, null);
Assert.assertNotNull(setting);
Assert.assertEquals("{\"channel\":1,\"record\":1000}", setting.get("speed").toString());
Assert.assertEquals("{\"record\":0,\"percentage\":0}", setting.get("errorLimit").toString());
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: buildDataxCoreJson()
*/
@Test
public void testBuildDataxCoreJson()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("buildDataxCoreJson");
method.setAccessible(true);
ObjectNode coreConfig = (ObjectNode) method.invoke(dataxTask, null); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | Assert.assertNotNull(coreConfig);
Assert.assertNotNull(coreConfig.get("transport"));
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: buildShellCommandFile(String jobConfigFilePath)
*/
@Test
@Ignore("method not found")
public void testBuildShellCommandFile()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("buildShellCommandFile", String.class);
method.setAccessible(true);
Assert.assertNotNull(method.invoke(dataxTask, "test.json"));
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
/**
* Method: getParameters
*/
@Test
public void testGetParameters()
throws Exception {
Assert.assertNotNull(dataxTask.getParameters());
}
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,596 | [Bug][Python] Conflict between python_home and datax_home configuration in dolphinscheduler_env.sh | Environment configuration of dataX and python
dolphinscheduler_env.sh configuration:
To run DataX with Python, PYTHON_HOME needs to point to the root directory of Python.
To execute a Python script, PYTHON_HOME needs to point to the Python executable file in the Python directory.
- [dev]
- [1.3.6] | https://github.com/apache/dolphinscheduler/issues/5596 | https://github.com/apache/dolphinscheduler/pull/5612 | b436ef0a2c7dbfcdffbeb6006430a893897f2271 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | "2021-06-07T09:27:16Z" | java | "2021-06-11T17:23:18Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/worker/task/datax/DataxTaskTest.java | * Method: notNull(Object obj, String message)
*/
@Test
public void testNotNull()
throws Exception {
try {
Method method = DataxTask.class.getDeclaredMethod("notNull", Object.class, String.class);
method.setAccessible(true);
method.invoke(dataxTask, "abc", "test throw RuntimeException");
} catch (Exception e) {
Assert.fail(e.getMessage());
}
}
@Test
public void testLoadJvmEnv() {
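// Expectation derived from the assertions below: non-positive Xms/Xmx values fall back to 1G, valid values are used as configured.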
DataxTask dataxTask = new DataxTask(null, null);
DataxParameters dataxParameters = new DataxParameters();
dataxParameters.setXms(0);
dataxParameters.setXmx(-100);
String actual = dataxTask.loadJvmEnv(dataxParameters);
String expected = " --jvm=\"-Xms1G -Xmx1G\" ";
Assert.assertEquals(expected, actual);
dataxParameters.setXms(13);
dataxParameters.setXmx(14);
actual = dataxTask.loadJvmEnv(dataxParameters);
expected = " --jvm=\"-Xms13G -Xmx14G\" ";
Assert.assertEquals(expected, actual);
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
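To make the failure concrete, here is a minimal, self-contained Jackson sketch (plain ObjectMapper rather than DolphinScheduler's own JSON utilities; the TaskParams POJO and class name are illustrative, not existing classes). Assuming the target type is a Map of strings, as the report suggests, the localParams array cannot be bound, while a dedicated class (or Map<String, Object>) keeps the structure intact:
```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.List;
import java.util.Map;

public class TaskParamsDeserializeSketch {

    // Illustrative POJO for the taskParams JSON above; not an existing DolphinScheduler class.
    public static class TaskParams {
        public List<Object> resourceList;
        public List<Map<String, String>> localParams;
        public String rawScript;
        public String conditionResult; // nested JSON kept as a plain string
        public String dependence;      // nested JSON kept as a plain string
    }

    public static void main(String[] args) throws Exception {
        String taskParamsJson = "{\"resourceList\":[],"
                + "\"localParams\":[{\"prop\":\"BATCH_TIME\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"20210517131849\"}],"
                + "\"rawScript\":\"echo \\\"${BATCH_TIME}\\\"\","
                + "\"conditionResult\":\"{\\\"successNode\\\":[\\\"\\\"],\\\"failedNode\\\":[\\\"\\\"]}\","
                + "\"dependence\":\"{}\"}";
        ObjectMapper mapper = new ObjectMapper();

        // Fails: localParams is a JSON array, so its value cannot become the String of a Map<String, String> entry.
        try {
            mapper.readValue(taskParamsJson, new TypeReference<Map<String, String>>() { });
        } catch (Exception e) {
            System.out.println("Map<String, String> fails: " + e.getMessage());
        }

        // Works: the list structure survives when the field is typed as a list.
        TaskParams params = mapper.readValue(taskParamsJson, TaskParams.class);
        System.out.println(params.localParams.get(0).get("prop")); // prints BATCH_TIME
    }
}
```
Either way, the underlying point of the report stands: a single, shared deserialization path (ideally through a dedicated taskParams class) avoids having to patch every call site separately.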
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service.impl;
import static org.apache.dolphinscheduler.common.Constants.DATA_LIST;
import static org.apache.dolphinscheduler.common.Constants.DEPENDENT_SPLIT;
import static org.apache.dolphinscheduler.common.Constants.GLOBAL_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.PROCESS_INSTANCE_STATE;
import static org.apache.dolphinscheduler.common.Constants.TASK_LIST;
import org.apache.dolphinscheduler.api.dto.gantt.GanttDto;
import org.apache.dolphinscheduler.api.dto.gantt.Task;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.ExecutorService;
import org.apache.dolphinscheduler.api.service.LoggerService;
import org.apache.dolphinscheduler.api.service.ProcessDefinitionService; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | import org.apache.dolphinscheduler.api.service.ProcessInstanceService;
import org.apache.dolphinscheduler.api.service.ProjectService;
import org.apache.dolphinscheduler.api.service.UsersService;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.DependResult;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.utils.CollectionUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.common.utils.StringUtils;
import org.apache.dolphinscheduler.common.utils.placeholder.BusinessTimeUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessData;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
/**
* process instance service impl
*/
@Service |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | public class ProcessInstanceServiceImpl extends BaseServiceImpl implements ProcessInstanceService {
private static final Logger logger = LoggerFactory.getLogger(ProcessInstanceService.class);
public static final String TASK_TYPE = "taskType";
public static final String LOCAL_PARAMS_LIST = "localParamsList";
@Autowired
ProjectMapper projectMapper;
@Autowired
ProjectService projectService; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | @Autowired
ProcessService processService;
@Autowired
ProcessInstanceMapper processInstanceMapper;
@Autowired
ProcessDefinitionMapper processDefineMapper;
@Autowired
ProcessDefinitionService processDefinitionService;
@Autowired
ExecutorService execService;
@Autowired
TaskInstanceMapper taskInstanceMapper;
@Autowired
LoggerService loggerService;
@Autowired
ProcessDefinitionLogMapper processDefinitionLogMapper;
@Autowired
TaskDefinitionLogMapper taskDefinitionLogMapper;
@Autowired
UsersService usersService;
/**
* return top n SUCCESS process instance order by running time which started between startTime and endTime
*/
@Override
public Map<String, Object> queryTopNLongestRunningProcessInstance(User loginUser, String projectName, int size, String startTime, String endTime) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | return checkResult;
}
if (0 > size) {
putMsg(result, Status.NEGTIVE_SIZE_NUMBER_ERROR, size);
return result;
}
if (Objects.isNull(startTime)) {
putMsg(result, Status.DATA_IS_NULL, Constants.START_TIME);
return result;
}
Date start = DateUtils.stringToDate(startTime);
if (Objects.isNull(endTime)) {
putMsg(result, Status.DATA_IS_NULL, Constants.END_TIME);
return result;
}
Date end = DateUtils.stringToDate(endTime);
if (start == null || end == null) {
putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, Constants.START_END_DATE);
return result;
}
if (start.getTime() > end.getTime()) {
putMsg(result, Status.START_TIME_BIGGER_THAN_END_TIME_ERROR, startTime, endTime);
return result;
}
List<ProcessInstance> processInstances = processInstanceMapper.queryTopNProcessInstance(size, start, end, ExecutionStatus.SUCCESS);
result.put(DATA_LIST, processInstances);
putMsg(result, Status.SUCCESS);
return result;
}
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | * query process instance by id
*
* @param loginUser login user
* @param projectName project name
* @param processId process instance id
* @return process instance detail
*/
@Override
public Map<String, Object> queryProcessInstanceById(User loginUser, String projectName, Integer processId) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult;
}
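// Resolve the definition by the code and version recorded on the instance, so the detail reflects the definition version that actually ran.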
ProcessInstance processInstance = processService.findProcessInstanceDetailById(processId);
ProcessDefinition processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
if (processDefinition == null) {
putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processId);
} else {
processInstance.setWarningGroupId(processDefinition.getWarningGroupId());
processInstance.setConnects(processDefinition.getConnects());
processInstance.setLocations(processDefinition.getLocations());
ProcessData processData = processService.genProcessData(processDefinition);
processInstance.setProcessInstanceJson(JSONUtils.toJsonString(processData));
result.put(DATA_LIST, processInstance);
putMsg(result, Status.SUCCESS);
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | return result;
}
/**
* paging query process instance list, filtering according to project, process definition, time range, keyword, process status
*
* @param loginUser login user
* @param projectName project name
* @param pageNo page number
* @param pageSize page size
* @param processDefineId process definition id
* @param searchVal search value
* @param stateType state type
* @param host host
* @param startDate start time
* @param endDate end time
* @return process instance list
*/
@Override
public Map<String, Object> queryProcessInstanceList(User loginUser, String projectName, Integer processDefineId,
String startDate, String endDate,
String searchVal, String executorName, ExecutionStatus stateType, String host,
Integer pageNo, Integer pageSize) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult;
}
int[] statusArray = null; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | if (stateType != null) {
statusArray = new int[]{stateType.ordinal()};
}
Map<String, Object> checkAndParseDateResult = checkAndParseDateParameters(startDate, endDate);
if (checkAndParseDateResult.get(Constants.STATUS) != Status.SUCCESS) {
return checkAndParseDateResult;
}
Date start = (Date) checkAndParseDateResult.get(Constants.START_TIME);
Date end = (Date) checkAndParseDateResult.get(Constants.END_TIME);
Page<ProcessInstance> page = new Page<>(pageNo, pageSize);
PageInfo<ProcessInstance> pageInfo = new PageInfo<>(pageNo, pageSize);
int executorId = usersService.getUserIdByName(executorName);
ProcessDefinition processDefinition = processDefineMapper.queryByDefineId(processDefineId);
IPage<ProcessInstance> processInstanceList = processInstanceMapper.queryProcessInstanceListPaging(page,
project.getCode(), processDefinition == null ? 0L : processDefinition.getCode(), searchVal,
executorId, statusArray, host, start, end);
List<ProcessInstance> processInstances = processInstanceList.getRecords();
List<Integer> userIds = CollectionUtils.transformToList(processInstances, ProcessInstance::getExecutorId);
Map<Integer, User> idToUserMap = CollectionUtils.collectionToMap(usersService.queryUser(userIds), User::getId);
for (ProcessInstance processInstance : processInstances) {
processInstance.setDuration(DateUtils.format2Duration(processInstance.getStartTime(), processInstance.getEndTime()));
User executor = idToUserMap.get(processInstance.getExecutorId());
if (null != executor) {
processInstance.setExecutorName(executor.getUserName());
}
}
pageInfo.setTotalCount((int) processInstanceList.getTotal());
pageInfo.setLists(processInstances);
result.put(DATA_LIST, pageInfo); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | putMsg(result, Status.SUCCESS);
return result;
}
/**
* query task list by process instance id
*
* @param loginUser login user
* @param projectName project name
* @param processId process instance id
* @return task list for the process instance
* @throws IOException io exception
*/
@Override
public Map<String, Object> queryTaskListByProcessId(User loginUser, String projectName, Integer processId) throws IOException {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult;
}
ProcessInstance processInstance = processService.findProcessInstanceDetailById(processId);
List<TaskInstance> taskInstanceList = processService.findValidTaskListByProcessId(processId);
addDependResultForTaskList(taskInstanceList);
Map<String, Object> resultMap = new HashMap<>();
resultMap.put(PROCESS_INSTANCE_STATE, processInstance.getState().toString());
resultMap.put(TASK_LIST, taskInstanceList);
result.put(DATA_LIST, resultMap);
putMsg(result, Status.SUCCESS);
return result; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo \"${BATCH_TIME}\"",
"conditionResult":"{\"successNode\":[\"\"],\"failedNode\":[\"\"]}",
"dependence":"{}"
}
```
And there are multiple places that use different ways to deserialize the `taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform; otherwise, once we make a change, we need to change it in many places.
And the `taskParams` is transported by the front end and stored in the database as a JSON string. We use a Map to represent this field in the backend; I think it is better to define a specific class to express the `taskParams`, which may be helpful for deserialization and code maintenance.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
/**
* add dependent result for dependent task
*/
private void addDependResultForTaskList(List<TaskInstance> taskInstanceList) throws IOException {
for (TaskInstance taskInstance : taskInstanceList) {
if (TaskType.DEPENDENT.getDesc().equalsIgnoreCase(taskInstance.getTaskType())) {
Result<String> logResult = loggerService.queryLog(
taskInstance.getId(), Constants.LOG_QUERY_SKIP_LINE_NUMBER, Constants.LOG_QUERY_LIMIT);
if (logResult.getCode() == Status.SUCCESS.ordinal()) {
String log = logResult.getData();
Map<String, DependResult> resultMap = parseLogForDependentResult(log);
taskInstance.setDependentResult(JSONUtils.toJsonString(resultMap));
}
}
}
}
@Override
public Map<String, DependResult> parseLogForDependentResult(String log) throws IOException {
Map<String, DependResult> resultMap = new HashMap<>();
if (StringUtils.isEmpty(log)) {
return resultMap;
}
BufferedReader br = new BufferedReader(new InputStreamReader(new ByteArrayInputStream(log.getBytes(
StandardCharsets.UTF_8)), StandardCharsets.UTF_8));
String line;
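// Each relevant log line contains DEPENDENT_SPLIT (":||") followed by "<dependentKey>,<DependResult>";
// lines that do not split into exactly two parts at either level are skipped.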
while ((line = br.readLine()) != null) {
if (line.contains(DEPENDENT_SPLIT)) {
String[] tmpStringArray = line.split(":\\|\\|");
if (tmpStringArray.length != 2) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue is caused by deserializing the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, some attributes in the JSON are lists, so they cannot be deserialized as strings.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | continue;
}
String dependResultString = tmpStringArray[1];
String[] dependStringArray = dependResultString.split(",");
if (dependStringArray.length != 2) {
continue;
}
String key = dependStringArray[0].trim();
DependResult dependResult = DependResult.valueOf(dependStringArray[1].trim());
resultMap.put(key, dependResult);
}
}
return resultMap;
}
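For orientation, a brief hedged sketch of what the parser above produces; the surrounding wording of the log line is an assumption, only the `:||` separator and the `key,RESULT` pair on its right-hand side come from the code (exception handling omitted):

```java
// Assumed log line shape: "<anything> :|| <key>,<DependResult>"
String log = "dependent item complete :|| 0-20210517,SUCCESS";
Map<String, DependResult> results = parseLogForDependentResult(log);
// results now maps "0-20210517" -> DependResult.SUCCESS
```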
/**
* query sub process instance detail info by task id
*
* @param loginUser login user
* @param projectName project name
* @param taskId task id
* @return sub process instance detail
*/
@Override
public Map<String, Object> querySubProcessInstanceByTaskId(User loginUser, String projectName, Integer taskId) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult; |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
TaskInstance taskInstance = processService.findTaskInstanceById(taskId);
if (taskInstance == null) {
putMsg(result, Status.TASK_INSTANCE_NOT_EXISTS, taskId);
return result;
}
if (!taskInstance.isSubProcess()) {
putMsg(result, Status.TASK_INSTANCE_NOT_SUB_WORKFLOW_INSTANCE, taskInstance.getName());
return result;
}
ProcessInstance subWorkflowInstance = processService.findSubProcessInstance(
taskInstance.getProcessInstanceId(), taskInstance.getId());
if (subWorkflowInstance == null) {
putMsg(result, Status.SUB_PROCESS_INSTANCE_NOT_EXIST, taskId);
return result;
}
Map<String, Object> dataMap = new HashMap<>();
dataMap.put(Constants.SUBPROCESS_INSTANCE_ID, subWorkflowInstance.getId());
result.put(DATA_LIST, dataMap);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* update process instance
*
* @param loginUser login user
* @param projectName project name
* @param processInstanceJson process instance json
* @param processInstanceId process instance id
* @param scheduleTime schedule time |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | * @param syncDefine sync define
* @param flag flag
* @param locations locations
* @param connects connects
* @return update result code
*/
@Transactional
@Override
public Map<String, Object> updateProcessInstance(User loginUser, String projectName, Integer processInstanceId,
String processInstanceJson, String scheduleTime, Boolean syncDefine,
Flag flag, String locations, String connects) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult;
}
ProcessInstance processInstance = processService.findProcessInstanceDetailById(processInstanceId);
if (processInstance == null) {
putMsg(result, Status.PROCESS_INSTANCE_NOT_EXIST, processInstanceId);
return result;
}
if (!processInstance.getState().typeIsFinished()) {
putMsg(result, Status.PROCESS_INSTANCE_STATE_OPERATION_ERROR,
processInstance.getName(), processInstance.getState().toString(), "update");
return result; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
ProcessDefinition processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
ProcessData processData = JSONUtils.parseObject(processInstanceJson, ProcessData.class);
result = processDefinitionService.checkProcessNodeList(processData, processInstanceJson);
if (result.get(Constants.STATUS) != Status.SUCCESS) {
return result;
}
Tenant tenant = processService.getTenantForProcess(processData.getTenantId(),
processDefinition.getUserId());
setProcessInstance(processInstance, tenant, scheduleTime, processData);
int updateDefine = 1;
if (Boolean.TRUE.equals(syncDefine)) {
processDefinition.setId(processDefineMapper.queryByCode(processInstance.getProcessDefinitionCode()).getId());
updateDefine = syncDefinition(loginUser, project, locations, connects,
processInstance, processDefinition, processData);
processInstance.setProcessDefinitionVersion(processDefinitionLogMapper.
queryMaxVersionForDefinition(processInstance.getProcessDefinitionCode()));
}
int update = processService.updateProcessInstance(processInstance);
if (update > 0 && updateDefine > 0) {
putMsg(result, Status.SUCCESS);
} else {
putMsg(result, Status.UPDATE_PROCESS_INSTANCE_ERROR);
}
return result;
}
/**
* sync definition according process instance |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | */
private int syncDefinition(User loginUser, Project project, String locations, String connects,
ProcessInstance processInstance, ProcessDefinition processDefinition,
ProcessData processData) {
String originDefParams = JSONUtils.toJsonString(processData.getGlobalParams());
processDefinition.setGlobalParams(originDefParams);
processDefinition.setLocations(locations);
processDefinition.setConnects(connects);
processDefinition.setTimeout(processInstance.getTimeout());
processDefinition.setUpdateTime(new Date());
return processService.saveProcessDefinition(loginUser, project, processDefinition.getName(),
processDefinition.getDescription(), locations, connects,
processData, processDefinition, false);
}
/**
* update process instance attributes
*/
private void setProcessInstance(ProcessInstance processInstance, Tenant tenant, String scheduleTime, ProcessData processData) {
Date schedule = processInstance.getScheduleTime();
if (scheduleTime != null) {
schedule = DateUtils.getScheduleDate(scheduleTime);
}
processInstance.setScheduleTime(schedule);
List<Property> globalParamList = processData.getGlobalParams();
Map<String, String> globalParamMap = Optional.ofNullable(globalParamList)
.orElse(Collections.emptyList())
.stream()
.collect(Collectors.toMap(Property::getProp, Property::getValue));
String globalParams = ParameterUtils.curingGlobalParams(globalParamMap, globalParamList,
processInstance.getCmdTypeIfComplement(), schedule); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | processInstance.setTimeout(processData.getTimeout());
if (tenant != null) {
processInstance.setTenantCode(tenant.getTenantCode());
}
processInstance.setGlobalParams(globalParams);
}
/**
* query parent process instance detail info by sub process instance id
*
* @param loginUser login user
* @param projectName project name
* @param subId sub process id
* @return parent instance detail
*/
@Override
public Map<String, Object> queryParentInstanceBySubId(User loginUser, String projectName, Integer subId) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult;
}
ProcessInstance subInstance = processService.findProcessInstanceDetailById(subId);
if (subInstance == null) {
putMsg(result, Status.PROCESS_INSTANCE_NOT_EXIST, subId);
return result;
}
if (subInstance.getIsSubProcess() == Flag.NO) {
putMsg(result, Status.PROCESS_INSTANCE_NOT_SUB_PROCESS_INSTANCE, subInstance.getName()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | return result;
}
ProcessInstance parentWorkflowInstance = processService.findParentProcessInstance(subId);
if (parentWorkflowInstance == null) {
putMsg(result, Status.SUB_PROCESS_INSTANCE_NOT_EXIST);
return result;
}
Map<String, Object> dataMap = new HashMap<>();
dataMap.put(Constants.PARENT_WORKFLOW_INSTANCE, parentWorkflowInstance.getId());
result.put(DATA_LIST, dataMap);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* delete process instance by id, at the same time,delete task instance and their mapping relation data
*
* @param loginUser login user
* @param projectName project name
* @param processInstanceId process instance id
* @return delete result code
*/
@Override
@Transactional(rollbackFor = RuntimeException.class)
public Map<String, Object> deleteProcessInstanceById(User loginUser, String projectName, Integer processInstanceId) {
Map<String, Object> result = new HashMap<>();
Project project = projectMapper.queryByName(projectName);
Map<String, Object> checkResult = projectService.checkProjectAndAuth(loginUser, project, projectName);
Status resultEnum = (Status) checkResult.get(Constants.STATUS);
if (resultEnum != Status.SUCCESS) {
return checkResult; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
ProcessInstance processInstance = processService.findProcessInstanceDetailById(processInstanceId);
if (null == processInstance) {
putMsg(result, Status.PROCESS_INSTANCE_NOT_EXIST, processInstanceId);
return result;
}
processService.removeTaskLogFile(processInstanceId);
//
int delete = processService.deleteWorkProcessInstanceById(processInstanceId);
processService.deleteAllSubWorkProcessByParentId(processInstanceId);
processService.deleteWorkProcessMapByParentId(processInstanceId);
if (delete > 0) {
putMsg(result, Status.SUCCESS);
} else {
putMsg(result, Status.DELETE_PROCESS_INSTANCE_BY_ID_ERROR);
}
return result;
}
/**
* view process instance variables
*
* @param processInstanceId process instance id
* @return variables data
*/
@Override
public Map<String, Object> viewVariables(Integer processInstanceId) {
Map<String, Object> result = new HashMap<>();
ProcessInstance processInstance = processInstanceMapper.queryDetailById(processInstanceId);
if (processInstance == null) {
throw new RuntimeException("workflow instance is null"); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
Map<String, String> timeParams = BusinessTimeUtils
.getBusinessTime(processInstance.getCmdTypeIfComplement(),
processInstance.getScheduleTime());
String userDefinedParams = processInstance.getGlobalParams();
//
List<Property> globalParams = new ArrayList<>();
//
String globalParamStr = ParameterUtils.convertParameterPlaceholders(JSONUtils.toJsonString(globalParams), timeParams);
globalParams = JSONUtils.toList(globalParamStr, Property.class);
for (Property property : globalParams) {
timeParams.put(property.getProp(), property.getValue());
}
if (userDefinedParams != null && userDefinedParams.length() > 0) {
globalParams = JSONUtils.toList(userDefinedParams, Property.class);
}
Map<String, Map<String, Object>> localUserDefParams = getLocalParams(processInstance, timeParams);
Map<String, Object> resultMap = new HashMap<>();
resultMap.put(GLOBAL_PARAMS, globalParams);
resultMap.put(LOCAL_PARAMS, localUserDefParams);
result.put(DATA_LIST, resultMap);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* get local params
*/
private Map<String, Map<String, Object>> getLocalParams(ProcessInstance processInstance, Map<String, String> timeParams) {
Map<String, Map<String, Object>> localUserDefParams = new HashMap<>();
List<TaskInstance> taskInstanceList = taskInstanceMapper.findValidTaskListByProcessId(processInstance.getId(), Flag.YES); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | for (TaskInstance taskInstance : taskInstanceList) {
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(
taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion());
String parameter = taskDefinitionLog.getTaskParams();
Map<String, String> map = JSONUtils.toMap(parameter);
String localParams = map.get(LOCAL_PARAMS);
if (localParams != null && !localParams.isEmpty()) {
localParams = ParameterUtils.convertParameterPlaceholders(localParams, timeParams);
List<Property> localParamsList = JSONUtils.toList(localParams, Property.class);
Map<String, Object> localParamsMap = new HashMap<>();
localParamsMap.put(TASK_TYPE, taskDefinitionLog.getTaskType());
localParamsMap.put(LOCAL_PARAMS_LIST, localParamsList);
if (CollectionUtils.isNotEmpty(localParamsList)) {
localUserDefParams.put(taskDefinitionLog.getName(), localParamsMap);
}
}
}
return localUserDefParams;
}
/**
* encapsulation gantt structure
*
* @param processInstanceId process instance id
* @return gantt tree data
* @throws Exception exception when json parse
*/
@Override
public Map<String, Object> viewGantt(Integer processInstanceId) throws Exception {
Map<String, Object> result = new HashMap<>();
ProcessInstance processInstance = processInstanceMapper.queryDetailById(processInstanceId); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | if (processInstance == null) {
throw new RuntimeException("workflow instance is null");
}
ProcessDefinition processDefinition = processDefinitionLogMapper.queryByDefinitionCodeAndVersion(
processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion()
);
GanttDto ganttDto = new GanttDto();
DAG<String, TaskNode, TaskNodeRelation> dag = processService.genDagGraph(processDefinition);
//
List<String> nodeList = dag.topologicalSort();
ganttDto.setTaskNames(nodeList);
List<Task> taskList = new ArrayList<>();
for (String node : nodeList) {
TaskInstance taskInstance = taskInstanceMapper.queryByInstanceIdAndName(processInstanceId, node);
if (taskInstance == null) {
continue;
}
Date startTime = taskInstance.getStartTime() == null ? new Date() : taskInstance.getStartTime();
Date endTime = taskInstance.getEndTime() == null ? new Date() : taskInstance.getEndTime();
Task task = new Task();
task.setTaskName(taskInstance.getName());
task.getStartDate().add(startTime.getTime());
task.getEndDate().add(endTime.getTime());
task.setIsoStart(startTime);
task.setIsoEnd(endTime);
task.setStatus(taskInstance.getState().toString());
task.setExecutionDate(taskInstance.getStartTime());
task.setDuration(DateUtils.format2Readable(endTime.getTime() - startTime.getTime()));
taskList.add(task); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java | }
ganttDto.setTasks(taskList);
result.put(DATA_LIST, ganttDto);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query process instance by processDefinitionCode and stateArray
*
* @param processDefinitionCode processDefinitionCode
* @param states states array
* @return process instance list
*/
@Override
public List<ProcessInstance> queryByProcessDefineCodeAndStatus(Long processDefinitionCode, int[] states) {
return processInstanceMapper.queryByProcessDefineCodeAndStatus(processDefinitionCode, states);
}
/**
* query process instance by processDefinitionCode
*
* @param processDefinitionCode processDefinitionCode
* @param size size
* @return process instance list
*/
@Override
public List<ProcessInstance> queryByProcessDefineCode(Long processDefinitionCode, int size) {
return processInstanceMapper.queryByProcessDefineCode(processDefinitionCode, size);
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static java.nio.charset.StandardCharsets.UTF_8;
import static com.fasterxml.jackson.databind.DeserializationFeature.ACCEPT_EMPTY_ARRAY_AS_NULL_OBJECT; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | import static com.fasterxml.jackson.databind.DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES;
import static com.fasterxml.jackson.databind.DeserializationFeature.READ_UNKNOWN_ENUM_VALUES_AS_NULL;
import static com.fasterxml.jackson.databind.MapperFeature.REQUIRE_SETTERS_FOR_GETTERS;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TimeZone;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectWriter;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.fasterxml.jackson.databind.node.TextNode;
import com.fasterxml.jackson.databind.type.CollectionType;
/**
* json utils
*/
public class JSONUtils { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | private static final Logger logger = LoggerFactory.getLogger(JSONUtils.class);
/**
* can use static singleton, inject: just make sure to reuse!
*/
private static final ObjectMapper objectMapper = new ObjectMapper()
.configure(FAIL_ON_UNKNOWN_PROPERTIES, false)
.configure(ACCEPT_EMPTY_ARRAY_AS_NULL_OBJECT, true)
.configure(READ_UNKNOWN_ENUM_VALUES_AS_NULL, true)
.configure(REQUIRE_SETTERS_FOR_GETTERS, true)
.setTimeZone(TimeZone.getDefault());
private JSONUtils() {
throw new UnsupportedOperationException("Construct JSONUtils");
}
public static ArrayNode createArrayNode() {
return objectMapper.createArrayNode();
}
public static ObjectNode createObjectNode() {
return objectMapper.createObjectNode(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | }
public static JsonNode toJsonNode(Object obj) {
return objectMapper.valueToTree(obj);
}
/**
* json representation of object
*
* @param object object
* @param feature feature
* @return object to json string
*/
public static String toJsonString(Object object, SerializationFeature feature) {
try {
ObjectWriter writer = objectMapper.writer(feature);
return writer.writeValueAsString(object);
} catch (Exception e) {
logger.error("object to json exception!", e);
}
return null;
}
/**
* This method deserializes the specified Json into an object of the specified class. It is not
* suitable to use if the specified class is a generic type since it will not have the generic
* type information because of the Type Erasure feature of Java. Therefore, this method should not
* be used if the desired type is a generic type. Note that this method works fine if the any of
* the fields of the specified object are generics, just the object itself should not be a
* generic type.
*
* @param json the string from which the object is to be deserialized
* @param clazz the class of T |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | * @param <T> T
* @return an object of type T from the string
* classOfT
*/
public static <T> T parseObject(String json, Class<T> clazz) {
if (StringUtils.isEmpty(json)) {
return null;
}
try {
return objectMapper.readValue(json, clazz);
} catch (Exception e) {
logger.error("parse object exception!", e);
}
return null;
}
/**
* deserialize
*
* @param src byte array
* @param clazz class
* @param <T> deserialize type
* @return deserialize type
*/
public static <T> T parseObject(byte[] src, Class<T> clazz) {
if (src == null) {
return null;
}
String json = new String(src, UTF_8);
return parseObject(json, clazz);
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,483 | [Bug][Api] Can't view variables | **Describe the bug**
When I want to view the variables defined in process instance, it will throw an exception.
**To Reproduce**
Steps to reproduce the behavior, for example:
1. Create a process definition
2. Add localparams
3. Execute the process definition
4. View params in process instance
**Screenshots**

**Which version of Dolphin Scheduler:**
-[dev]
**Additional context**
This issue caused by deserialize the taskParams in TaskDefinitionLog.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ProcessInstanceServiceImpl.java#L664-L666
For example, there exist list in the json attribute, so it cannot be deserialized as string.
```json
{
"resourceList":[
],
"localParams":[
{
"prop":"BATCH_TIME",
"direct":"IN",
"type":"VARCHAR",
"value":"20210517131849"
}
],
"rawScript":"echo "${BATCH_TIME}"",
"conditionResult":"{"successNode":[""],"failedNode":[""]}",
"dependence":"{}"
}
```
And there are multiple places use different way to deserialize the` taskParams`.
https://github.com/apache/dolphinscheduler/blob/68301db6b914ff4002bfbc531c6810864d8e47c2/dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java#L1611
I think it is better to use the same way to do this transform, otherwise, once we make changes, we need to change many places.
And the `taskParams` is transported by front-end and stored in database as a JSON string. We use Map to represent this field in backend, I think it is better to define a specific class to express the `taskParams`, this maybe helpful for deserialize and code maintain.
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | /**
* json to list
*
* @param json json string
* @param clazz class
* @param <T> T
* @return list
*/
public static <T> List<T> toList(String json, Class<T> clazz) {
if (StringUtils.isEmpty(json)) {
return Collections.emptyList();
}
try {
CollectionType listType = objectMapper.getTypeFactory().constructCollectionType(ArrayList.class, clazz);
return objectMapper.readValue(json, listType);
} catch (Exception e) {
logger.error("parse list exception!", e);
}
return Collections.emptyList();
}
/**
* check json object valid
*
* @param json json
* @return true if valid
*/
public static boolean checkJsonValid(String json) {
if (StringUtils.isEmpty(json)) {
return false;
} |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | try {
objectMapper.readTree(json);
return true;
} catch (IOException e) {
logger.error("check json object valid exception!", e);
}
return false;
}
/**
* Method for finding a JSON Object field with specified name in this
* node or its child nodes, and returning value it has.
* If no matching field is found in this node or its descendants, returns null.
*
* @param jsonNode json node
* @param fieldName Name of field to look for
* @return Value of first matching node found, if any; null if none
*/
public static String findValue(JsonNode jsonNode, String fieldName) {
JsonNode node = jsonNode.findValue(fieldName);
if (node == null) {
return null;
}
return node.asText();
}
/**
* json to map
* {@link #toMap(String, Class, Class)}
*
* @param json json
* @return json to map |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | */
public static Map<String, String> toMap(String json) {
return parseObject(json, new TypeReference<Map<String, String>>() {});
}
/**
* json to map
*
* @param json json
* @param classK classK
* @param classV classV
* @param <K> K
* @param <V> V
* @return to map
*/
public static <K, V> Map<K, V> toMap(String json, Class<K> classK, Class<V> classV) {
return parseObject(json, new TypeReference<Map<K, V>>() {});
}
/**
* json to object
*
* @param json json string
* @param type type reference
* @param <T>
* @return return parse object
*/
public static <T> T parseObject(String json, TypeReference<T> type) {
if (StringUtils.isEmpty(json)) {
return null;
}
try { |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | return objectMapper.readValue(json, type);
} catch (Exception e) {
logger.error("json to map exception!", e);
}
return null;
}
/**
* object to json string
*
* @param object object
* @return json string
*/
public static String toJsonString(Object object) {
try {
return objectMapper.writeValueAsString(object);
} catch (Exception e) {
throw new RuntimeException("Object json deserialization exception.", e);
}
}
/**
* serialize to json byte
*
* @param obj object
* @param <T> object type
* @return byte array
*/
public static <T> byte[] toJsonByteArray(T obj) {
if (obj == null) {
return null;
} |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | String json = "";
try {
json = toJsonString(obj);
} catch (Exception e) {
logger.error("json serialize exception.", e);
}
return json.getBytes(UTF_8);
}
public static ObjectNode parseObject(String text) {
try {
if (text.isEmpty()) {
return parseObject(text, ObjectNode.class);
} else {
return (ObjectNode) objectMapper.readTree(text);
}
} catch (Exception e) {
throw new RuntimeException("String json deserialization exception.", e);
}
}
public static ArrayNode parseArray(String text) {
try {
return (ArrayNode) objectMapper.readTree(text);
} catch (Exception e) {
throw new RuntimeException("Json deserialization exception.", e);
}
}
/**
* json serializer
*/
public static class JsonDataSerializer extends JsonSerializer<String> { |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/JSONUtils.java | @Override
public void serialize(String value, JsonGenerator gen, SerializerProvider provider) throws IOException {
gen.writeRawValue(value);
}
}
/**
* json data deserializer
*/
public static class JsonDataDeserializer extends JsonDeserializer<String> {
@Override
public String deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
JsonNode node = p.getCodec().readTree(p);
if (node instanceof TextNode) {
return node.asText();
} else {
return node.toString();
}
}
}
} |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import org.apache.dolphinscheduler.common.enums.DataType;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.process.Property;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List; |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;
public class JSONUtilsTest {
@Test
public void createArrayNodeTest() {
Property property = new Property();
property.setProp("ds");
property.setDirect(Direct.IN);
property.setType(DataType.VARCHAR);
property.setValue("sssssss");
String str = "[{\"prop\":\"ds\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"sssssss\"},{\"prop\":\"ds\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"sssssss\"}]";
JsonNode jsonNode = JSONUtils.toJsonNode(property);
ArrayNode arrayNode = JSONUtils.createArrayNode();
ArrayList<JsonNode> objects = new ArrayList<>();
objects.add(jsonNode);
objects.add(jsonNode);
ArrayNode jsonNodes = arrayNode.addAll(objects);
String s = JSONUtils.toJsonString(jsonNodes);
Assert.assertEquals(s, str);
}
@Test
public void toJsonNodeTest() {
Property property = new Property();
property.setProp("ds"); |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | property.setDirect(Direct.IN);
property.setType(DataType.VARCHAR);
property.setValue("sssssss");
String str = "{\"prop\":\"ds\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"sssssss\"}";
JsonNode jsonNodes = JSONUtils.toJsonNode(property);
String s = JSONUtils.toJsonString(jsonNodes);
Assert.assertEquals(s, str);
}
@Test
public void createObjectNodeTest() {
String jsonStr = "{\"a\":\"b\",\"b\":\"d\"}";
ObjectNode objectNode = JSONUtils.createObjectNode();
objectNode.put("a","b");
objectNode.put("b","d");
String s = JSONUtils.toJsonString(objectNode);
Assert.assertEquals(s, jsonStr);
}
@Test
public void toMap() {
String jsonStr = "{\"id\":\"1001\",\"name\":\"Jobs\"}";
Map<String, String> models = JSONUtils.toMap(jsonStr);
Assert.assertEquals("1001", models.get("id"));
Assert.assertEquals("Jobs", models.get("name"));
}
@Test
public void convert2Property() {
Property property = new Property();
property.setProp("ds");
property.setDirect(Direct.IN);
property.setType(DataType.VARCHAR); |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | property.setValue("sssssss");
String str = "{\"direct\":\"IN\",\"prop\":\"ds\",\"type\":\"VARCHAR\",\"value\":\"sssssss\"}";
Property property1 = JSONUtils.parseObject(str, Property.class);
Direct direct = property1.getDirect();
Assert.assertEquals(Direct.IN, direct);
}
@Test
public void string2MapTest() {
String str = list2String();
List<LinkedHashMap> maps = JSONUtils.toList(str,
LinkedHashMap.class);
Assert.assertEquals(1, maps.size());
Assert.assertEquals("mysql200", maps.get(0).get("mysql service name"));
Assert.assertEquals("192.168.xx.xx", maps.get(0).get("mysql address"));
Assert.assertEquals("3306", maps.get(0).get("port"));
Assert.assertEquals("80", maps.get(0).get("no index of number"));
Assert.assertEquals("190", maps.get(0).get("database client connections"));
}
public String list2String() {
LinkedHashMap<String, String> map1 = new LinkedHashMap<>();
map1.put("mysql service name", "mysql200");
map1.put("mysql address", "192.168.xx.xx");
map1.put("port", "3306");
map1.put("no index of number", "80");
map1.put("database client connections", "190");
List<LinkedHashMap<String, String>> maps = new ArrayList<>();
maps.add(0, map1);
String resultJson = JSONUtils.toJsonString(maps);
return resultJson;
} |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | @Test
public void testParseObject() {
Assert.assertNull(JSONUtils.parseObject(""));
Assert.assertNull(JSONUtils.parseObject("foo", String.class));
}
@Test
public void testJsonByteArray() {
String str = "foo";
byte[] serializeByte = JSONUtils.toJsonByteArray(str);
String deserialize = JSONUtils.parseObject(serializeByte, String.class);
Assert.assertEquals(str, deserialize);
str = null;
serializeByte = JSONUtils.toJsonByteArray(str);
deserialize = JSONUtils.parseObject(serializeByte, String.class);
Assert.assertNull(deserialize);
}
@Test
public void testToList() {
Assert.assertEquals(new ArrayList(),
JSONUtils.toList("A1B2C3", null));
Assert.assertEquals(new ArrayList(),
JSONUtils.toList("", null));
}
@Test
public void testCheckJsonValid() {
Assert.assertTrue(JSONUtils.checkJsonValid("3"));
Assert.assertFalse(JSONUtils.checkJsonValid(""));
}
@Test
public void testFindValue() { |
| https://github.com/apache/dolphinscheduler/issues/5483 | https://github.com/apache/dolphinscheduler/pull/5631 | 8bf042ae6ef7576209a0489e784684f4960ae6e0 | 0d5037e7c37d7903d9172f165b348058f1ddbf88 | "2021-05-17T06:24:02Z" | java | "2021-06-13T03:43:53Z" | dolphinscheduler-common/src/test/java/org/apache/dolphinscheduler/common/utils/JSONUtilsTest.java | Assert.assertNull(JSONUtils.findValue(
new ArrayNode(new JsonNodeFactory(true)), null));
}
@Test
public void testToMap() {
Map<String, String> map = new HashMap<>();
map.put("foo", "bar");
Assert.assertTrue(map.equals(JSONUtils.toMap(
"{\n" + "\"foo\": \"bar\"\n" + "}")));
Assert.assertFalse(map.equals(JSONUtils.toMap(
"{\n" + "\"bar\": \"foo\"\n" + "}")));
Assert.assertNull(JSONUtils.toMap("3"));
Assert.assertNull(JSONUtils.toMap(null));
Assert.assertNull(JSONUtils.toMap("3", null, null));
Assert.assertNull(JSONUtils.toMap(null, null, null));
String str = "{\"resourceList\":[],\"localParams\":[],\"rawScript\":\"#!/bin/bash\\necho \\\"shell-1\\\"\"}";
Map<String, String> m = JSONUtils.toMap(str);
Assert.assertNotNull(m);
}
@Test
public void testToJsonString() {
Map<String, Object> map = new HashMap<>();
map.put("foo", "bar");
Assert.assertEquals("{\"foo\":\"bar\"}",
JSONUtils.toJsonString(map));
Assert.assertEquals(String.valueOf((Object) null),
JSONUtils.toJsonString(null));
Assert.assertEquals("{\"foo\":\"bar\"}",
JSONUtils.toJsonString(map, SerializationFeature.WRITE_NULL_MAP_VALUES));
} |