Dataset schema: column names with types and value statistics, as summarized by the dataset viewer.

| Column | Type | Value statistics |
|---|---|---|
| status | string | 1 distinct value |
| repo_name | string | 31 distinct values |
| repo_url | string | 31 distinct values |
| issue_id | int64 | 1 to 104k |
| title | string | length 4 to 233 |
| body | string | length 0 to 186k, nullable (⌀) |
| issue_url | string | length 38 to 56 |
| pull_url | string | length 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string | 5 distinct values |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | length 7 to 188 |
| chunk_content | string | length 1 to 1.03M |
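For illustration, one row of this dataset could be modelled with a plain Java class like the hypothetical sketch below. The field names mirror the columns above; the Java types are assumptions inferred from the viewer's type summary, not an official schema definition.

```java
import java.time.Instant;

// Hypothetical model of one dataset row; names mirror the columns above.
// Types are inferred from the viewer summary ("stringclasses"/"stringlengths"
// are value statistics, not part of the type; timestamps are UTC microseconds).
public class IssueFixChunkRow {
    public String status;           // e.g. "closed" (single distinct value)
    public String repoName;         // e.g. "apache/dolphinscheduler"
    public String repoUrl;
    public long issueId;
    public String title;
    public String body;             // issue body in Markdown, may be null (⌀)
    public String issueUrl;
    public String pullUrl;
    public String beforeFixSha;     // 40-character commit SHA before the fix
    public String afterFixSha;      // 40-character commit SHA after the fix
    public Instant reportDatetime;  // timestamp[us, tz=UTC]
    public String language;         // e.g. "java"
    public Instant commitDatetime;  // timestamp[us, tz=UTC]
    public String updatedFile;      // path of the file changed by the fix
    public String chunkContent;     // one chunk of that file's content
}
```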
status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 8544
title: [Bug] [Resource Center-UDF Management/Resource Management] Folder size statistics error

body:

### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/76080484/155693270-c322ed34-8867-4ba4-849c-f5bc99249fb4.png">
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/76080484/155693495-91b99fbb-1f12-495e-9643-05990651fcec.png">
### What you expected to happen
The parent folder size should equal the combined size of its child files and folders.
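To make the expectation concrete, the following is a minimal, hypothetical Java sketch of that aggregation rule (a directory's size is the sum of its children's sizes). It is an illustration only, not DolphinScheduler code and not the fix that PR #9107 applies.

```java
import java.util.List;

// Illustration of the expected behaviour: a directory's effective size is the
// sum of everything underneath it. The Node type and the sample tree are
// hypothetical, not DolphinScheduler classes.
public class FolderSizeSketch {

    static class Node {
        final boolean directory;
        final long ownSize;          // bytes for a file, 0 for a directory
        final List<Node> children;

        Node(boolean directory, long ownSize, List<Node> children) {
            this.directory = directory;
            this.ownSize = ownSize;
            this.children = children;
        }
    }

    // Recursively compute the effective size of a node.
    static long effectiveSize(Node node) {
        if (!node.directory) {
            return node.ownSize;
        }
        long total = 0L;
        for (Node child : node.children) {
            total += effectiveSize(child);
        }
        return total;
    }

    public static void main(String[] args) {
        Node jar = new Node(false, 1_048_576L, List.of());      // uploaded jar
        Node inner = new Node(true, 0L, List.of(jar));
        Node outer = new Node(true, 0L, List.of(inner));
        // Expected: the outer folder reports the jar's size, not 0.
        System.out.println(effectiveSize(outer));               // 1048576
    }
}
```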
### How to reproduce
1. Create folder
2. Open folder
3. Upload jar package
4. Return to outer folder
### Anything else
_No response_
### Version
2.0.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/8544
pull_url: https://github.com/apache/dolphinscheduler/pull/9107
before_fix_sha: 08ea1aa701910d90ed16164e9019557292cc4249
after_fix_sha: 7c5bebea98b64394a74960a5fa0e7a40af26c465
report_datetime: 2022-02-25T09:55:26Z
language: java
commit_datetime: 2022-03-23T10:58:41Z
updated_file: dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ResourcesServiceImpl.java

chunk_content:

String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
List<Map<String, Object>> list = processDefinitionMapper.listResources();
Map<Integer, Set<Long>> resourceProcessMap = ResourceProcessDefinitionUtils.getResourceProcessDefinitionMap(list);
Set<Integer> resourceIdSet = resourceProcessMap.keySet();
List<Integer> allChildren = listAllChildren(resource,true);
Integer[] needDeleteResourceIdArray = allChildren.toArray(new Integer[allChildren.size()]);
if (resource.getType() == (ResourceType.UDF)) {
List<UdfFunc> udfFuncs = udfFunctionMapper.listUdfByResourceId(needDeleteResourceIdArray);
if (CollectionUtils.isNotEmpty(udfFuncs)) {
logger.error("can't be deleted,because it is bound by UDF functions:{}", udfFuncs);
putMsg(result,Status.UDF_RESOURCE_IS_BOUND,udfFuncs.get(0).getFuncName());
return result;
}
}
if (resourceIdSet.contains(resource.getPid())) {
logger.error("can't be deleted,because it is used of process definition");
putMsg(result, Status.RESOURCE_IS_USED);
return result;
}
resourceIdSet.retainAll(allChildren);
if (CollectionUtils.isNotEmpty(resourceIdSet)) {
logger.error("can't be deleted,because it is used of process definition");
for (Integer resId : resourceIdSet) {
logger.error("resource id:{} is used of process definition {}",resId,resourceProcessMap.get(resId)); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,544 | [Bug] [Resource Center-UDF Management/Resource Management] Folder size statistics error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/76080484/155693270-c322ed34-8867-4ba4-849c-f5bc99249fb4.png">
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/76080484/155693495-91b99fbb-1f12-495e-9643-05990651fcec.png">
### What you expected to happen
The parent folder size is the child file / folder size count
### How to reproduce
1. Create folder
2. Open folder
3. Upload jar package
4. Return to outer folder
### Anything else
_No response_
### Version
2.0.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8544 | https://github.com/apache/dolphinscheduler/pull/9107 | 08ea1aa701910d90ed16164e9019557292cc4249 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 2022-02-25T09:55:26Z | java | 2022-03-23T10:58:41Z | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ResourcesServiceImpl.java | }
putMsg(result, Status.RESOURCE_IS_USED);
return result;
}
String hdfsFilename = HadoopUtils.getHdfsFileName(resource.getType(), tenantCode, resource.getFullName());
resourcesMapper.deleteIds(needDeleteResourceIdArray);
resourceUserMapper.deleteResourceUserArray(0, needDeleteResourceIdArray);
HadoopUtils.getInstance().delete(hdfsFilename, true);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* verify resource by name and type
* @param loginUser login user
* @param fullName resource full name
* @param type resource type
* @return true if the resource name not exists, otherwise return false
*/
@Override
public Result<Object> verifyResourceName(String fullName, ResourceType type, User loginUser) {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (checkResourceExists(fullName, type.ordinal())) {
logger.error("resource type:{} name:{} has exist, can't create again.", type, RegexUtils.escapeNRT(fullName));
putMsg(result, Status.RESOURCE_EXIST);
} else {
Tenant tenant = tenantMapper.queryById(loginUser.getTenantId());
if (tenant != null) {
String tenantCode = tenant.getTenantCode();
try {
String hdfsFilename = HadoopUtils.getHdfsFileName(type,tenantCode,fullName);
if (HadoopUtils.getInstance().exists(hdfsFilename)) {
logger.error("resource type:{} name:{} has exist in hdfs {}, can't create again.", type, RegexUtils.escapeNRT(fullName), hdfsFilename);
putMsg(result, Status.RESOURCE_FILE_EXIST,hdfsFilename);
}
} catch (Exception e) {
logger.error(e.getMessage(),e);
putMsg(result,Status.HDFS_OPERATION_ERROR);
}
} else {
putMsg(result,Status.CURRENT_LOGIN_USER_TENANT_NOT_EXIST);
}
}
return result;
}
/**
* verify resource by full name or pid and type
* @param fullName resource full name
* @param id resource id
* @param type resource type
* @return true if the resource full name or pid not exists, otherwise return false
*/
@Override
public Result<Object> queryResource(String fullName, Integer id, ResourceType type) {
Result<Object> result = new Result<>();
if (StringUtils.isBlank(fullName) && id == null) {
putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR);
return result;
}
if (StringUtils.isNotBlank(fullName)) {
List<Resource> resourceList = resourcesMapper.queryResource(fullName,type.ordinal());
if (CollectionUtils.isEmpty(resourceList)) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
putMsg(result, Status.SUCCESS);
result.setData(resourceList.get(0));
} else {
Resource resource = resourcesMapper.selectById(id);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
Resource parentResource = resourcesMapper.selectById(resource.getPid());
if (parentResource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
putMsg(result, Status.SUCCESS);
result.setData(parentResource);
}
return result;
}
/**
* get resource by id
* @param id resource id
* @return resource
*/
@Override
public Result<Object> queryResourceById(Integer id) {
Result<Object> result = new Result<>();
Resource resource = resourcesMapper.selectById(id);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
putMsg(result, Status.SUCCESS);
result.setData(resource);
return result;
}
/**
* view resource file online
*
* @param resourceId resource id
* @param skipLineNum skip line number
* @param limit limit
* @return resource content
*/
@Override
public Result<Object> readResource(int resourceId, int skipLineNum, int limit) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
String nameSuffix = Files.getFileExtension(resource.getAlias());
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support view, resource id {}", nameSuffix, resourceId);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
String hdfsFileName = HadoopUtils.getHdfsResourceFileName(tenantCode, resource.getFullName());
logger.info("resource hdfs path is {}", hdfsFileName);
try {
if (HadoopUtils.getInstance().exists(hdfsFileName)) {
List<String> content = HadoopUtils.getInstance().catFile(hdfsFileName, skipLineNum, limit);
putMsg(result, Status.SUCCESS);
Map<String, Object> map = new HashMap<>();
map.put(ALIAS, resource.getAlias());
map.put(CONTENT, String.join("\n", content));
result.setData(map);
} else {
logger.error("read file {} not exist in hdfs", hdfsFileName);
putMsg(result, Status.RESOURCE_FILE_NOT_EXIST,hdfsFileName);
}
} catch (Exception e) {
logger.error("Resource {} read failed", hdfsFileName, e);
putMsg(result, Status.HDFS_OPERATION_ERROR);
}
return result;
}
/**
* create resource file online
*
* @param loginUser login user
* @param type resource type
* @param fileName file name
* @param fileSuffix file suffix
* @param desc description
* @param content content
* @param pid pid
* @param currentDir current directory
* @return create result code
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> onlineCreateResource(User loginUser, ResourceType type, String fileName, String fileSuffix, String desc, String content,int pid,String currentDir) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
String nameSuffix = fileSuffix.trim();
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support create", nameSuffix);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String name = fileName.trim() + "." + nameSuffix;
String fullName = currentDir.equals("/") ? String.format("%s%s",currentDir,name) : String.format("%s/%s",currentDir,name);
result = verifyResource(loginUser, type, fullName, pid);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Date now = new Date();
Resource resource = new Resource(pid,name,fullName,false,desc,name,loginUser.getId(),type,content.getBytes().length,now,now);
resourcesMapper.insert(resource);
putMsg(result, Status.SUCCESS);
Map<Object, Object> dataMap = new BeanMap(resource);
Map<String, Object> resultMap = new HashMap<>();
for (Map.Entry<Object, Object> entry: dataMap.entrySet()) {
if (!Constants.CLASS.equalsIgnoreCase(entry.getKey().toString())) {
resultMap.put(entry.getKey().toString(), entry.getValue());
}
}
result.setData(resultMap);
String tenantCode = tenantMapper.queryById(loginUser.getTenantId()).getTenantCode();
result = uploadContentToHdfs(fullName, tenantCode, content);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
throw new ServiceException(result.getMsg());
}
return result;
}
private Result<Object> checkResourceUploadStartupState() {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (!PropertyUtils.getResUploadStartupState()) {
logger.error("resource upload startup state: {}", PropertyUtils.getResUploadStartupState());
putMsg(result, Status.HDFS_NOT_STARTUP);
return result;
}
return result;
}
private Result<Object> verifyResource(User loginUser, ResourceType type, String fullName, int pid) {
Result<Object> result = verifyResourceName(fullName, type, loginUser);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
return verifyPid(loginUser, pid);
}
private Result<Object> verifyPid(User loginUser, int pid) {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (pid != -1) {
Resource parentResource = resourcesMapper.selectById(pid);
if (parentResource == null) {
putMsg(result, Status.PARENT_RESOURCE_NOT_EXIST);
return result;
}
if (!hasPerm(loginUser, parentResource.getUserId())) {
putMsg(result, Status.USER_NO_OPERATION_PERM);
return result;
}
}
return result;
}
/**
* updateProcessInstance resource
*
* @param resourceId resource id
* @param content content
* @return update result cod
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> updateResourceContent(int resourceId, String content) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
logger.error("read file not exist, resource id {}", resourceId);
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
String nameSuffix = Files.getFileExtension(resource.getAlias());
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support updateProcessInstance, resource id {}", nameSuffix, resourceId);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
resource.setSize(content.getBytes().length);
resource.setUpdateTime(new Date());
resourcesMapper.updateById(resource);
result = uploadContentToHdfs(resource.getFullName(), tenantCode, content);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
throw new ServiceException(result.getMsg());
}
return result;
}
/**
* @param resourceName resource name
* @param tenantCode tenant code
* @param content content
* @return result
*/
private Result<Object> uploadContentToHdfs(String resourceName, String tenantCode, String content) {
Result<Object> result = new Result<>();
String localFilename = "";
String hdfsFileName = "";
try {
localFilename = FileUtils.getUploadFilename(tenantCode, UUID.randomUUID().toString());
if (!FileUtils.writeContent2File(content, localFilename)) {
logger.error("file {} fail, content is {}", localFilename, RegexUtils.escapeNRT(content));
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
hdfsFileName = HadoopUtils.getHdfsResourceFileName(tenantCode, resourceName);
String resourcePath = HadoopUtils.getHdfsResDir(tenantCode);
logger.info("resource hdfs path is {}, resource dir is {}", hdfsFileName, resourcePath);
HadoopUtils hadoopUtils = HadoopUtils.getInstance();
if (!hadoopUtils.exists(resourcePath)) {
createTenantDirIfNotExists(tenantCode);
}
if (hadoopUtils.exists(hdfsFileName)) {
hadoopUtils.delete(hdfsFileName, false);
}
hadoopUtils.copyLocalToHdfs(localFilename, hdfsFileName, true, true);
} catch (Exception e) {
logger.error(e.getMessage(), e);
result.setCode(Status.HDFS_OPERATION_ERROR.getCode());
result.setMsg(String.format("copy %s to hdfs %s fail", localFilename, hdfsFileName));
return result;
}
putMsg(result, Status.SUCCESS);
return result;
}
/**
* download file
*
* @param resourceId resource id
* @return resource content
* @throws IOException exception
*/
@Override
public org.springframework.core.io.Resource downloadResource(int resourceId) throws IOException {
if (!PropertyUtils.getResUploadStartupState()) {
logger.error("resource upload startup state: {}", PropertyUtils.getResUploadStartupState());
throw new ServiceException("hdfs not startup");
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
logger.error("download file not exist, resource id {}", resourceId);
return null;
}
if (resource.isDirectory()) {
logger.error("resource id {} is directory,can't download it", resourceId);
throw new ServiceException("can't download directory");
}
int userId = resource.getUserId();
User user = userMapper.selectById(userId);
if (user == null) {
logger.error("user id {} not exists", userId);
throw new ServiceException(String.format("resource owner id %d not exist",userId));
}
Tenant tenant = tenantMapper.queryById(user.getTenantId());
if (tenant == null) {
logger.error("tenant id {} not exists", user.getTenantId());
throw new ServiceException(String.format("The tenant id %d of resource owner not exist",user.getTenantId()));
}
String tenantCode = tenant.getTenantCode();
String hdfsFileName = HadoopUtils.getHdfsFileName(resource.getType(), tenantCode, resource.getFullName());
String localFileName = FileUtils.getDownloadFilename(resource.getAlias());
logger.info("resource hdfs path is {}, download local filename is {}", hdfsFileName, localFileName);
HadoopUtils.getInstance().copyHdfsToLocal(hdfsFileName, localFileName, false, true);
return org.apache.dolphinscheduler.api.utils.FileUtils.file2Resource(localFileName);
}
/**
* list all file
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> authorizeResourceTree(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
List<Resource> resourceList;
if (isAdmin(loginUser)) {
resourceList = resourcesMapper.queryResourceExceptUserId(userId);
} else {
resourceList = resourcesMapper.queryResourceListAuthored(loginUser.getId(), -1);
}
List<ResourceComponent> list;
if (CollectionUtils.isNotEmpty(resourceList)) {
Visitor visitor = new ResourceTreeVisitor(resourceList);
list = visitor.visit().getChildren();
} else {
list = new ArrayList<>(0);
}
result.put(Constants.DATA_LIST, list);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* unauthorized file
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> unauthorizedFile(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
List<Resource> resourceList;
if (isAdmin(loginUser)) {
resourceList = resourcesMapper.queryResourceExceptUserId(userId);
} else {
resourceList = resourcesMapper.queryResourceListAuthored(loginUser.getId(), -1);
}
List<Resource> list;
if (resourceList != null && !resourceList.isEmpty()) {
Set<Resource> resourceSet = new HashSet<>(resourceList);
List<Resource> authedResourceList = queryResourceList(userId, Constants.AUTHORIZE_WRITABLE_PERM);
getAuthorizedResourceList(resourceSet, authedResourceList);
list = new ArrayList<>(resourceSet);
} else {
list = new ArrayList<>(0);
}
Visitor visitor = new ResourceTreeVisitor(list);
result.put(Constants.DATA_LIST, visitor.visit().getChildren());
putMsg(result, Status.SUCCESS);
return result;
}
/**
* unauthorized udf function
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> unauthorizedUDFFunction(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
List<UdfFunc> udfFuncList;
if (isAdmin(loginUser)) {
udfFuncList = udfFunctionMapper.queryUdfFuncExceptUserId(userId);
} else {
udfFuncList = udfFunctionMapper.selectByMap(Collections.singletonMap("user_id", loginUser.getId()));
}
List<UdfFunc> resultList = new ArrayList<>();
Set<UdfFunc> udfFuncSet;
if (CollectionUtils.isNotEmpty(udfFuncList)) {
udfFuncSet = new HashSet<>(udfFuncList);
List<UdfFunc> authedUDFFuncList = udfFunctionMapper.queryAuthedUdfFunc(userId);
getAuthorizedResourceList(udfFuncSet, authedUDFFuncList);
resultList = new ArrayList<>(udfFuncSet);
}
result.put(Constants.DATA_LIST, resultList);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* authorized udf function
*
* @param loginUser login user
* @param userId user id
* @return authorized result code
*/
@Override
public Map<String, Object> authorizedUDFFunction(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
List<UdfFunc> udfFuncs = udfFunctionMapper.queryAuthedUdfFunc(userId);
result.put(Constants.DATA_LIST, udfFuncs);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* authorized file
*
* @param loginUser login user
* @param userId user id
* @return authorized result
*/
@Override
public Map<String, Object> authorizedFile(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
List<Resource> authedResources = queryResourceList(userId, Constants.AUTHORIZE_WRITABLE_PERM);
Visitor visitor = new ResourceTreeVisitor(authedResources);
String visit = JSONUtils.toJsonString(visitor.visit(), SerializationFeature.ORDER_MAP_ENTRIES_BY_KEYS);
logger.info(visit);
String jsonTreeStr = JSONUtils.toJsonString(visitor.visit().getChildren(), SerializationFeature.ORDER_MAP_ENTRIES_BY_KEYS);
logger.info(jsonTreeStr);
result.put(Constants.DATA_LIST, visitor.visit().getChildren());
putMsg(result,Status.SUCCESS);
return result;
}
/**
* get authorized resource list
*
* @param resourceSet resource set
* @param authedResourceList authorized resource list
*/
private void getAuthorizedResourceList(Set<?> resourceSet, List<?> authedResourceList) {
Set<?> authedResourceSet;
if (CollectionUtils.isNotEmpty(authedResourceList)) {
authedResourceSet = new HashSet<>(authedResourceList);
resourceSet.removeAll(authedResourceSet);
}
}
/**
* get tenantCode by UserId
*
* @param userId user id
* @param result return result
* @return tenant code
*/
private String getTenantCode(int userId,Result<Object> result) {
User user = userMapper.selectById(userId);
if (user == null) {
logger.error("user {} not exists", userId);
putMsg(result, Status.USER_NOT_EXIST,userId);
return null;
}
Tenant tenant = tenantMapper.queryById(user.getTenantId());
if (tenant == null) {
logger.error("tenant not exists");
putMsg(result, Status.CURRENT_LOGIN_USER_TENANT_NOT_EXIST);
return null;
}
return tenant.getTenantCode();
}
/**
* list all children id
* @param resource resource
* @param containSelf whether add self to children list
* @return all children id
*/
List<Integer> listAllChildren(Resource resource,boolean containSelf) {
List<Integer> childList = new ArrayList<>();
if (resource.getId() != -1 && containSelf) {
childList.add(resource.getId());
}
if (resource.isDirectory()) {
listAllChildren(resource.getId(),childList);
}
return childList;
}
/**
* list all children id
* @param resourceId resource id
* @param childList child list
*/
void listAllChildren(int resourceId,List<Integer> childList) {
List<Integer> children = resourcesMapper.listChildren(resourceId);
for (int childId : children) {
childList.add(childId);
listAllChildren(childId, childList);
}
}
/**
* query authored resource list (own and authorized)
* @param loginUser login user
* @param type ResourceType
* @return all authored resource list
*/
private List<Resource> queryAuthoredResourceList(User loginUser, ResourceType type) {
List<Resource> relationResources;
int userId = loginUser.getId();
if (isAdmin(loginUser)) {
userId = 0;
relationResources = new ArrayList<>();
} else {
relationResources = queryResourceList(userId, 0);
}
List<Resource> relationTypeResources =
relationResources.stream().filter(rs -> rs.getType() == type).collect(Collectors.toList());
List<Resource> ownResourceList = resourcesMapper.queryResourceListAuthored(userId, type.ordinal());
ownResourceList.addAll(relationTypeResources);
return ownResourceList;
}
/**
* query resource list by userId and perm
* @param userId userId
* @param perm perm
* @return resource list
*/
private List<Resource> queryResourceList(Integer userId, int perm) {
List<Integer> resIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, perm);
return CollectionUtils.isEmpty(resIds) ? new ArrayList<>() : resourcesMapper.queryResourceListById(resIds);
}
}

updated_file: dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/DolphinSchedulerManager.java

chunk_content:

/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.tools.datasource;
import org.apache.dolphinscheduler.dao.upgrade.SchemaUtils;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.tools.datasource.dao.UpgradeDao;
import java.io.IOException;
import java.sql.Connection;
import java.util.List;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
@Service
public class DolphinSchedulerManager {
private static final Logger logger = LoggerFactory.getLogger(DolphinSchedulerManager.class);
private final UpgradeDao upgradeDao;
public DolphinSchedulerManager(DataSource dataSource, List<UpgradeDao> daos) throws Exception {
final DbType type = getCurrentDbType(dataSource);
upgradeDao = daos.stream()
.filter(it -> it.getDbType() == type)
.findFirst()
.orElseThrow(() -> new RuntimeException(
"Cannot find UpgradeDao implementation for db type: " + type
));
}
private DbType getCurrentDbType(DataSource dataSource) throws Exception {
try (Connection conn = dataSource.getConnection()) {
String name = conn.getMetaData().getDatabaseProductName().toUpperCase();
return DbType.valueOf(name);
}
}
public void initDolphinScheduler() {
this.initDolphinSchedulerSchema();
}
/**
* whether schema is initialized
* @return true if schema is initialized
*/
public boolean schemaIsInitialized() {
if (upgradeDao.isExistsTable("t_escheduler_version")
|| upgradeDao.isExistsTable("t_ds_version")
|| upgradeDao.isExistsTable("t_escheduler_queue")) {
logger.info("The database has been initialized. Skip the initialization step");
return true;
}
return false;
}
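/**
 * create all DolphinScheduler tables by running the full init script for the current database type
 */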
public void initDolphinSchedulerSchema() {
logger.info("Start initializing the DolphinScheduler manager table structure");
upgradeDao.initSchema();
}
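/**
 * determine the currently installed metadata version, apply every schema upgrade directory
 * newer than it in order, then record the software version
 */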
public void upgradeDolphinScheduler() throws IOException {
List<String> schemaList = SchemaUtils.getAllSchemaList();
if (schemaList == null || schemaList.size() == 0) {
logger.info("There is no schema to upgrade!");
} else {
String version;
if (upgradeDao.isExistsTable("t_escheduler_version")) {
version = upgradeDao.getCurrentVersion("t_escheduler_version");
} else if (upgradeDao.isExistsTable("t_ds_version")) {
version = upgradeDao.getCurrentVersion("t_ds_version");
} else if (upgradeDao.isExistsColumn("t_escheduler_queue", "create_time")) {
version = "1.0.1";
} else if (upgradeDao.isExistsTable("t_escheduler_queue")) {
version = "1.0.0";
} else {
logger.error("Unable to determine current software version, so cannot upgrade");
throw new RuntimeException("Unable to determine current software version, so cannot upgrade");
}
String schemaVersion = "";
for (String schemaDir : schemaList) {
schemaVersion = schemaDir.split("_")[0];
if (SchemaUtils.isAGreatVersion(schemaVersion, version)) {
logger.info("upgrade DolphinScheduler metadata version from {} to {}", version, schemaVersion);
logger.info("Begin upgrading DolphinScheduler's table structure");
upgradeDao.upgradeDolphinScheduler(schemaDir);
if ("1.3.0".equals(schemaVersion)) {
upgradeDao.upgradeDolphinSchedulerWorkerGroup();
} else if ("1.3.2".equals(schemaVersion)) {
upgradeDao.upgradeDolphinSchedulerResourceList();
} else if ("2.0.0".equals(schemaVersion)) {
upgradeDao.upgradeDolphinSchedulerTo200(schemaDir);
}
version = schemaVersion;
}
}
}
upgradeDao.updateVersion(SchemaUtils.getSoftVersion());
}
}
// dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/dao/ResourceDao.java
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.tools.datasource.dao;
import org.apache.dolphinscheduler.common.utils.ConnectionUtils;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* resource dao
*/
public class ResourceDao {
public static final Logger logger = LoggerFactory.getLogger(ResourceDao.class);
/**
* list all resources
*
* @param conn connection
* @return map that key is full_name and value is id
*/
Map<String, Integer> listAllResources(Connection conn) {
Map<String, Integer> resourceMap = new HashMap<>();
String sql = "SELECT id,full_name FROM t_ds_resources";
ResultSet rs = null;
PreparedStatement pstmt = null;
try {
pstmt = conn.prepareStatement(sql);
rs = pstmt.executeQuery();
while (rs.next()) {
Integer id = rs.getInt(1);
String fullName = rs.getString(2);
resourceMap.put(fullName, id);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException("sql: " + sql, e);
} finally {
ConnectionUtils.releaseResource(rs, pstmt, conn);
}
return resourceMap;
}
}
// dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/dao/UpgradeDao.java
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.tools.datasource.dao;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.TASK_TYPE_CONDITIONS;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.TASK_TYPE_DEPENDENT;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.TASK_TYPE_SUB_PROCESS;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ConditionType;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.plugin.task.api.parameters.TaskTimeoutParameter;
import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils;
import org.apache.dolphinscheduler.common.utils.ConnectionUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.ScriptRunner;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.upgrade.JsonSplitDao;
import org.apache.dolphinscheduler.dao.upgrade.ProcessDefinitionDao;
import org.apache.dolphinscheduler.dao.upgrade.ProjectDao;
import org.apache.dolphinscheduler.dao.upgrade.ScheduleDao;
import org.apache.dolphinscheduler.dao.upgrade.SchemaUtils;
import org.apache.dolphinscheduler.dao.upgrade.WorkerGroupDao;
import org.apache.dolphinscheduler.plugin.task.api.model.ResourceInfo;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
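/**
 * base DAO for creating and upgrading the DolphinScheduler metadata schema;
 * database-specific subclasses supply the concrete checks such as isExistsTable/isExistsColumn
 */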
public abstract class UpgradeDao {
public static final Logger logger = LoggerFactory.getLogger(UpgradeDao.class);
private static final String T_VERSION_NAME = "t_escheduler_version";
private static final String T_NEW_VERSION_NAME = "t_ds_version";
protected final DataSource dataSource;
protected UpgradeDao(DataSource dataSource) {
this.dataSource = dataSource;
}
protected abstract String initSqlPath();
public abstract DbType getDbType();
public void initSchema() {
runInitSql(getDbType());
}
/**
* run init sql to init db schema
* @param dbType db type
*/
private void runInitSql(DbType dbType) {
String sqlFile = String.format("dolphinscheduler_%s.sql",dbType.getDescp());
Resource mysqlSQLFilePath = new ClassPathResource("sql/" + sqlFile);
try (Connection conn = dataSource.getConnection()) {
ScriptRunner initScriptRunner = new ScriptRunner(conn, true, true);
Reader initSqlReader = new InputStreamReader(mysqlSQLFilePath.getInputStream());
initScriptRunner.runScript(initSqlReader);
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException(e.getMessage(), e);
}
}
public abstract boolean isExistsTable(String tableName);
public abstract boolean isExistsColumn(String tableName, String columnName);
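/**
 * read the current schema version from the given version table
 */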
public String getCurrentVersion(String versionName) {
String sql = String.format("select version from %s", versionName);
Connection conn = null;
ResultSet rs = null;
PreparedStatement pstmt = null;
String version = null;
try {
conn = dataSource.getConnection();
pstmt = conn.prepareStatement(sql);
rs = pstmt.executeQuery();
if (rs.next()) {
version = rs.getString(1);
}
return version;
} catch (SQLException e) {
logger.error(e.getMessage(), e);
throw new RuntimeException("sql: " + sql, e);
} finally {
ConnectionUtils.releaseResource(rs, pstmt, conn);
}
}
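/**
 * apply one schema upgrade directory: run its DDL script first, then its DML script
 */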
public void upgradeDolphinScheduler(String schemaDir) {
upgradeDolphinSchedulerDDL(schemaDir, "dolphinscheduler_ddl.sql");
upgradeDolphinSchedulerDML(schemaDir);
}
/**
* upgrade DolphinScheduler worker group
* ds-1.3.0 modify the worker group for process definition json
*/
public void upgradeDolphinSchedulerWorkerGroup() {
updateProcessDefinitionJsonWorkerGroup();
}
/**
* upgrade DolphinScheduler resource list
* ds-1.3.2 modify the resource list for process definition json
*/
public void upgradeDolphinSchedulerResourceList() {
updateProcessDefinitionJsonResourceList();
}
/**
* upgrade DolphinScheduler to 2.0.0
*/
public void upgradeDolphinSchedulerTo200(String schemaDir) {
processDefinitionJsonSplit();
upgradeDolphinSchedulerDDL(schemaDir, "dolphinscheduler_ddl_post.sql");
}
/**
 * ds-1.3.0: replace the numeric workerGroupId of every task in the process definition json
 * with the corresponding worker group name
 */
protected void updateProcessDefinitionJsonWorkerGroup() {
WorkerGroupDao workerGroupDao = new WorkerGroupDao();
ProcessDefinitionDao processDefinitionDao = new ProcessDefinitionDao();
Map<Integer, String> replaceProcessDefinitionMap = new HashMap<>();
try {
Map<Integer, String> oldWorkerGroupMap = workerGroupDao.queryAllOldWorkerGroup(dataSource.getConnection());
Map<Integer, String> processDefinitionJsonMap = processDefinitionDao.queryAllProcessDefinition(dataSource.getConnection());
for (Map.Entry<Integer, String> entry : processDefinitionJsonMap.entrySet()) {
ObjectNode jsonObject = JSONUtils.parseObject(entry.getValue());
ArrayNode tasks = JSONUtils.parseArray(jsonObject.get("tasks").toString());
for (int i = 0; i < tasks.size(); i++) {
ObjectNode task = (ObjectNode) tasks.path(i);
ObjectNode workerGroupNode = (ObjectNode) task.path("workerGroupId");
int workerGroupId = -1;
if (workerGroupNode != null && workerGroupNode.canConvertToInt()) {
workerGroupId = workerGroupNode.asInt(-1);
}
if (workerGroupId == -1) {
task.put("workerGroup", "default");
} else {
task.put("workerGroup", oldWorkerGroupMap.get(workerGroupId));
}
}
jsonObject.remove("task");
jsonObject.put("tasks", tasks);
replaceProcessDefinitionMap.put(entry.getKey(), jsonObject.toString());
}
if (replaceProcessDefinitionMap.size() > 0) {
processDefinitionDao.updateProcessDefinitionJson(dataSource.getConnection(), replaceProcessDefinitionMap);
}
} catch (Exception e) {
logger.error("update process definition json workergroup error", e);
}
}
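/**
 * ds-1.3.2: back-fill resource ids into the process definition json by matching each
 * resource's full name against the records in t_ds_resources
 */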
protected void updateProcessDefinitionJsonResourceList() {
ResourceDao resourceDao = new ResourceDao();
ProcessDefinitionDao processDefinitionDao = new ProcessDefinitionDao();
Map<Integer, String> replaceProcessDefinitionMap = new HashMap<>();
try {
Map<String, Integer> resourcesMap = resourceDao.listAllResources(dataSource.getConnection());
Map<Integer, String> processDefinitionJsonMap = processDefinitionDao.queryAllProcessDefinition(dataSource.getConnection());
for (Map.Entry<Integer, String> entry : processDefinitionJsonMap.entrySet()) {
ObjectNode jsonObject = JSONUtils.parseObject(entry.getValue());
ArrayNode tasks = JSONUtils.parseArray(jsonObject.get("tasks").toString());
for (int i = 0; i < tasks.size(); i++) {
ObjectNode task = (ObjectNode) tasks.get(i);
ObjectNode param = (ObjectNode) task.get("params");
if (param != null) {
List<ResourceInfo> resourceList = JSONUtils.toList(param.get("resourceList").toString(), ResourceInfo.class);
ResourceInfo mainJar = JSONUtils.parseObject(param.get("mainJar").toString(), ResourceInfo.class);
if (mainJar != null && mainJar.getId() == 0) {
String fullName = mainJar.getRes().startsWith("/") ? mainJar.getRes() : String.format("/%s", mainJar.getRes());
if (resourcesMap.containsKey(fullName)) {
mainJar.setId(resourcesMap.get(fullName));
param.put("mainJar", JSONUtils.parseObject(JSONUtils.toJsonString(mainJar)));
}
}
if (CollectionUtils.isNotEmpty(resourceList)) {
List<ResourceInfo> newResourceList = resourceList.stream().map(resInfo -> {
String fullName = resInfo.getRes().startsWith("/") ? resInfo.getRes() : String.format("/%s", resInfo.getRes());
if (resInfo.getId() == 0 && resourcesMap.containsKey(fullName)) {
resInfo.setId(resourcesMap.get(fullName));
}
return resInfo;
}).collect(Collectors.toList());
param.put("resourceList", JSONUtils.parseObject(JSONUtils.toJsonString(newResourceList)));
}
}
task.put("params", param);
}
jsonObject.remove("tasks");
jsonObject.put("tasks", tasks);
replaceProcessDefinitionMap.put(entry.getKey(), jsonObject.toString());
}
if (replaceProcessDefinitionMap.size() > 0) {
processDefinitionDao.updateProcessDefinitionJson(dataSource.getConnection(), replaceProcessDefinitionMap);
}
} catch (Exception e) {
logger.error("update process definition json resource list error", e);
}
}
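/**
 * run the dolphinscheduler_dml.sql of the target version inside a transaction and
 * update the version table to the new schema version, rolling back on failure
 */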
private void upgradeDolphinSchedulerDML(String schemaDir) {
String schemaVersion = schemaDir.split("_")[0];
Resource sqlFilePath = new ClassPathResource(String.format("sql/upgrade/%s/%s/dolphinscheduler_dml.sql", schemaDir, getDbType().name().toLowerCase()));
logger.info("sqlSQLFilePath: {}", sqlFilePath);
Connection conn = null;
PreparedStatement pstmt = null;
try {
conn = dataSource.getConnection();
conn.setAutoCommit(false);
ScriptRunner scriptRunner = new ScriptRunner(conn, false, true);
Reader sqlReader = new InputStreamReader(sqlFilePath.getInputStream());
scriptRunner.runScript(sqlReader);
if (isExistsTable(T_VERSION_NAME)) {
String upgradeSQL = String.format("update %s set version = ?", T_VERSION_NAME);
pstmt = conn.prepareStatement(upgradeSQL);
pstmt.setString(1, schemaVersion);
pstmt.executeUpdate();
} else if (isExistsTable(T_NEW_VERSION_NAME)) {
String upgradeSQL = String.format("update %s set version = ?", T_NEW_VERSION_NAME);
pstmt = conn.prepareStatement(upgradeSQL);
pstmt.setString(1, schemaVersion);
pstmt.executeUpdate();
}
conn.commit();
} catch (FileNotFoundException e) {
try {
conn.rollback();
} catch (SQLException e1) {
logger.error(e1.getMessage(), e1);
}
logger.error(e.getMessage(), e);
throw new RuntimeException("sql file not found ", e);
} catch (IOException e) {
try {
conn.rollback();
} catch (SQLException e1) {
logger.error(e1.getMessage(), e1);
}
logger.error(e.getMessage(), e);
throw new RuntimeException(e.getMessage(), e);
} catch (Exception e) {
try {
if (null != conn) {
conn.rollback();
}
} catch (SQLException e1) {
logger.error(e1.getMessage(), e1);
}
logger.error(e.getMessage(), e);
throw new RuntimeException(e.getMessage(), e);
} finally {
ConnectionUtils.releaseResource(pstmt, conn);
}
}
/**
* upgradeDolphinScheduler DDL
*
* @param schemaDir schemaDir
*/
private void upgradeDolphinSchedulerDDL(String schemaDir, String scriptFile) {
Resource sqlFilePath = new ClassPathResource(String.format("sql/upgrade/%s/%s/%s", schemaDir, getDbType().name().toLowerCase(), scriptFile));
Connection conn = null;
PreparedStatement pstmt = null;
try {
conn = dataSource.getConnection();
String dbName = conn.getCatalog();
logger.info(dbName);
conn.setAutoCommit(true);
ScriptRunner scriptRunner = new ScriptRunner(conn, true, true);
Reader sqlReader = new InputStreamReader(sqlFilePath.getInputStream());
scriptRunner.runScript(sqlReader);
} catch (FileNotFoundException e) {
logger.error(e.getMessage(), e);
throw new RuntimeException("sql file not found ", e);
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException(e.getMessage(), e);
} finally {
ConnectionUtils.releaseResource(pstmt, conn);
}
}
/**
* update version
*
* @param version version
*/
public void updateVersion(String version) {
String versionName = T_VERSION_NAME;
if (!SchemaUtils.isAGreatVersion("1.2.0", version)) {
versionName = "t_ds_version";
}
String upgradeSQL = String.format("update %s set version = ?", versionName);
PreparedStatement pstmt = null;
Connection conn = null;
try {
conn = dataSource.getConnection();
pstmt = conn.prepareStatement(upgradeSQL);
pstmt.setString(1, version);
pstmt.executeUpdate();
} catch (SQLException e) {
logger.error(e.getMessage(), e);
throw new RuntimeException("sql: " + upgradeSQL, e);
} finally {
ConnectionUtils.releaseResource(pstmt, conn);
}
}
/**
* upgrade DolphinScheduler to 2.0.0, json split
*/
private void processDefinitionJsonSplit() {
ProjectDao projectDao = new ProjectDao();
ProcessDefinitionDao processDefinitionDao = new ProcessDefinitionDao();
ScheduleDao scheduleDao = new ScheduleDao();
JsonSplitDao jsonSplitDao = new JsonSplitDao();
try {
Map<Integer, Long> projectIdCodeMap = projectDao.queryAllProject(dataSource.getConnection());
projectDao.updateProjectCode(dataSource.getConnection(), projectIdCodeMap);
List<ProcessDefinition> processDefinitions = processDefinitionDao.queryProcessDefinition(dataSource.getConnection());
processDefinitionDao.updateProcessDefinitionCode(dataSource.getConnection(), processDefinitions, projectIdCodeMap);
Map<Integer, Long> allSchedule = scheduleDao.queryAllSchedule(dataSource.getConnection());
Map<Integer, Long> processIdCodeMap = processDefinitions.stream().collect(Collectors.toMap(ProcessDefinition::getId, ProcessDefinition::getCode));
scheduleDao.updateScheduleCode(dataSource.getConnection(), allSchedule, processIdCodeMap);
Map<Integer, String> processDefinitionJsonMap = processDefinitionDao.queryAllProcessDefinition(dataSource.getConnection());
List<ProcessDefinitionLog> processDefinitionLogs = new ArrayList<>();
List<ProcessTaskRelationLog> processTaskRelationLogs = new ArrayList<>();
List<TaskDefinitionLog> taskDefinitionLogs = new ArrayList<>();
Map<Integer, Map<Long, Map<String, Long>>> processTaskMap = new HashMap<>();
splitProcessDefinitionJson(processDefinitions, processDefinitionJsonMap, processDefinitionLogs, processTaskRelationLogs, taskDefinitionLogs, processTaskMap);
convertDependence(taskDefinitionLogs, projectIdCodeMap, processTaskMap);
jsonSplitDao.executeJsonSplitProcessDefinition(dataSource.getConnection(), processDefinitionLogs);
jsonSplitDao.executeJsonSplitProcessTaskRelation(dataSource.getConnection(), processTaskRelationLogs);
jsonSplitDao.executeJsonSplitTaskDefinition(dataSource.getConnection(), taskDefinitionLogs);
} catch (Exception e) {
logger.error("json split error", e);
}
}
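/**
 * split each old single-json process definition into task definition logs,
 * process task relation logs and an updated process definition record
 */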
private void splitProcessDefinitionJson(List<ProcessDefinition> processDefinitions,
Map<Integer, String> processDefinitionJsonMap,
List<ProcessDefinitionLog> processDefinitionLogs,
List<ProcessTaskRelationLog> processTaskRelationLogs,
List<TaskDefinitionLog> taskDefinitionLogs,
Map<Integer, Map<Long, Map<String, Long>>> processTaskMap) throws Exception {
Map<Integer, ProcessDefinition> processDefinitionMap = processDefinitions.stream()
.collect(Collectors.toMap(ProcessDefinition::getId, processDefinition -> processDefinition));
Date now = new Date();
for (Map.Entry<Integer, String> entry : processDefinitionJsonMap.entrySet()) {
if (entry.getValue() == null) {
throw new Exception("processDefinitionJson is null");
}
ObjectNode jsonObject = JSONUtils.parseObject(entry.getValue());
ProcessDefinition processDefinition = processDefinitionMap.get(entry.getKey());
if (processDefinition != null) {
processDefinition.setTenantId(jsonObject.get("tenantId") == null ? -1 : jsonObject.get("tenantId").asInt());
processDefinition.setTimeout(jsonObject.get("timeout").asInt());
processDefinition.setGlobalParams(jsonObject.get("globalParams").toString());
} else {
throw new Exception("It can't find processDefinition, please check !");
}
Map<String, Long> taskIdCodeMap = new HashMap<>();
Map<String, List<String>> taskNamePreMap = new HashMap<>();
Map<String, Long> taskNameCodeMap = new HashMap<>();
Map<Long, Map<String, Long>> processCodeTaskNameCodeMap = new HashMap<>();
List<TaskDefinitionLog> taskDefinitionLogList = new ArrayList<>();
ArrayNode tasks = JSONUtils.parseArray(jsonObject.get("tasks").toString());
for (int i = 0; i < tasks.size(); i++) {
ObjectNode task = (ObjectNode) tasks.path(i);
ObjectNode param = (ObjectNode) task.get("params");
TaskDefinitionLog taskDefinitionLog = new TaskDefinitionLog();
String taskType = task.get("type").asText();
if (param != null) {
JsonNode resourceJsonNode = param.get("resourceList");
if (resourceJsonNode != null && !resourceJsonNode.isEmpty()) {
List<ResourceInfo> resourceList = JSONUtils.toList(param.get("resourceList").toString(), ResourceInfo.class);
List<Integer> resourceIds = resourceList.stream().map(ResourceInfo::getId).collect(Collectors.toList());
taskDefinitionLog.setResourceIds(StringUtils.join(resourceIds, Constants.COMMA));
} else {
taskDefinitionLog.setResourceIds(StringUtils.EMPTY);
}
if (TASK_TYPE_SUB_PROCESS.equals(taskType)) {
JsonNode jsonNodeDefinitionId = param.get("processDefinitionId");
if (jsonNodeDefinitionId != null) {
param.put("processDefinitionCode", processDefinitionMap.get(jsonNodeDefinitionId.asInt()).getCode());
param.remove("processDefinitionId");
}
}
param.put("conditionResult", task.get("conditionResult"));
param.put("dependence", task.get("dependence"));
taskDefinitionLog.setTaskParams(JSONUtils.toJsonString(param));
}
TaskTimeoutParameter timeout = JSONUtils.parseObject(JSONUtils.toJsonString(task.get("timeout")), TaskTimeoutParameter.class);
if (timeout != null) {
taskDefinitionLog.setTimeout(timeout.getInterval());
taskDefinitionLog.setTimeoutFlag(timeout.getEnable() ? TimeoutFlag.OPEN : TimeoutFlag.CLOSE);
taskDefinitionLog.setTimeoutNotifyStrategy(timeout.getStrategy());
}
String desc = task.get("description") != null ? task.get("description").asText() :
task.get("desc") != null ? task.get("desc").asText() : "";
taskDefinitionLog.setDescription(desc);
taskDefinitionLog.setFlag(Constants.FLOWNODE_RUN_FLAG_NORMAL.equals(task.get("runFlag").asText()) ? Flag.YES : Flag.NO);
taskDefinitionLog.setTaskType(taskType);
taskDefinitionLog.setFailRetryInterval(TASK_TYPE_SUB_PROCESS.equals(taskType) ? 1 : task.get("retryInterval").asInt());
taskDefinitionLog.setFailRetryTimes(TASK_TYPE_SUB_PROCESS.equals(taskType) ? 0 : task.get("maxRetryTimes").asInt());
taskDefinitionLog.setTaskPriority(JSONUtils.parseObject(JSONUtils.toJsonString(task.get("taskInstancePriority")), Priority.class));
String name = task.get("name").asText();
taskDefinitionLog.setName(name);
taskDefinitionLog.setWorkerGroup(task.get("workerGroup") == null ? "default" : task.get("workerGroup").asText());
long taskCode = CodeGenerateUtils.getInstance().genCode();
taskDefinitionLog.setCode(taskCode);
taskDefinitionLog.setVersion(Constants.VERSION_FIRST);
taskDefinitionLog.setProjectCode(processDefinition.getProjectCode());
taskDefinitionLog.setUserId(processDefinition.getUserId());
taskDefinitionLog.setEnvironmentCode(-1);
taskDefinitionLog.setDelayTime(0);
taskDefinitionLog.setOperator(1);
taskDefinitionLog.setOperateTime(now);
taskDefinitionLog.setCreateTime(now);
taskDefinitionLog.setUpdateTime(now);
taskDefinitionLogList.add(taskDefinitionLog);
taskIdCodeMap.put(task.get("id").asText(), taskCode);
List<String> preTasks = JSONUtils.toList(task.get("preTasks").toString(), String.class);
taskNamePreMap.put(name, preTasks);
taskNameCodeMap.put(name, taskCode);
}
convertConditions(taskDefinitionLogList, taskNameCodeMap);
taskDefinitionLogs.addAll(taskDefinitionLogList);
processDefinition.setLocations(convertLocations(processDefinition.getLocations(), taskIdCodeMap));
ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition);
processDefinitionLog.setOperator(1);
processDefinitionLog.setOperateTime(now);
processDefinitionLog.setUpdateTime(now);
processDefinitionLogs.add(processDefinitionLog);
handleProcessTaskRelation(taskNamePreMap, taskNameCodeMap, processDefinition, processTaskRelationLogs);
processCodeTaskNameCodeMap.put(processDefinition.getCode(), taskNameCodeMap);
processTaskMap.put(entry.getKey(), processCodeTaskNameCodeMap);
}
}
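/**
 * for CONDITIONS tasks, replace the task names referenced in successNode/failedNode and in
 * the dependence item list with the newly generated task codes
 */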
public void convertConditions(List<TaskDefinitionLog> taskDefinitionLogList, Map<String, Long> taskNameCodeMap) throws Exception {
for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogList) {
if (TASK_TYPE_CONDITIONS.equals(taskDefinitionLog.getTaskType())) {
ObjectMapper objectMapper = new ObjectMapper();
ObjectNode taskParams = JSONUtils.parseObject(taskDefinitionLog.getTaskParams());
ObjectNode conditionResult = (ObjectNode) taskParams.get("conditionResult");
List<String> successNode = JSONUtils.toList(conditionResult.get("successNode").toString(), String.class);
List<Long> nodeCode = new ArrayList<>();
successNode.forEach(node -> nodeCode.add(taskNameCodeMap.get(node)));
conditionResult.set("successNode", objectMapper.readTree(objectMapper.writeValueAsString(nodeCode)));
List<String> failedNode = JSONUtils.toList(conditionResult.get("failedNode").toString(), String.class);
nodeCode.clear();
failedNode.forEach(node -> nodeCode.add(taskNameCodeMap.get(node)));
conditionResult.set("failedNode", objectMapper.readTree(objectMapper.writeValueAsString(nodeCode)));
ObjectNode dependence = (ObjectNode) taskParams.get("dependence");
ArrayNode dependTaskList = JSONUtils.parseArray(JSONUtils.toJsonString(dependence.get("dependTaskList")));
for (int i = 0; i < dependTaskList.size(); i++) {
ObjectNode dependTask = (ObjectNode) dependTaskList.path(i);
ArrayNode dependItemList = JSONUtils.parseArray(JSONUtils.toJsonString(dependTask.get("dependItemList")));
for (int j = 0; j < dependItemList.size(); j++) {
ObjectNode dependItem = (ObjectNode) dependItemList.path(j);
JsonNode depTasks = dependItem.get("depTasks");
dependItem.put("depTaskCode", taskNameCodeMap.get(depTasks.asText()));
dependItem.remove("depTasks");
dependItemList.set(j, dependItem);
}
dependTask.put("dependItemList", dependItemList);
dependTaskList.set(i, dependTask);
}
dependence.put("dependTaskList", dependTaskList);
taskDefinitionLog.setTaskParams(JSONUtils.toJsonString(taskParams));
}
}
}
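// convertLocations() below rewrites the old "locations" JSON, a map keyed by the old integer task id,
// into the new format: an array of {taskCode, x, y} nodes, translating each old id via taskIdCodeMap.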
private String convertLocations(String locations, Map<String, Long> taskIdCodeMap) {
if (StringUtils.isBlank(locations)) {
return locations;
}
Map<String, ObjectNode> locationsMap = JSONUtils.parseObject(locations, new TypeReference<Map<String, ObjectNode>>() {
});
if (locationsMap == null) {
return locations;
}
ArrayNode jsonNodes = JSONUtils.createArrayNode();
for (Map.Entry<String, ObjectNode> entry : locationsMap.entrySet()) {
ObjectNode nodes = JSONUtils.createObjectNode();
nodes.put("taskCode", taskIdCodeMap.get(entry.getKey())); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,544 | [Bug] [Resource Center-UDF Management/Resource Management] Folder size statistics error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/76080484/155693270-c322ed34-8867-4ba4-849c-f5bc99249fb4.png">
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/76080484/155693495-91b99fbb-1f12-495e-9643-05990651fcec.png">
### What you expected to happen
The parent folder size should be the sum of the sizes of its child files and folders.
### How to reproduce
1. Create folder
2. Open folder
3. Upload jar package
4. Return to outer folder
### Anything else
_No response_
### Version
2.0.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8544 | https://github.com/apache/dolphinscheduler/pull/9107 | 08ea1aa701910d90ed16164e9019557292cc4249 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 2022-02-25T09:55:26Z | java | 2022-03-23T10:58:41Z | dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/dao/UpgradeDao.java | ObjectNode oldNodes = entry.getValue();
nodes.put("x", oldNodes.get("x").asInt());
nodes.put("y", oldNodes.get("y").asInt());
jsonNodes.add(nodes);
}
return jsonNodes.toString();
}
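// convertDependence() below rewrites DEPENDENT task parameters: each dependItem's projectId/definitionId/depTasks
// fields are replaced by projectCode/definitionCode/depTaskCode, and items whose process definition no longer exists are dropped.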
public void convertDependence(List<TaskDefinitionLog> taskDefinitionLogs,
Map<Integer, Long> projectIdCodeMap,
Map<Integer, Map<Long, Map<String, Long>>> processTaskMap) {
for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) {
if (TASK_TYPE_DEPENDENT.equals(taskDefinitionLog.getTaskType())) {
ObjectNode taskParams = JSONUtils.parseObject(taskDefinitionLog.getTaskParams());
ObjectNode dependence = (ObjectNode) taskParams.get("dependence");
ArrayNode dependTaskList = JSONUtils.parseArray(JSONUtils.toJsonString(dependence.get("dependTaskList")));
for (int i = 0; i < dependTaskList.size(); i++) {
ObjectNode dependTask = (ObjectNode) dependTaskList.path(i);
ArrayNode dependItemList = JSONUtils.parseArray(JSONUtils.toJsonString(dependTask.get("dependItemList")));
for (int j = 0; j < dependItemList.size(); j++) {
ObjectNode dependItem = (ObjectNode) dependItemList.path(j);
dependItem.put("projectCode", projectIdCodeMap.get(dependItem.get("projectId").asInt()));
int definitionId = dependItem.get("definitionId").asInt();
Map<Long, Map<String, Long>> processCodeTaskNameCodeMap = processTaskMap.get(definitionId);
if (processCodeTaskNameCodeMap == null) {
logger.warn("We can't find processDefinition [{}], please check it is not exist, remove this dependence", definitionId);
dependItemList.remove(j);
continue;
}
Optional<Map.Entry<Long, Map<String, Long>>> mapEntry = processCodeTaskNameCodeMap.entrySet().stream().findFirst();
if (mapEntry.isPresent()) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,544 | [Bug] [Resource Center-UDF Management/Resource Management] Folder size statistics error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/76080484/155693270-c322ed34-8867-4ba4-849c-f5bc99249fb4.png">
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/76080484/155693495-91b99fbb-1f12-495e-9643-05990651fcec.png">
### What you expected to happen
The parent folder size should be the sum of the sizes of its child files and folders.
### How to reproduce
1. Create folder
2. Open folder
3. Upload jar package
4. Return to outer folder
### Anything else
_No response_
### Version
2.0.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8544 | https://github.com/apache/dolphinscheduler/pull/9107 | 08ea1aa701910d90ed16164e9019557292cc4249 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 2022-02-25T09:55:26Z | java | 2022-03-23T10:58:41Z | dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/dao/UpgradeDao.java | Map.Entry<Long, Map<String, Long>> processCodeTaskNameCodeEntry = mapEntry.get();
dependItem.put("definitionCode", processCodeTaskNameCodeEntry.getKey());
String depTasks = dependItem.get("depTasks").asText();
long taskCode = "ALL".equals(depTasks) || processCodeTaskNameCodeEntry.getValue() == null ? 0L : processCodeTaskNameCodeEntry.getValue().get(depTasks);
dependItem.put("depTaskCode", taskCode);
}
dependItem.remove("projectId");
dependItem.remove("definitionId");
dependItem.remove("depTasks");
dependItemList.set(j, dependItem);
}
dependTask.put("dependItemList", dependItemList);
dependTaskList.set(i, dependTask);
}
dependence.put("dependTaskList", dependTaskList);
taskDefinitionLog.setTaskParams(JSONUtils.toJsonString(taskParams));
}
}
}
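// handleProcessTaskRelation() below turns the "task name -> predecessor names" map into relation log rows:
// one relation per predecessor, or a single relation with preTaskCode 0 for tasks that have no predecessor.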
private void handleProcessTaskRelation(Map<String, List<String>> taskNamePreMap,
Map<String, Long> taskNameCodeMap,
ProcessDefinition processDefinition,
List<ProcessTaskRelationLog> processTaskRelationLogs) {
Date now = new Date();
for (Map.Entry<String, List<String>> entry : taskNamePreMap.entrySet()) {
List<String> entryValue = entry.getValue();
if (CollectionUtils.isNotEmpty(entryValue)) {
for (String preTaskName : entryValue) {
ProcessTaskRelationLog processTaskRelationLog = setProcessTaskRelationLog(processDefinition, now);
processTaskRelationLog.setPreTaskCode(taskNameCodeMap.get(preTaskName)); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,544 | [Bug] [Resource Center-UDF Management/Resource Management] Folder size statistics error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/76080484/155693270-c322ed34-8867-4ba4-849c-f5bc99249fb4.png">
<img width="1243" alt="image" src="https://user-images.githubusercontent.com/76080484/155693495-91b99fbb-1f12-495e-9643-05990651fcec.png">
### What you expected to happen
The parent folder size should be the sum of the sizes of its child files and folders.
### How to reproduce
1. Create folder
2. Open folder
3. Upload jar package
4. Return to outer folder
### Anything else
_No response_
### Version
2.0.4
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8544 | https://github.com/apache/dolphinscheduler/pull/9107 | 08ea1aa701910d90ed16164e9019557292cc4249 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 2022-02-25T09:55:26Z | java | 2022-03-23T10:58:41Z | dolphinscheduler-tools/src/main/java/org/apache/dolphinscheduler/tools/datasource/dao/UpgradeDao.java | processTaskRelationLog.setPreTaskVersion(Constants.VERSION_FIRST);
processTaskRelationLog.setPostTaskCode(taskNameCodeMap.get(entry.getKey()));
processTaskRelationLog.setPostTaskVersion(Constants.VERSION_FIRST);
processTaskRelationLogs.add(processTaskRelationLog);
}
} else {
ProcessTaskRelationLog processTaskRelationLog = setProcessTaskRelationLog(processDefinition, now);
processTaskRelationLog.setPreTaskCode(0);
processTaskRelationLog.setPreTaskVersion(0);
processTaskRelationLog.setPostTaskCode(taskNameCodeMap.get(entry.getKey()));
processTaskRelationLog.setPostTaskVersion(Constants.VERSION_FIRST);
processTaskRelationLogs.add(processTaskRelationLog);
}
}
}
private ProcessTaskRelationLog setProcessTaskRelationLog(ProcessDefinition processDefinition, Date now) {
ProcessTaskRelationLog processTaskRelationLog = new ProcessTaskRelationLog();
processTaskRelationLog.setProjectCode(processDefinition.getProjectCode());
processTaskRelationLog.setProcessDefinitionCode(processDefinition.getCode());
processTaskRelationLog.setProcessDefinitionVersion(processDefinition.getVersion());
processTaskRelationLog.setConditionType(ConditionType.NONE);
processTaskRelationLog.setConditionParams("{}");
processTaskRelationLog.setOperator(1);
processTaskRelationLog.setOperateTime(now);
processTaskRelationLog.setCreateTime(now);
processTaskRelationLog.setUpdateTime(now);
return processTaskRelationLog;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception; a small illustrative guard for the missing resource helper is sketched after the log:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
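The trace above points at `DataxParameters.generateExtendedContext`, which apparently dereferences a `ResourceParametersHelper` that was never supplied — the task channel's `getResources(String)` currently returns `null` (see the `DataxTaskChannel` source further down). The sketch below only illustrates guarding against the missing helper; it is not the actual change made in PR #9134:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;

public class ExtendedContextGuardExample {

    // Illustrative only: never dereference a possibly-null resource helper.
    public static Map<String, String> buildExtendedContext(ResourceParametersHelper helper) {
        Map<String, String> extendedContext = new HashMap<>();
        if (helper == null) {
            // the DataX/Procedure/Sqoop channels shown in this document currently return null from getResources()
            return extendedContext;
        }
        // resolve datasource / datatarget details from the helper here
        return extendedContext;
    }
}
```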
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.datax;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class DataxTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new DataxTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), DataxParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
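// Sketch (not necessarily the change made in PR #9134): getResources(String) above returns null, which is
// consistent with the NullPointerException reported in this issue once the missing helper is dereferenced.
// One possible non-null variant is shown below; DataxParameters.getResources() is an assumption of this
// sketch, not something confirmed by the code above.
class DataxTaskChannelWithResources extends DataxTaskChannel {

    @Override
    public ResourceParametersHelper getResources(String parameters) {
        DataxParameters dataxParameters = JSONUtils.parseObject(parameters, DataxParameters.class);
        // fall back to null only when the raw params cannot be parsed at all
        return dataxParameters == null ? null : dataxParameters.getResources();
    }
}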
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.procedure;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class ProcedureTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new ProcedureTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), ProcedureParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
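// Simplified illustration of the worker-side call path implied by the stack traces in issues #9127 and #9124:
// TaskExecuteThread obtains a task from the channel and initialises it, and the reported NullPointerExceptions
// surface during that construction/initialisation, before the task executes any SQL. That init() is callable on
// AbstractTask is inferred from the traces; taskExecutionContext is a placeholder for the worker-supplied context.
class ChannelCallPathIllustration {

    AbstractTask buildAndInit(TaskExecutionContext taskExecutionContext) {
        TaskChannel channel = new ProcedureTaskChannel();
        AbstractTask task = channel.createTask(taskExecutionContext); // issue #9124: NPE thrown inside the task constructor
        task.init();                                                  // issue #9127: NPE thrown from the task's init()
        return task;
    }
}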
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/SqoopTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.sqoop;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.SqoopParameters;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class SqoopTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,127 | [Bug] [DataX Task] DataX Task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The DataX task fails to run.
code version: branch/dev
### What you expected to happen
The DataX task should run successfully.
### How to reproduce
Create a `MySQL` datasource.
Then create a workflow with a `DataX` task. This task uses the `MySQL` datasource with the following SQL as its input:
```
select * from test_datax
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:49:15.896 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.datax.DataxTask:[133] - datax task params {"localParams":[],"resourceList":[],"customConfig":0,"dsType":"MYSQL","dataSource":"1","dtType":"MYSQL","dataTarget":"1","sql":"select * from test_datax","targetTable":"test_datax_1","jobSpeedByte":0,"jobSpeedRecord":1000,"preStatements":[],"postStatements":[],"xms":1,"xmx":1}
[ERROR] 2022-03-23 15:49:15.898 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.datax.DataxParameters.generateExtendedContext(DataxParameters.java:265)
at org.apache.dolphinscheduler.plugin.task.datax.DataxTask.init(DataxTask.java:140)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:182)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:49:15.899 org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor:[56] - dBTask ACK request command : DBTaskAckCommand{taskInstanceId=123, status=7}
```
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9127 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T08:01:42Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/SqoopTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new SqoopTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), SqoopParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception; a stand-alone JDBC sketch of calling the procedure above follows the log:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
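For reference, the NPE above is raised while the task is being constructed (`ProcedureTask.<init>`), i.e. before any SQL runs. Independently of the scheduler, the procedure defined above could be exercised through plain JDBC roughly as follows (connection details are placeholders, and the procedure must already exist, since the script above drops it at the end):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class CallProcedureExample {

    public static void main(String[] args) throws Exception {
        // Placeholder connection settings; adjust to your MySQL instance.
        String url = "jdbc:mysql://localhost:3306/procedure_test";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             CallableStatement call = conn.prepareCall("{call tw_base_resource_share()}")) {
            call.execute(); // creates procedure_test.test_test_1 when it does not exist yet
        }
    }
}
```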
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses the `MySQL` datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.datax;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class DataxTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses the `MySQL` datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new DataxTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), DataxParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
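// The channel above simply maps the raw task-params JSON to DataxParameters via JSONUtils. That mapping can be
// exercised in isolation with a trimmed-down params JSON (field names taken from the worker log excerpts in this
// document; the values and the checkParameters() call are illustrative assumptions):
class ParseDataxParamsExample {

    public static void main(String[] args) {
        String taskParams = "{\"customConfig\":0,\"dsType\":\"MYSQL\",\"dataSource\":\"1\","
                + "\"dtType\":\"MYSQL\",\"dataTarget\":\"1\",\"sql\":\"select 1\",\"targetTable\":\"t\"}";
        DataxParameters parameters = JSONUtils.parseObject(taskParams, DataxParameters.class);
        System.out.println(parameters != null && parameters.checkParameters());
    }
}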
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses the `MySQL` datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.procedure;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class ProcedureTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses the `MySQL` datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new ProcedureTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), ProcedureParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses the `MySQL` datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/SqoopTaskChannel.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.sqoop;
import org.apache.dolphinscheduler.plugin.task.api.AbstractTask;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.SqoopParameters;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
public class SqoopTaskChannel implements TaskChannel { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,124 | [Bug] [Task Run] PROCEDURE task run fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The PROCEDURE task fails to run.
code version: branch/dev
When I run a `PROCEDURE` task, it fails.
The procedure SQL is as follows:
```
use procedure_test;
drop PROCEDURE if EXISTS tw_base_resource_share;
delimiter d//
CREATE PROCEDURE tw_base_resource_share()
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')
THEN
CREATE table procedure_test.test_test_1(id int NOT NULL);
END IF;
END;
d//
delimiter ;
CALL tw_base_resource_share;
DROP PROCEDURE tw_base_resource_share;
```
From the worker-server log, I found the following exception:
```
[INFO] 2022-03-23 15:26:59.530 TaskLogLogger-class org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask:[74] - procedure task params {"localParams":[],"resourceList":[],"type":"MYSQL","datasource":"1","method":"use procedure_test;\ndrop PROCEDURE if EXISTS tw_base_resource_share;\ndelimiter d//\nCREATE PROCEDURE tw_base_resource_share()\n BEGIN\n IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS \n WHERE TABLE_SCHEMA='procedure_test' AND TABLE_NAME='test_test_1')\n THEN\n CREATE table procedure_test.test_test_1(id int NOT NULL);\n END IF;\n END;\nd//\ndelimiter ;\nCALL tw_base_resource_share;\nDROP PROCEDURE tw_base_resource_share;"}
[ERROR] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - task scheduler failure
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureParameters.generateExtendedContext(ProcedureParameters.java:138)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTask.<init>(ProcedureTask.java:83)
at org.apache.dolphinscheduler.plugin.task.procedure.ProcedureTaskChannel.createTask(ProcedureTaskChannel.java:37)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:179)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2022-03-23 15:26:59.532 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[226] - develop mode is: false
```
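For illustration only (this is not the fix applied in the linked PR, and the signature and field names below are assumptions; only the method name comes from the trace), the failure corresponds to `generateExtendedContext` dereferencing a resource helper that was never populated. A defensive sketch would look like:
```
// Hypothetical sketch -- names and signature are assumed, not taken from the real class.
public Map<String, String> generateExtendedContext(ResourceParametersHelper resourceParametersHelper) {
    Map<String, String> extendedContext = new HashMap<>();
    if (resourceParametersHelper == null || resourceParametersHelper.getResourceMap() == null) {
        // no datasource information was attached to the task; return an empty context
        // instead of throwing a NullPointerException
        return extendedContext;
    }
    // ... resolve the procedure's datasource from the helper here ...
    return extendedContext;
}
```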
### What you expected to happen
The PROCEDURE task should run successfully.
### How to reproduce
Create a `MySQL` datasource, then create a workflow with a PROCEDURE task that uses that datasource.
Run the workflow and you will get the error.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9124 | https://github.com/apache/dolphinscheduler/pull/9134 | 7c5bebea98b64394a74960a5fa0e7a40af26c465 | 25fc1dcb5f48ee01477c75a9b5f0508fc4c9f1b2 | 2022-03-23T07:39:08Z | java | 2022-03-23T11:00:09Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/SqoopTaskChannel.java | @Override
public void cancelApplication(boolean status) {
}
@Override
public AbstractTask createTask(TaskExecutionContext taskRequest) {
return new SqoopTask(taskRequest);
}
@Override
public AbstractParameters parseParameters(ParametersNode parametersNode) {
return JSONUtils.parseObject(parametersNode.getTaskParams(), SqoopParameters.class);
}
@Override
public ResourceParametersHelper getResources(String parameters) {
return null;
}
} |
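// For orientation, a rough usage sketch (not part of this class) of how a caller might drive a
// TaskChannel implementation like the one above; the task-params JSON is invented:
//
//   String taskParams = "{\"jobType\":\"TEMPLATE\",\"modelType\":\"import\"}";
//   TaskChannel channel = new SqoopTaskChannel();
//   AbstractParameters parameters = channel.parseParameters(
//           ParametersNode.builder().taskType("SQOOP").taskParams(taskParams).build());
//   if (parameters == null || !parameters.checkParameters()) {
//       throw new IllegalArgumentException("invalid sqoop task params");
//   }
//   // getResources may be null for channels that reference no external resources, as above.
//   ResourceParametersHelper resources = channel.getResources(taskParams);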
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner.task;
import com.zaxxer.hikari.HikariDataSource;
import org.apache.commons.collections.CollectionUtils;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.utils.HadoopUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.LoggerUtils;
import org.apache.dolphinscheduler.common.utils.PropertyUtils;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.DqComparisonType;
import org.apache.dolphinscheduler.dao.entity.DqRule; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | import org.apache.dolphinscheduler.dao.entity.DqRuleExecuteSql;
import org.apache.dolphinscheduler.dao.entity.DqRuleInputEntry;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.UdfFunc;
import org.apache.dolphinscheduler.plugin.task.api.DataQualityTaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.TaskChannel;
import org.apache.dolphinscheduler.plugin.task.api.TaskConstants;
import org.apache.dolphinscheduler.plugin.task.api.TaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.api.enums.ExecutionStatus;
import org.apache.dolphinscheduler.plugin.task.api.enums.dp.ConnectorType;
import org.apache.dolphinscheduler.plugin.task.api.enums.dp.ExecuteSqlType;
import org.apache.dolphinscheduler.plugin.task.api.model.JdbcInfo;
import org.apache.dolphinscheduler.plugin.task.api.model.ResourceInfo;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.ParametersNode;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.AbstractResourceParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.DataSourceParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.UdfFuncParameters;
import org.apache.dolphinscheduler.plugin.task.api.utils.JdbcUrlParser;
import org.apache.dolphinscheduler.plugin.task.api.utils.MapUtils;
import org.apache.dolphinscheduler.plugin.task.dq.DataQualityParameters;
import org.apache.dolphinscheduler.server.builder.TaskExecutionContextBuilder;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.task.TaskPluginManager; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.apache.dolphinscheduler.common.Constants.ADDRESS;
import static org.apache.dolphinscheduler.common.Constants.DATABASE;
import static org.apache.dolphinscheduler.common.Constants.JDBC_URL;
import static org.apache.dolphinscheduler.common.Constants.OTHER;
import static org.apache.dolphinscheduler.common.Constants.PASSWORD;
import static org.apache.dolphinscheduler.common.Constants.SINGLE_SLASH;
import static org.apache.dolphinscheduler.common.Constants.USER;
import static org.apache.dolphinscheduler.plugin.task.api.TaskConstants.TASK_TYPE_DATA_QUALITY;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.COMPARISON_NAME;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.COMPARISON_TABLE;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.COMPARISON_TYPE;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.SRC_CONNECTOR_TYPE;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.SRC_DATASOURCE_ID;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.TARGET_CONNECTOR_TYPE;
import static org.apache.dolphinscheduler.plugin.task.api.utils.DataQualityConstants.TARGET_DATASOURCE_ID;
public abstract class BaseTaskProcessor implements ITaskProcessor { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | protected final Logger logger = LoggerFactory.getLogger(String.format(TaskConstants.TASK_LOG_LOGGER_NAME_FORMAT, getClass()));
protected boolean killed = false;
protected boolean paused = false;
protected boolean timeout = false; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | protected TaskInstance taskInstance = null;
protected ProcessInstance processInstance;
protected int maxRetryTimes;
protected int commitInterval;
protected ProcessService processService = SpringApplicationContext.getBean(ProcessService.class);
protected MasterConfig masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
protected TaskPluginManager taskPluginManager = SpringApplicationContext.getBean(TaskPluginManager.class);
protected String threadLoggerInfoName;
@Override
public void init(TaskInstance taskInstance, ProcessInstance processInstance) {
if (processService == null) {
processService = SpringApplicationContext.getBean(ProcessService.class);
}
if (masterConfig == null) {
masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
}
this.taskInstance = taskInstance;
this.processInstance = processInstance;
this.maxRetryTimes = masterConfig.getTaskCommitRetryTimes();
this.commitInterval = masterConfig.getTaskCommitInterval();
}
protected javax.sql.DataSource defaultDataSource =
SpringApplicationContext.getBean(javax.sql.DataSource.class);
/**
 * pause task, common tasks do not need this.
*/
protected abstract boolean pauseTask();
/**
* kill task, all tasks need to realize this function
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | protected abstract boolean killTask();
/**
* task timeout process
*/
protected abstract boolean taskTimeout();
/**
* submit task
*/
protected abstract boolean submitTask();
/**
* run task
*/
protected abstract boolean runTask();
/**
* dispatch task
*/
protected abstract boolean dispatchTask();
@Override
public boolean action(TaskAction taskAction) {
String threadName = Thread.currentThread().getName();
if (StringUtils.isNotEmpty(threadLoggerInfoName)) {
Thread.currentThread().setName(threadLoggerInfoName);
}
switch (taskAction) {
case STOP:
return stop();
case PAUSE:
return pause();
case TIMEOUT:
return timeout(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | case SUBMIT:
return submit();
case RUN:
return run();
case DISPATCH:
return dispatch();
default:
logger.error("unknown task action: {}", taskAction);
}
Thread.currentThread().setName(threadName);
return false;
}
protected boolean submit() {
return submitTask();
}
protected boolean run() {
return runTask();
}
protected boolean dispatch() {
return dispatchTask();
}
protected boolean timeout() {
if (timeout) {
return true;
}
timeout = taskTimeout();
return timeout;
}
protected boolean pause() { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | if (paused) {
return true;
}
paused = pauseTask();
return paused;
}
protected boolean stop() {
if (killed) {
return true;
}
killed = killTask();
return killed;
}
@Override
public String getType() {
return null;
}
@Override
public TaskInstance taskInstance() {
return this.taskInstance;
}
/**
* set master task running logger.
*/
public void setTaskExecutionLogger() {
threadLoggerInfoName = LoggerUtils.buildTaskId(taskInstance.getFirstSubmitTime(),
processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion(),
taskInstance.getProcessInstanceId(),
taskInstance.getId()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | Thread.currentThread().setName(threadLoggerInfoName);
}
/**
* get TaskExecutionContext
*
* @param taskInstance taskInstance
* @return TaskExecutionContext
*/
protected TaskExecutionContext getTaskExecutionContext(TaskInstance taskInstance) {
int userId = taskInstance.getProcessDefine() == null ? 0 : taskInstance.getProcessDefine().getUserId();
Tenant tenant = processService.getTenantForProcess(taskInstance.getProcessInstance().getTenantId(), userId);
if (verifyTenantIsNull(tenant, taskInstance)) {
processService.changeTaskState(taskInstance, ExecutionStatus.FAILURE,
taskInstance.getStartTime(),
taskInstance.getHost(),
null,
null);
return null;
}
String userQueue = processService.queryUserQueueByProcessInstance(taskInstance.getProcessInstance());
taskInstance.getProcessInstance().setQueue(StringUtils.isEmpty(userQueue) ? tenant.getQueue() : userQueue);
taskInstance.getProcessInstance().setTenantCode(tenant.getTenantCode());
taskInstance.setResources(getResourceFullNames(taskInstance));
TaskChannel taskChannel = taskPluginManager.getTaskChannel(taskInstance.getTaskType());
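// Note: getResources may legitimately return null for channels that reference no external
// resources (SqoopTaskChannel, for example, returns null); setTaskResourceInfo below tolerates that.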
ResourceParametersHelper resources = taskChannel.getResources(taskInstance.getTaskParams());
this.setTaskResourceInfo(resources);
DataQualityTaskExecutionContext dataQualityTaskExecutionContext = new DataQualityTaskExecutionContext(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | if (TASK_TYPE_DATA_QUALITY.equalsIgnoreCase(taskInstance.getTaskType())) {
setDataQualityTaskRelation(dataQualityTaskExecutionContext,taskInstance,tenant.getTenantCode());
}
return TaskExecutionContextBuilder.get()
.buildTaskInstanceRelatedInfo(taskInstance)
.buildTaskDefinitionRelatedInfo(taskInstance.getTaskDefine())
.buildProcessInstanceRelatedInfo(taskInstance.getProcessInstance())
.buildProcessDefinitionRelatedInfo(taskInstance.getProcessDefine())
.buildResourceParametersInfo(resources)
.buildDataQualityTaskExecutionContext(dataQualityTaskExecutionContext)
.create();
}
private void setTaskResourceInfo(ResourceParametersHelper resourceParametersHelper) {
if (Objects.isNull(resourceParametersHelper)) {
return;
}
resourceParametersHelper.getResourceMap().forEach((type, map) -> {
switch (type) {
case DATASOURCE:
this.setTaskDataSourceResourceInfo(map);
break;
case UDF:
this.setTaskUdfFuncResourceInfo(map);
break;
default:
break;
}
});
}
private void setTaskDataSourceResourceInfo(Map<Integer, AbstractResourceParameters> map) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | if (MapUtils.isEmpty(map)) {
return;
}
map.forEach((code, parameters) -> {
DataSource datasource = processService.findDataSourceById(code);
DataSourceParameters dataSourceParameters = new DataSourceParameters();
dataSourceParameters.setType(datasource.getType());
dataSourceParameters.setConnectionParams(datasource.getConnectionParams());
map.put(code, dataSourceParameters);
});
}
private void setTaskUdfFuncResourceInfo(Map<Integer, AbstractResourceParameters> map) {
if (MapUtils.isEmpty(map)) {
return;
}
List<UdfFunc> udfFuncList = processService.queryUdfFunListByIds(map.keySet().toArray(new Integer[map.size()]));
udfFuncList.forEach(udfFunc -> {
UdfFuncParameters udfFuncParameters = JSONUtils.parseObject(JSONUtils.toJsonString(udfFunc), UdfFuncParameters.class);
udfFuncParameters.setDefaultFS(HadoopUtils.getInstance().getDefaultFS());
String tenantCode = processService.queryTenantCodeByResName(udfFunc.getResourceName(), ResourceType.UDF);
udfFuncParameters.setTenantCode(tenantCode);
map.put(udfFunc.getId(), udfFuncParameters);
});
}
/**
* set data quality task relation
*
* @param dataQualityTaskExecutionContext dataQualityTaskExecutionContext
 * @param taskInstance taskInstance
 * @param tenantCode tenantCode
 */
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | private void setDataQualityTaskRelation(DataQualityTaskExecutionContext dataQualityTaskExecutionContext, TaskInstance taskInstance, String tenantCode) {
DataQualityParameters dataQualityParameters =
JSONUtils.parseObject(taskInstance.getTaskParams(), DataQualityParameters.class);
if (dataQualityParameters == null) {
return;
}
Map<String,String> config = dataQualityParameters.getRuleInputParameter();
int ruleId = dataQualityParameters.getRuleId();
DqRule dqRule = processService.getDqRule(ruleId);
if (dqRule == null) {
logger.error("can not get DqRule by id {}",ruleId);
return;
}
dataQualityTaskExecutionContext.setRuleId(ruleId);
dataQualityTaskExecutionContext.setRuleType(dqRule.getType());
dataQualityTaskExecutionContext.setRuleName(dqRule.getName());
List<DqRuleInputEntry> ruleInputEntryList = processService.getRuleInputEntry(ruleId);
if (CollectionUtils.isEmpty(ruleInputEntryList)) {
logger.error("{} rule input entry list is empty ",ruleId);
return;
}
List<DqRuleExecuteSql> executeSqlList = processService.getDqExecuteSql(ruleId);
setComparisonParams(dataQualityTaskExecutionContext, config, ruleInputEntryList, executeSqlList);
dataQualityTaskExecutionContext.setRuleInputEntryList(JSONUtils.toJsonString(ruleInputEntryList));
dataQualityTaskExecutionContext.setExecuteSqlList(JSONUtils.toJsonString(executeSqlList));
dataQualityTaskExecutionContext.setHdfsPath(
PropertyUtils.getString(Constants.FS_DEFAULT_FS)
+ PropertyUtils.getString(
Constants.DATA_QUALITY_ERROR_OUTPUT_PATH, |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | "/user/" + tenantCode + "/data_quality_error_data"));
setSourceConfig(dataQualityTaskExecutionContext, config);
setTargetConfig(dataQualityTaskExecutionContext, config);
setWriterConfig(dataQualityTaskExecutionContext);
setStatisticsValueWriterConfig(dataQualityTaskExecutionContext);
}
/**
 * It is used to get the comparison params; the params contain the
 * comparison name, comparison table and execute SQL.
* When the type is fixed_value, params will be null.
* @param dataQualityTaskExecutionContext
* @param config
* @param ruleInputEntryList
* @param executeSqlList
*/
private void setComparisonParams(DataQualityTaskExecutionContext dataQualityTaskExecutionContext,
Map<String, String> config,
List<DqRuleInputEntry> ruleInputEntryList,
List<DqRuleExecuteSql> executeSqlList) {
if (config.get(COMPARISON_TYPE) != null) {
int comparisonTypeId = Integer.parseInt(config.get(COMPARISON_TYPE));
// comparison type id 1 means comparing with a fixed value; larger ids reference a DqComparisonType definition
if (comparisonTypeId > 1) {
DqComparisonType type = processService.getComparisonTypeById(comparisonTypeId);
if (type != null) {
DqRuleInputEntry comparisonName = new DqRuleInputEntry();
comparisonName.setField(COMPARISON_NAME);
comparisonName.setValue(type.getName());
ruleInputEntryList.add(comparisonName);
DqRuleInputEntry comparisonTable = new DqRuleInputEntry(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | comparisonTable.setField(COMPARISON_TABLE);
comparisonTable.setValue(type.getOutputTable());
ruleInputEntryList.add(comparisonTable);
if (executeSqlList == null) {
executeSqlList = new ArrayList<>();
}
DqRuleExecuteSql dqRuleExecuteSql = new DqRuleExecuteSql();
dqRuleExecuteSql.setType(ExecuteSqlType.MIDDLE.getCode());
dqRuleExecuteSql.setIndex(1);
dqRuleExecuteSql.setSql(type.getExecuteSql());
dqRuleExecuteSql.setTableAlias(type.getOutputTable());
executeSqlList.add(0,dqRuleExecuteSql);
if (Boolean.TRUE.equals(type.getInnerSource())) {
dataQualityTaskExecutionContext.setComparisonNeedStatisticsValueTable(true);
}
}
} else if (comparisonTypeId == 1) {
dataQualityTaskExecutionContext.setCompareWithFixedValue(true);
}
}
}
/**
* The default datasource is used to get the dolphinscheduler datasource info,
* and the info will be used in StatisticsValueConfig and WriterConfig
* @return DataSource
*/
public DataSource getDefaultDataSource() {
DataSource dataSource = new DataSource();
HikariDataSource hikariDataSource = (HikariDataSource)defaultDataSource;
dataSource.setUserName(hikariDataSource.getUsername()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | JdbcInfo jdbcInfo = JdbcUrlParser.getJdbcInfo(hikariDataSource.getJdbcUrl());
if (jdbcInfo != null) {
Properties properties = new Properties();
properties.setProperty(USER,hikariDataSource.getUsername());
properties.setProperty(PASSWORD,hikariDataSource.getPassword());
properties.setProperty(DATABASE, jdbcInfo.getDatabase());
properties.setProperty(ADDRESS,jdbcInfo.getAddress());
properties.setProperty(OTHER,jdbcInfo.getParams());
properties.setProperty(JDBC_URL,jdbcInfo.getAddress() + SINGLE_SLASH + jdbcInfo.getDatabase());
dataSource.setType(DbType.of(JdbcUrlParser.getDbType(jdbcInfo.getDriverName()).getCode()));
dataSource.setConnectionParams(JSONUtils.toJsonString(properties));
}
return dataSource;
}
/**
* The StatisticsValueWriterConfig will be used in DataQualityApplication that
* writes the statistics value into dolphin scheduler datasource
* @param dataQualityTaskExecutionContext
*/
private void setStatisticsValueWriterConfig(DataQualityTaskExecutionContext dataQualityTaskExecutionContext) {
DataSource dataSource = getDefaultDataSource();
ConnectorType writerConnectorType = ConnectorType.of(dataSource.getType().isHive() ? 1 : 0);
dataQualityTaskExecutionContext.setStatisticsValueConnectorType(writerConnectorType.getDescription());
dataQualityTaskExecutionContext.setStatisticsValueType(dataSource.getType().getCode());
dataQualityTaskExecutionContext.setStatisticsValueWriterConnectionParams(dataSource.getConnectionParams());
dataQualityTaskExecutionContext.setStatisticsValueTable("t_ds_dq_task_statistics_value");
}
/**
* The WriterConfig will be used in DataQualityApplication that
* writes the data quality check result into dolphin scheduler datasource |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | * @param dataQualityTaskExecutionContext
*/
private void setWriterConfig(DataQualityTaskExecutionContext dataQualityTaskExecutionContext) {
DataSource dataSource = getDefaultDataSource();
ConnectorType writerConnectorType = ConnectorType.of(dataSource.getType().isHive() ? 1 : 0);
dataQualityTaskExecutionContext.setWriterConnectorType(writerConnectorType.getDescription());
dataQualityTaskExecutionContext.setWriterType(dataSource.getType().getCode());
dataQualityTaskExecutionContext.setWriterConnectionParams(dataSource.getConnectionParams());
dataQualityTaskExecutionContext.setWriterTable("t_ds_dq_execute_result");
}
/**
* The TargetConfig will be used in DataQualityApplication that
 * gets the data that is compared against the source value
* @param dataQualityTaskExecutionContext
* @param config
*/
private void setTargetConfig(DataQualityTaskExecutionContext dataQualityTaskExecutionContext, Map<String, String> config) {
if (StringUtils.isNotEmpty(config.get(TARGET_DATASOURCE_ID))) {
DataSource dataSource = processService.findDataSourceById(Integer.parseInt(config.get(TARGET_DATASOURCE_ID)));
if (dataSource != null) {
ConnectorType targetConnectorType = ConnectorType.of(
DbType.of(Integer.parseInt(config.get(TARGET_CONNECTOR_TYPE))).isHive() ? 1 : 0);
dataQualityTaskExecutionContext.setTargetConnectorType(targetConnectorType.getDescription());
dataQualityTaskExecutionContext.setTargetType(dataSource.getType().getCode());
dataQualityTaskExecutionContext.setTargetConnectionParams(dataSource.getConnectionParams());
}
}
}
/**
* The SourceConfig will be used in DataQualityApplication that |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | * get the data which be used to get the statistics value
* @param dataQualityTaskExecutionContext
* @param config
*/
private void setSourceConfig(DataQualityTaskExecutionContext dataQualityTaskExecutionContext, Map<String, String> config) {
if (StringUtils.isNotEmpty(config.get(SRC_DATASOURCE_ID))) {
DataSource dataSource = processService.findDataSourceById(Integer.parseInt(config.get(SRC_DATASOURCE_ID)));
if (dataSource != null) {
ConnectorType srcConnectorType = ConnectorType.of(
DbType.of(Integer.parseInt(config.get(SRC_CONNECTOR_TYPE))).isHive() ? 1 : 0);
dataQualityTaskExecutionContext.setSourceConnectorType(srcConnectorType.getDescription());
dataQualityTaskExecutionContext.setSourceType(dataSource.getType().getCode());
dataQualityTaskExecutionContext.setSourceConnectionParams(dataSource.getConnectionParams());
}
}
}
/**
 * whether tenant is null
*
* @param tenant tenant
* @param taskInstance taskInstance
* @return result
*/
protected boolean verifyTenantIsNull(Tenant tenant, TaskInstance taskInstance) {
if (tenant == null) {
logger.error("tenant not exists,process instance id : {},task instance id : {}",
taskInstance.getProcessInstance().getId(),
taskInstance.getId());
return true;
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/BaseTaskProcessor.java | return false;
}
/**
 * get resource map: the key is the resource full name and the value is the tenant code
*/
protected Map<String, String> getResourceFullNames(TaskInstance taskInstance) {
Map<String, String> resourcesMap = new HashMap<>();
AbstractParameters baseParam = taskPluginManager.getParameters(ParametersNode.builder().taskType(taskInstance.getTaskType()).taskParams(taskInstance.getTaskParams()).build());
if (baseParam != null) {
List<ResourceInfo> projectResourceFiles = baseParam.getResourceFilesList();
if (CollectionUtils.isNotEmpty(projectResourceFiles)) {
// old-style resources are referenced by full name only (resource id == 0)
Set<ResourceInfo> oldVersionResources = projectResourceFiles.stream().filter(t -> t.getId() == 0).collect(Collectors.toSet());
if (CollectionUtils.isNotEmpty(oldVersionResources)) {
oldVersionResources.forEach(t -> resourcesMap.put(t.getRes(), processService.queryTenantCodeByResName(t.getRes(), ResourceType.FILE)));
}
// new-style resources are referenced by id; collect the ids and resolve them in one batch query
Stream<Integer> resourceIdStream = projectResourceFiles.stream().map(ResourceInfo::getId);
Set<Integer> resourceIdsSet = resourceIdStream.collect(Collectors.toSet());
if (CollectionUtils.isNotEmpty(resourceIdsSet)) {
Integer[] resourceIds = resourceIdsSet.toArray(new Integer[resourceIdsSet.size()]);
List<Resource> resources = processService.listResourceByIds(resourceIds);
resources.forEach(t -> resourcesMap.put(t.getFullName(), processService.queryTenantCodeByResName(t.getFullName(), ResourceType.FILE)));
}
}
}
return resourcesMap;
}
} |
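// As a hedged illustration of the rule input map consumed by setComparisonParams and the
// source/target config setters above: the keys mirror the DataQualityConstants imported in this
// file, while the concrete values are invented:
//
//   Map<String, String> ruleInput = new HashMap<>();
//   ruleInput.put(SRC_CONNECTOR_TYPE, "0");      // e.g. a JDBC source
//   ruleInput.put(SRC_DATASOURCE_ID, "1");
//   ruleInput.put(TARGET_CONNECTOR_TYPE, "0");
//   ruleInput.put(TARGET_DATASOURCE_ID, "2");
//   ruleInput.put(COMPARISON_TYPE, "1");         // 1 => compare against a fixed value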
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.datax;
import org.apache.dolphinscheduler.plugin.task.api.enums.ResourceType;
import org.apache.dolphinscheduler.plugin.task.api.model.ResourceInfo;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.DataSourceParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.spi.enums.Flag;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
/**
* DataX parameter
*/
public class DataxParameters extends AbstractParameters {
/**
 * whether to use a custom JSON config: 0 means no, 1 means yes
*/
private int customConfig;
/**
 * if customConfig equals 1, then json is used
*/
private String json;
/**
 * data source type, e.g. MYSQL, POSTGRES ...
*/
private String dsType;
/**
* datasource id
*/
private int dataSource;
/**
 * data target type, e.g. MYSQL, POSTGRES ...
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | private String dtType;
/**
 * data target id
*/
private int dataTarget;
/**
* sql
*/
private String sql;
/**
* target table
*/
private String targetTable;
/**
* Pre Statements
*/
private List<String> preStatements;
/**
* Post Statements
*/
private List<String> postStatements;
/**
* speed byte num
*/
private int jobSpeedByte;
/**
* speed record count
*/
private int jobSpeedRecord;
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | * Xms memory
*/
private int xms;
/**
* Xmx memory
*/
private int xmx;
public int getCustomConfig() {
return customConfig;
}
public void setCustomConfig(int customConfig) {
this.customConfig = customConfig;
}
public String getJson() {
return json;
}
public void setJson(String json) {
this.json = json;
}
public String getDsType() {
return dsType;
}
public void setDsType(String dsType) {
this.dsType = dsType;
}
public int getDataSource() {
return dataSource;
}
public void setDataSource(int dataSource) {
this.dataSource = dataSource; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | }
public String getDtType() {
return dtType;
}
public void setDtType(String dtType) {
this.dtType = dtType;
}
public int getDataTarget() {
return dataTarget;
}
public void setDataTarget(int dataTarget) {
this.dataTarget = dataTarget;
}
public String getSql() {
return sql;
}
public void setSql(String sql) {
this.sql = sql;
}
public String getTargetTable() {
return targetTable;
}
public void setTargetTable(String targetTable) {
this.targetTable = targetTable;
}
public List<String> getPreStatements() {
return preStatements;
}
public void setPreStatements(List<String> preStatements) {
this.preStatements = preStatements; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Datax Task custom JSON configuration and Sqoop task template configuration Null pointer exception
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | }
public List<String> getPostStatements() {
return postStatements;
}
public void setPostStatements(List<String> postStatements) {
this.postStatements = postStatements;
}
public int getJobSpeedByte() {
return jobSpeedByte;
}
public void setJobSpeedByte(int jobSpeedByte) {
this.jobSpeedByte = jobSpeedByte;
}
public int getJobSpeedRecord() {
return jobSpeedRecord;
}
public void setJobSpeedRecord(int jobSpeedRecord) {
this.jobSpeedRecord = jobSpeedRecord;
}
public int getXms() {
return xms;
}
public void setXms(int xms) {
this.xms = xms;
}
public int getXmx() {
return xmx;
}
public void setXmx(int xmx) {
this.xmx = xmx; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | }
@Override
public boolean checkParameters() {
if (customConfig == Flag.NO.ordinal()) {
return dataSource != 0
&& dataTarget != 0
&& StringUtils.isNotEmpty(sql)
&& StringUtils.isNotEmpty(targetTable);
} else {
return StringUtils.isNotEmpty(json);
}
}
@Override
public List<ResourceInfo> getResourceFilesList() {
return new ArrayList<>();
}
@Override
public String toString() {
return "DataxParameters{"
+ "customConfig=" + customConfig
+ ", json='" + json + '\''
+ ", dsType='" + dsType + '\''
+ ", dataSource=" + dataSource
+ ", dtType='" + dtType + '\''
+ ", dataTarget=" + dataTarget
+ ", sql='" + sql + '\''
+ ", targetTable='" + targetTable + '\''
+ ", preStatements=" + preStatements
+ ", postStatements=" + postStatements
+ ", jobSpeedByte=" + jobSpeedByte |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-datax/src/main/java/org/apache/dolphinscheduler/plugin/task/datax/DataxParameters.java | + ", jobSpeedRecord=" + jobSpeedRecord
+ ", xms=" + xms
+ ", xmx=" + xmx
+ '}';
}
@Override
public ResourceParametersHelper getResources() {
ResourceParametersHelper resources = super.getResources();
resources.put(ResourceType.DATASOURCE, dataSource);
resources.put(ResourceType.DATASOURCE, dataTarget);
return resources;
}
public DataxTaskExecutionContext generateExtendedContext(ResourceParametersHelper parametersHelper) {
DataSourceParameters dbSource = (DataSourceParameters) parametersHelper.getResourceParameters(ResourceType.DATASOURCE, dataSource);
DataSourceParameters dbTarget = (DataSourceParameters) parametersHelper.getResourceParameters(ResourceType.DATASOURCE, dataTarget);
DataxTaskExecutionContext dataxTaskExecutionContext = new DataxTaskExecutionContext();
if (Objects.nonNull(dbSource)) {
dataxTaskExecutionContext.setDataSourceId(dataSource);
dataxTaskExecutionContext.setSourcetype(dbSource.getType());
dataxTaskExecutionContext.setSourceConnectionParams(dbSource.getConnectionParams());
}
if (Objects.nonNull(dbTarget)) {
dataxTaskExecutionContext.setDataTargetId(dataTarget);
dataxTaskExecutionContext.setTargetType(dbTarget.getType());
dataxTaskExecutionContext.setTargetConnectionParams(dbTarget.getConnectionParams());
}
return dataxTaskExecutionContext;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.task.sqoop.parameter;
import org.apache.dolphinscheduler.plugin.task.api.enums.ResourceType;
import org.apache.dolphinscheduler.plugin.task.api.model.Property;
import org.apache.dolphinscheduler.plugin.task.api.parameters.AbstractParameters; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.DataSourceParameters;
import org.apache.dolphinscheduler.plugin.task.api.parameters.resource.ResourceParametersHelper;
import org.apache.dolphinscheduler.plugin.task.sqoop.SqoopJobType;
import org.apache.dolphinscheduler.plugin.task.sqoop.SqoopTaskExecutionContext;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.sources.SourceMysqlParameter;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.targets.TargetMysqlParameter;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import java.util.List;
import java.util.Objects;
/**
* sqoop parameters
*/
public class SqoopParameters extends AbstractParameters {
/**
* sqoop job type:
* CUSTOM - custom sqoop job
* TEMPLATE - sqoop template job
*/
private String jobType;
/**
* customJob eq 1, use customShell
*/
private String customShell;
/**
* sqoop job name - map-reduce job name
*/
private String jobName;
/**
* model type |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | */
private String modelType;
/**
* concurrency
*/
private int concurrency;
/**
* source type
*/
private String sourceType;
/**
* target type
*/
private String targetType;
/**
* source params
*/
private String sourceParams;
/**
* target params
*/
private String targetParams;
/**
* hadoop custom param for sqoop job
*/
private List<Property> hadoopCustomParams;
/**
* sqoop advanced param
*/
private List<Property> sqoopAdvancedParams; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | public String getModelType() {
return modelType;
}
public void setModelType(String modelType) {
this.modelType = modelType;
}
public int getConcurrency() {
return concurrency;
}
public void setConcurrency(int concurrency) {
this.concurrency = concurrency;
}
public String getSourceType() {
return sourceType;
}
public void setSourceType(String sourceType) {
this.sourceType = sourceType;
}
public String getTargetType() {
return targetType;
}
public void setTargetType(String targetType) {
this.targetType = targetType;
}
public String getSourceParams() {
return sourceParams;
}
public void setSourceParams(String sourceParams) {
this.sourceParams = sourceParams;
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | public String getTargetParams() {
return targetParams;
}
public void setTargetParams(String targetParams) {
this.targetParams = targetParams;
}
public String getJobType() {
return jobType;
}
public void setJobType(String jobType) {
this.jobType = jobType;
}
public String getJobName() {
return jobName;
}
public void setJobName(String jobName) {
this.jobName = jobName;
}
public String getCustomShell() {
return customShell;
}
public void setCustomShell(String customShell) {
this.customShell = customShell;
}
public List<Property> getHadoopCustomParams() {
return hadoopCustomParams;
}
public void setHadoopCustomParams(List<Property> hadoopCustomParams) {
this.hadoopCustomParams = hadoopCustomParams;
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | public List<Property> getSqoopAdvancedParams() {
return sqoopAdvancedParams;
}
public void setSqoopAdvancedParams(List<Property> sqoopAdvancedParams) {
this.sqoopAdvancedParams = sqoopAdvancedParams;
}
@Override
public boolean checkParameters() {
boolean sqoopParamsCheck = false;
if (StringUtils.isEmpty(jobType)) {
return sqoopParamsCheck;
}
if (SqoopJobType.TEMPLATE.getDescp().equals(jobType)) {
sqoopParamsCheck = StringUtils.isEmpty(customShell)
&& StringUtils.isNotEmpty(modelType)
&& StringUtils.isNotEmpty(jobName)
&& concurrency != 0
&& StringUtils.isNotEmpty(sourceType)
&& StringUtils.isNotEmpty(targetType)
&& StringUtils.isNotEmpty(sourceParams)
&& StringUtils.isNotEmpty(targetParams);
} else if (SqoopJobType.CUSTOM.getDescp().equals(jobType)) {
sqoopParamsCheck = StringUtils.isNotEmpty(customShell)
&& StringUtils.isEmpty(jobName);
}
return sqoopParamsCheck;
}
@Override
public ResourceParametersHelper getResources() {
ResourceParametersHelper resources = super.getResources(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 9,191 | [Bug] [Task] DataX Task custom JSON configuration and Sqoop task template configuration Null pointer exception | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
DataX tasks with a custom JSON configuration and Sqoop tasks with a template configuration throw a NullPointerException.
### What you expected to happen
The task executes normally
### How to reproduce
Execute DataX and Sqoop tasks
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/9191 | https://github.com/apache/dolphinscheduler/pull/9192 | e8eb50e7388ae04251d05594880d220da8cc666f | d7d756e7b0165bfcdc2e0bffcfcb45892e23feb5 | 2022-03-25T09:08:55Z | java | 2022-03-26T12:15:41Z | dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/SqoopParameters.java | if (SqoopJobType.TEMPLATE.getDescp().equals(this.getJobType())) {
SourceMysqlParameter sourceMysqlParameter = JSONUtils.parseObject(this.getSourceParams(), SourceMysqlParameter.class);
TargetMysqlParameter targetMysqlParameter = JSONUtils.parseObject(this.getTargetParams(), TargetMysqlParameter.class);
resources.put(ResourceType.DATASOURCE, sourceMysqlParameter.getSrcDatasource());
resources.put(ResourceType.DATASOURCE, targetMysqlParameter.getTargetDatasource());
}
return resources;
}
public SqoopTaskExecutionContext generateExtendedContext(ResourceParametersHelper parametersHelper) {
SqoopTaskExecutionContext sqoopTaskExecutionContext = new SqoopTaskExecutionContext();
if (SqoopJobType.TEMPLATE.getDescp().equals(this.getJobType())) {
SourceMysqlParameter sourceMysqlParameter = JSONUtils.parseObject(this.getSourceParams(), SourceMysqlParameter.class);
TargetMysqlParameter targetMysqlParameter = JSONUtils.parseObject(this.getTargetParams(), TargetMysqlParameter.class);
DataSourceParameters dataSource = (DataSourceParameters) parametersHelper.getResourceParameters(ResourceType.DATASOURCE, sourceMysqlParameter.getSrcDatasource());
DataSourceParameters dataTarget = (DataSourceParameters) parametersHelper.getResourceParameters(ResourceType.DATASOURCE, targetMysqlParameter.getTargetDatasource());
if (Objects.nonNull(dataSource)) {
sqoopTaskExecutionContext.setDataSourceId(sourceMysqlParameter.getSrcDatasource());
sqoopTaskExecutionContext.setSourcetype(dataSource.getType());
sqoopTaskExecutionContext.setSourceConnectionParams(dataSource.getConnectionParams());
}
if (Objects.nonNull(dataTarget)) {
sqoopTaskExecutionContext.setDataTargetId(targetMysqlParameter.getTargetDatasource());
sqoopTaskExecutionContext.setTargetType(dataTarget.getType());
sqoopTaskExecutionContext.setTargetConnectionParams(dataTarget.getConnectionParams());
}
}
return sqoopTaskExecutionContext;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.registry;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_MASTERS;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_WORKERS;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.entity.WorkerGroup;
import org.apache.dolphinscheduler.dao.mapper.WorkerGroupMapper;
import org.apache.dolphinscheduler.registry.api.Event; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | import org.apache.dolphinscheduler.registry.api.Event.Type;
import org.apache.dolphinscheduler.registry.api.SubscribeListener;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.service.queue.MasterPriorityQueue;
import org.apache.dolphinscheduler.service.registry.RegistryClient;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
/**
* server node manager
*/
@Service |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | public class ServerNodeManager implements InitializingBean {
private final Logger logger = LoggerFactory.getLogger(ServerNodeManager.class);
/**
* master lock
*/
private final Lock masterLock = new ReentrantLock();
/**
* worker group lock
*/
private final Lock workerGroupLock = new ReentrantLock(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | /**
* worker node info lock
*/
private final Lock workerNodeInfoLock = new ReentrantLock();
/**
* worker group nodes
*/
private final ConcurrentHashMap<String, Set<String>> workerGroupNodes = new ConcurrentHashMap<>();
/**
* master nodes
*/
private final Set<String> masterNodes = new HashSet<>();
/**
* worker node info
*/
private final Map<String, String> workerNodeInfo = new HashMap<>();
/**
* executor service
*/
private ScheduledExecutorService executorService;
@Autowired
private RegistryClient registryClient;
/**
* eg : /node/worker/group/127.0.0.1:xxx
*/
private static final int WORKER_LISTENER_CHECK_LENGTH = 5;
/**
* worker group mapper
*/
@Autowired |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | private WorkerGroupMapper workerGroupMapper;
private final MasterPriorityQueue masterPriorityQueue = new MasterPriorityQueue();
/**
* alert dao
*/
@Autowired
private AlertDao alertDao;
private static volatile int MASTER_SLOT = 0;
private static volatile int MASTER_SIZE = 0;
public static int getSlot() {
return MASTER_SLOT;
}
public static int getMasterSize() {
return MASTER_SIZE;
}
/**
* init listener
*
* @throws Exception if error throws Exception
*/
@Override
public void afterPropertiesSet() throws Exception {
/**
* load nodes from zookeeper
*/
load();
/**
* init executor service
*/
executorService = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("ServerNodeManagerExecutor")); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | executorService.scheduleWithFixedDelay(new WorkerNodeInfoAndGroupDbSyncTask(), 0, 10, TimeUnit.SECONDS);
/*
* init MasterNodeListener listener
*/
registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_MASTERS, new MasterDataListener());
/*
* init WorkerNodeListener listener
*/
registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_WORKERS, new WorkerDataListener());
}
/**
* load nodes from zookeeper
*/
public void load() {
/*
* master nodes from zookeeper
*/
updateMasterNodes();
/*
* worker group nodes from zookeeper
*/
Collection<String> workerGroups = registryClient.getWorkerGroupDirectly();
for (String workerGroup : workerGroups) {
syncWorkerGroupNodes(workerGroup, registryClient.getWorkerGroupNodesDirectly(workerGroup));
}
}
/**
* worker node info and worker group db sync task
*/
class WorkerNodeInfoAndGroupDbSyncTask implements Runnable { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 8,980 | [Bug] [Master] master can repeat processing command | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Question one: when a master comes online or goes offline, the other active masters execute the `updateMasterNodes` method, which first resets `MASTER_SLOT` to zero, then acquires a lock from ZooKeeper and executes the `syncMasterNodes` method serially, which re-initializes `MASTER_SIZE` and `MASTER_SLOT`.
So, while the lock is still being contended, each master holds the same (and still valid) `MASTER_SIZE` and `MASTER_SLOT` values, which means the commands they scan are likely to be the same. Even though `command2ProcessInstance` double-checks the slot, the same command is still processed more than once if the slot has not changed before that double check runs.
(If a master goes online or offline, the other masters are notified to run updateMasterNodes, which first sets MASTER_SLOT=0 (0 is a valid value) and then competes serially for the ZooKeeper lock; only the master that gets the lock executes syncMasterNodes, and only there is MASTER_SIZE changed. That is, the masters that have not yet obtained the lock share the same MASTER_SIZE and all have MASTER_SLOT=0, so the commands they scan are the same. Although there is a second check before a command is converted to a process instance, if the lock has not been obtained at that point the values are unchanged, the check still passes, and multiple masters handle the same command.)


Question two: because each master processes at a different pace and commands are deleted once they have been handled, a master paging through the command table skips part of the data when it fetches the next page.

### What you expected to happen
No 1. A command should not be processed more than once
No 2. As far as possible, ensure that process instances are generated in order
### How to reproduce
I am sorry, I don't have enough nodes to test this, but I can provide a plan.
1. Prepare four master nodes
2. Sleep for 30 seconds after acquiring the lock
3. Insert a large number of commands
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/8980 | https://github.com/apache/dolphinscheduler/pull/9220 | 7553ae5a1707b1e735b7b192777bda20e454463f | 258285e6bb196439efdf7fa18d536723e779fe4f | 2022-03-18T06:11:01Z | java | 2022-03-27T15:33:30Z | dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/ServerNodeManager.java | @Override
public void run() {
try {
Map<String, String> newWorkerNodeInfo = registryClient.getServerMaps(NodeType.WORKER, true);
syncAllWorkerNodeInfo(newWorkerNodeInfo);
List<WorkerGroup> workerGroupList = workerGroupMapper.queryAllWorkerGroup();
if (CollectionUtils.isNotEmpty(workerGroupList)) {
for (WorkerGroup wg : workerGroupList) {
String workerGroup = wg.getName();
Set<String> nodes = new HashSet<>();
String[] addrs = wg.getAddrList().split(Constants.COMMA);
for (String addr : addrs) {
if (newWorkerNodeInfo.containsKey(addr)) {
nodes.add(addr);
}
}
if (!nodes.isEmpty()) {
syncWorkerGroupNodes(workerGroup, nodes);
}
} |