Dataset schema (fields of each record):

| field | type / stats |
|---|---|
| status | stringclasses 1 |
| repo_name | stringclasses 31 |
| repo_url | stringclasses 31 |
| issue_id | int64, 1 – 104k |
| title | stringlengths 4 – 233 |
| body | stringlengths 0 – 186k (nullable) |
| issue_url | stringlengths 38 – 56 |
| pull_url | stringlengths 37 – 54 |
| before_fix_sha | stringlengths 40 |
| after_fix_sha | stringlengths 40 |
| report_datetime | unknown |
| language | stringclasses 5 |
| commit_datetime | unknown |
| updated_file | stringlengths 7 – 188 |
| chunk_content | stringlengths 1 – 1.03M |
status: closed | repo: apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | issue_id: 5475

**[Improvement][Api] Upload resource to remote failed, the local tmp file needs to be cleared**

**Describe the question**
When we upload a resource file, DolphinScheduler does three things:
1. create a local tmp file
2. copy the local tmp file to remote storage
3. delete the local tmp file
https://github.com/apache/dolphinscheduler/blob/d04f4b60535cd86905e56b0a732f2ec038680eb7/dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ResourcesServiceImpl.java#L595-L605
But when the second step fails, the local tmp file is never cleaned up.

**Which version of DolphinScheduler:**
- [1.3.6]
- [dev]

**Describe alternatives you've considered**
When the upload to remote throws an exception, clean up the local tmp file.
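The requested improvement can be sketched with a `finally` block that removes the local tmp file no matter how the remote copy ends. This is a minimal illustration, not the actual code of PR #5476; `uploadWithCleanup` and the `remoteCopyFails` flag are hypothetical stand-ins for the real `upload` method and a failing `HadoopUtils.copyLocalToHdfs` call.

```java
import java.io.File;
import java.io.IOException;

public class TmpFileCleanupSketch {

    /**
     * Hypothetical stand-in for the upload step: the local tmp file has
     * already been written, and we now try to push it to remote storage.
     * The finally block guarantees the tmp file is removed even when the
     * remote copy throws.
     */
    static boolean uploadWithCleanup(File localTmp, boolean remoteCopyFails) {
        try {
            // stand-in for HadoopUtils.getInstance().copyLocalToHdfs(...)
            if (remoteCopyFails) {
                throw new IOException("copy local to remote failed");
            }
            return true;
        } catch (IOException e) {
            // the original code logged and returned false here,
            // leaving the tmp file behind
            return false;
        } finally {
            // the improvement requested by the issue: always clear the tmp file
            if (localTmp.exists() && !localTmp.delete()) {
                System.err.println("failed to delete tmp file " + localTmp);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("ds-upload-", ".data");
        boolean ok = uploadWithCleanup(tmp, true); // simulate a remote failure
        System.out.println("upload succeeded: " + ok);
        System.out.println("tmp file left behind: " + tmp.exists());
    }
}
```

Whether the real fix uses `finally` or an explicit delete in the `catch` branch is a design detail of the PR; the invariant either way is that a failed remote copy must not leak the local tmp file.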
issue_url: https://github.com/apache/dolphinscheduler/issues/5475
pull_url: https://github.com/apache/dolphinscheduler/pull/5476
before_fix_sha: d04f4b60535cd86905e56b0a732f2ec038680eb7
after_fix_sha: 68301db6b914ff4002bfbc531c6810864d8e47c2
report_datetime: 2021-05-15T03:21:13Z | language: java | commit_datetime: 2021-05-17T03:13:14Z
updated_file: dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ResourcesServiceImpl.java

} catch (DuplicateKeyException e) {
logger.error("resource directory {} has exist, can't recreate", fullName);
putMsg(result, Status.RESOURCE_EXIST);
return result;
} catch (Exception e) {
logger.error("resource already exists, can't recreate ", e);
throw new ServiceException("resource already exists, can't recreate");
}
createDirectory(loginUser,fullName,type,result);
return result;
}
/**
* create resource
*
* @param loginUser login user
* @param name alias
* @param desc description
* @param file file
* @param type type
* @param pid parent id
* @param currentDir current directory
* @return create result code
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> createResource(User loginUser,
String name,
String desc,
ResourceType type,
MultipartFile file,
int pid,
String currentDir) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
result = verifyPid(loginUser, pid);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
result = verifyFile(name, type, file);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
String fullName = currentDir.equals("/") ? String.format("%s%s",currentDir,name) : String.format("%s/%s",currentDir,name);
if (checkResourceExists(fullName, 0, type.ordinal())) {
logger.error("resource {} has exist, can't recreate", RegexUtils.escapeNRT(name));
putMsg(result, Status.RESOURCE_EXIST);
return result;
}
Date now = new Date();
Resource resource = new Resource(pid,name,fullName,false,desc,file.getOriginalFilename(),loginUser.getId(),type,file.getSize(),now,now);
try {
resourcesMapper.insert(resource);
putMsg(result, Status.SUCCESS);
Map<Object, Object> dataMap = new BeanMap(resource);
Map<String, Object> resultMap = new HashMap<>();
for (Map.Entry<Object, Object> entry: dataMap.entrySet()) {
if (!"class".equalsIgnoreCase(entry.getKey().toString())) {
resultMap.put(entry.getKey().toString(), entry.getValue());
}
}
result.setData(resultMap);
} catch (Exception e) {
logger.error("resource already exists, can't recreate ", e);
throw new ServiceException("resource already exists, can't recreate");
}
if (!upload(loginUser, fullName, file, type)) {
logger.error("upload resource: {} file: {} failed.", RegexUtils.escapeNRT(name), RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.HDFS_OPERATION_ERROR);
throw new ServiceException(String.format("upload resource: %s file: %s failed.", name, file.getOriginalFilename()));
}
return result;
}
/**
* check resource is exists
*
* @param fullName fullName
* @param userId user id
* @param type type
* @return true if resource exists
*/
private boolean checkResourceExists(String fullName, int userId, int type) {
Boolean existResource = resourcesMapper.existResource(fullName, userId, type);
return BooleanUtils.isTrue(existResource);
}
/**
* update resource
* @param loginUser login user
* @param resourceId resource id
* @param name name
* @param desc description
* @param type resource type
* @param file resource file
* @return update result code
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> updateResource(User loginUser,
int resourceId,
String name,
String desc,
ResourceType type,
MultipartFile file) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
if (!hasPerm(loginUser, resource.getUserId())) {
putMsg(result, Status.USER_NO_OPERATION_PERM);
return result;
}
if (file == null && name.equals(resource.getAlias()) && desc.equals(resource.getDescription())) {
putMsg(result, Status.SUCCESS);
return result;
}
String originFullName = resource.getFullName();
String originResourceName = resource.getAlias();
String fullName = String.format("%s%s",originFullName.substring(0,originFullName.lastIndexOf("/") + 1),name);
if (!originResourceName.equals(name) && checkResourceExists(fullName, 0, type.ordinal())) {
logger.error("resource {} already exists, can't recreate", name);
putMsg(result, Status.RESOURCE_EXIST);
return result;
}
result = verifyFile(name, type, file);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
String originHdfsFileName = HadoopUtils.getHdfsFileName(resource.getType(),tenantCode,originFullName);
try {
if (!HadoopUtils.getInstance().exists(originHdfsFileName)) {
logger.error("{} not exist", originHdfsFileName);
putMsg(result,Status.RESOURCE_NOT_EXIST);
return result;
}
} catch (IOException e) {
logger.error(e.getMessage(),e);
throw new ServiceException(Status.HDFS_OPERATION_ERROR);
}
if (!resource.isDirectory()) {
String originSuffix = FileUtils.suffix(originFullName);
String suffix = FileUtils.suffix(fullName);
boolean suffixIsChanged = false;
if (StringUtils.isBlank(suffix) && StringUtils.isNotBlank(originSuffix)) {
suffixIsChanged = true;
}
if (StringUtils.isNotBlank(suffix) && !suffix.equals(originSuffix)) {
suffixIsChanged = true;
}
if (suffixIsChanged) {
Map<String, Object> columnMap = new HashMap<>();
columnMap.put("resources_id", resourceId);
List<ResourcesUser> resourcesUsers = resourceUserMapper.selectByMap(columnMap);
if (CollectionUtils.isNotEmpty(resourcesUsers)) {
List<Integer> userIds = resourcesUsers.stream().map(ResourcesUser::getUserId).collect(Collectors.toList());
List<User> users = userMapper.selectBatchIds(userIds);
String userNames = users.stream().map(User::getUserName).collect(Collectors.toList()).toString();
logger.error("resource is authorized to user {},suffix not allowed to be modified", userNames);
putMsg(result,Status.RESOURCE_IS_AUTHORIZED,userNames);
return result;
}
}
}
Date now = new Date();
resource.setAlias(name);
resource.setFileName(name);
resource.setFullName(fullName);
resource.setDescription(desc);
resource.setUpdateTime(now);
if (file != null) {
resource.setSize(file.getSize());
}
try {
resourcesMapper.updateById(resource);
if (resource.isDirectory()) {
List<Integer> childrenResource = listAllChildren(resource,false);
if (CollectionUtils.isNotEmpty(childrenResource)) {
String matcherFullName = Matcher.quoteReplacement(fullName);
List<Resource> childResourceList;
Integer[] childResIdArray = childrenResource.toArray(new Integer[childrenResource.size()]);
List<Resource> resourceList = resourcesMapper.listResourceByIds(childResIdArray);
childResourceList = resourceList.stream().map(t -> {
t.setFullName(t.getFullName().replaceFirst(originFullName, matcherFullName));
t.setUpdateTime(now);
return t;
}).collect(Collectors.toList());
resourcesMapper.batchUpdateResource(childResourceList);
if (ResourceType.UDF.equals(resource.getType())) {
List<UdfFunc> udfFuncs = udfFunctionMapper.listUdfByResourceId(childResIdArray);
if (CollectionUtils.isNotEmpty(udfFuncs)) {
udfFuncs = udfFuncs.stream().map(t -> {
t.setResourceName(t.getResourceName().replaceFirst(originFullName, matcherFullName));
t.setUpdateTime(now);
return t;
}).collect(Collectors.toList());
udfFunctionMapper.batchUpdateUdfFunc(udfFuncs);
}
}
}
} else if (ResourceType.UDF.equals(resource.getType())) {
List<UdfFunc> udfFuncs = udfFunctionMapper.listUdfByResourceId(new Integer[]{resourceId});
if (CollectionUtils.isNotEmpty(udfFuncs)) {
udfFuncs = udfFuncs.stream().map(t -> {
t.setResourceName(fullName);
t.setUpdateTime(now);
return t;
}).collect(Collectors.toList());
udfFunctionMapper.batchUpdateUdfFunc(udfFuncs);
}
}
putMsg(result, Status.SUCCESS);
Map<Object, Object> dataMap = new BeanMap(resource);
Map<String, Object> resultMap = new HashMap<>();
for (Map.Entry<Object, Object> entry: dataMap.entrySet()) {
if (!Constants.CLASS.equalsIgnoreCase(entry.getKey().toString())) {
resultMap.put(entry.getKey().toString(), entry.getValue());
}
}
result.setData(resultMap);
} catch (Exception e) {
logger.error(Status.UPDATE_RESOURCE_ERROR.getMsg(), e);
throw new ServiceException(Status.UPDATE_RESOURCE_ERROR);
}
if (originResourceName.equals(name) && file == null) {
return result;
}
if (file != null) {
if (!upload(loginUser, fullName, file, type)) {
logger.error("upload resource: {} file: {} failed.", name, RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.HDFS_OPERATION_ERROR);
throw new ServiceException(String.format("upload resource: %s file: %s failed.", name, file.getOriginalFilename()));
}
if (!fullName.equals(originFullName)) {
try {
HadoopUtils.getInstance().delete(originHdfsFileName,false);
} catch (IOException e) {
logger.error(e.getMessage(),e);
throw new ServiceException(String.format("delete resource: %s failed.", originFullName));
}
}
return result;
}
String destHdfsFileName = HadoopUtils.getHdfsFileName(resource.getType(),tenantCode,fullName);
try {
logger.info("start hdfs copy {} -> {}", originHdfsFileName, destHdfsFileName);
HadoopUtils.getInstance().copy(originHdfsFileName, destHdfsFileName, true, true);
} catch (Exception e) {
logger.error(MessageFormat.format("hdfs copy {0} -> {1} fail", originHdfsFileName, destHdfsFileName), e);
putMsg(result,Status.HDFS_COPY_FAIL);
throw new ServiceException(Status.HDFS_COPY_FAIL);
}
return result;
}
private Result<Object> verifyFile(String name, ResourceType type, MultipartFile file) {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (file != null) {
if (file.isEmpty()) {
logger.error("file is empty: {}", RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.RESOURCE_FILE_IS_EMPTY);
return result;
}
String fileSuffix = FileUtils.suffix(file.getOriginalFilename());
String nameSuffix = FileUtils.suffix(name);
if (!(StringUtils.isNotEmpty(fileSuffix) && fileSuffix.equalsIgnoreCase(nameSuffix))) {
logger.error("rename file suffix and original suffix must be consistent: {}", RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.RESOURCE_SUFFIX_FORBID_CHANGE);
return result;
}
if (Constants.UDF.equals(type.name()) && !JAR.equalsIgnoreCase(fileSuffix)) {
logger.error(Status.UDF_RESOURCE_SUFFIX_NOT_JAR.getMsg());
putMsg(result, Status.UDF_RESOURCE_SUFFIX_NOT_JAR);
return result;
}
if (file.getSize() > Constants.MAX_FILE_SIZE) {
logger.error("file size is too large: {}", RegexUtils.escapeNRT(file.getOriginalFilename()));
putMsg(result, Status.RESOURCE_SIZE_EXCEED_LIMIT);
return result;
}
}
return result;
}
/**
* query resources list paging
*
* @param loginUser login user
* @param type resource type
* @param searchVal search value
* @param pageNo page number
* @param pageSize page size
* @return resource list page
*/
@Override
public Map<String, Object> queryResourceListPaging(User loginUser, int directoryId, ResourceType type, String searchVal, Integer pageNo, Integer pageSize) {
HashMap<String, Object> result = new HashMap<>();
Page<Resource> page = new Page<>(pageNo, pageSize);
int userId = loginUser.getId();
if (isAdmin(loginUser)) {
userId = 0;
}
if (directoryId != -1) {
Resource directory = resourcesMapper.selectById(directoryId);
if (directory == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
}
List<Integer> resourcesIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 0);
IPage<Resource> resourceIPage = resourcesMapper.queryResourcePaging(page, userId, directoryId, type.ordinal(), searchVal,resourcesIds);
PageInfo<Resource> pageInfo = new PageInfo<>(pageNo, pageSize);
pageInfo.setTotalCount((int)resourceIPage.getTotal());
pageInfo.setLists(resourceIPage.getRecords());
result.put(Constants.DATA_LIST, pageInfo);
putMsg(result,Status.SUCCESS);
return result;
}
/**
* create directory
* @param loginUser login user
* @param fullName full name
* @param type resource type
* @param result Result
*/
private void createDirectory(User loginUser,String fullName,ResourceType type,Result<Object> result) {
String tenantCode = tenantMapper.queryById(loginUser.getTenantId()).getTenantCode();
String directoryName = HadoopUtils.getHdfsFileName(type,tenantCode,fullName);
String resourceRootPath = HadoopUtils.getHdfsDir(type,tenantCode);
try {
if (!HadoopUtils.getInstance().exists(resourceRootPath)) {
createTenantDirIfNotExists(tenantCode);
}
if (!HadoopUtils.getInstance().mkdir(directoryName)) {
logger.error("create resource directory {} of hdfs failed",directoryName);
putMsg(result,Status.HDFS_OPERATION_ERROR);
throw new ServiceException(String.format("create resource directory: %s failed.", directoryName));
}
} catch (Exception e) {
logger.error("create resource directory {} of hdfs failed",directoryName);
putMsg(result,Status.HDFS_OPERATION_ERROR);
throw new ServiceException(String.format("create resource directory: %s failed.", directoryName));
}
}
/**
* upload file to hdfs
*
* @param loginUser login user
* @param fullName full name
* @param file file
*/
private boolean upload(User loginUser, String fullName, MultipartFile file, ResourceType type) {
String fileSuffix = FileUtils.suffix(file.getOriginalFilename());
String nameSuffix = FileUtils.suffix(fullName);
if (!(StringUtils.isNotEmpty(fileSuffix) && fileSuffix.equalsIgnoreCase(nameSuffix))) {
return false;
}
String tenantCode = tenantMapper.queryById(loginUser.getTenantId()).getTenantCode();
String localFilename = FileUtils.getUploadFilename(tenantCode, UUID.randomUUID().toString());
String hdfsFilename = HadoopUtils.getHdfsFileName(type,tenantCode,fullName);
String resourcePath = HadoopUtils.getHdfsDir(type,tenantCode);
try {
if (!HadoopUtils.getInstance().exists(resourcePath)) {
createTenantDirIfNotExists(tenantCode);
}
org.apache.dolphinscheduler.api.utils.FileUtils.copyFile(file, localFilename);
HadoopUtils.getInstance().copyLocalToHdfs(localFilename, hdfsFilename, true, true);
} catch (Exception e) {
logger.error(e.getMessage(), e);
return false;
}
return true;
}
/**
* query resource list
*
* @param loginUser login user
* @param type resource type
* @return resource list
*/
@Override
public Map<String, Object> queryResourceList(User loginUser, ResourceType type) {
Map<String, Object> result = new HashMap<>();
List<Resource> allResourceList = queryAuthoredResourceList(loginUser, type);
Visitor resourceTreeVisitor = new ResourceTreeVisitor(allResourceList);
result.put(Constants.DATA_LIST, resourceTreeVisitor.visit().getChildren());
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query resource list by program type
*
* @param loginUser login user
* @param type resource type
* @return resource list
*/
@Override
public Map<String, Object> queryResourceByProgramType(User loginUser, ResourceType type, ProgramType programType) {
Map<String, Object> result = new HashMap<>();
List<Resource> allResourceList = queryAuthoredResourceList(loginUser, type);
String suffix = ".jar";
if (programType != null) {
switch (programType) {
case JAVA:
case SCALA:
break;
case PYTHON:
suffix = ".py";
break;
default:
}
}
List<Resource> resources = new ResourceFilter(suffix, new ArrayList<>(allResourceList)).filter();
Visitor resourceTreeVisitor = new ResourceTreeVisitor(resources);
result.put(Constants.DATA_LIST, resourceTreeVisitor.visit().getChildren());
putMsg(result, Status.SUCCESS);
return result;
}
/**
* delete resource
*
* @param loginUser login user
* @param resourceId resource id
* @return delete result code
* @throws IOException exception
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> delete(User loginUser, int resourceId) throws IOException {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
if (!hasPerm(loginUser, resource.getUserId())) {
putMsg(result, Status.USER_NO_OPERATION_PERM);
return result;
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
List<Map<String, Object>> list = processDefinitionMapper.listResources();
Map<Integer, Set<Long>> resourceProcessMap = ResourceProcessDefinitionUtils.getResourceProcessDefinitionMap(list);
Set<Integer> resourceIdSet = resourceProcessMap.keySet();
List<Integer> allChildren = listAllChildren(resource,true);
Integer[] needDeleteResourceIdArray = allChildren.toArray(new Integer[allChildren.size()]);
if (resource.getType() == (ResourceType.UDF)) {
List<UdfFunc> udfFuncs = udfFunctionMapper.listUdfByResourceId(needDeleteResourceIdArray);
if (CollectionUtils.isNotEmpty(udfFuncs)) {
logger.error("can't be deleted,because it is bound by UDF functions:{}", udfFuncs);
putMsg(result,Status.UDF_RESOURCE_IS_BOUND,udfFuncs.get(0).getFuncName());
return result;
}
}
if (resourceIdSet.contains(resource.getPid())) {
logger.error("can't be deleted,because it is used of process definition");
putMsg(result, Status.RESOURCE_IS_USED);
return result;
}
resourceIdSet.retainAll(allChildren);
if (CollectionUtils.isNotEmpty(resourceIdSet)) {
logger.error("can't be deleted,because it is used of process definition");
for (Integer resId : resourceIdSet) {
logger.error("resource id:{} is used of process definition {}",resId,resourceProcessMap.get(resId));
}
putMsg(result, Status.RESOURCE_IS_USED);
return result;
}
String hdfsFilename = HadoopUtils.getHdfsFileName(resource.getType(), tenantCode, resource.getFullName());
resourcesMapper.deleteIds(needDeleteResourceIdArray);
resourceUserMapper.deleteResourceUserArray(0, needDeleteResourceIdArray);
HadoopUtils.getInstance().delete(hdfsFilename, true);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* verify resource by name and type
* @param loginUser login user
* @param fullName resource full name
* @param type resource type
* @return true if the resource name not exists, otherwise return false
*/
@Override
public Result<Object> verifyResourceName(String fullName, ResourceType type, User loginUser) {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (checkResourceExists(fullName, 0, type.ordinal())) {
logger.error("resource type:{} name:{} has exist, can't create again.", type, RegexUtils.escapeNRT(fullName));
putMsg(result, Status.RESOURCE_EXIST);
} else {
Tenant tenant = tenantMapper.queryById(loginUser.getTenantId());
if (tenant != null) {
String tenantCode = tenant.getTenantCode();
try {
String hdfsFilename = HadoopUtils.getHdfsFileName(type,tenantCode,fullName);
if (HadoopUtils.getInstance().exists(hdfsFilename)) {
logger.error("resource type:{} name:{} has exist in hdfs {}, can't create again.", type, RegexUtils.escapeNRT(fullName), hdfsFilename);
putMsg(result, Status.RESOURCE_FILE_EXIST,hdfsFilename);
}
} catch (Exception e) {
logger.error(e.getMessage(),e);
putMsg(result,Status.HDFS_OPERATION_ERROR);
}
} else {
putMsg(result,Status.TENANT_NOT_EXIST);
}
}
return result;
}
/**
* verify resource by full name or pid and type
* @param fullName resource full name
* @param id resource id
* @param type resource type
* @return true if the resource full name or pid not exists, otherwise return false
*/
@Override
public Result<Object> queryResource(String fullName, Integer id, ResourceType type) {
Result<Object> result = new Result<>();
if (StringUtils.isBlank(fullName) && id == null) {
putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR);
return result;
}
if (StringUtils.isNotBlank(fullName)) {
List<Resource> resourceList = resourcesMapper.queryResource(fullName,type.ordinal());
if (CollectionUtils.isEmpty(resourceList)) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
putMsg(result, Status.SUCCESS);
result.setData(resourceList.get(0));
} else {
Resource resource = resourcesMapper.selectById(id);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
Resource parentResource = resourcesMapper.selectById(resource.getPid());
if (parentResource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
putMsg(result, Status.SUCCESS);
result.setData(parentResource);
}
return result;
}
/**
* view resource file online
*
* @param resourceId resource id
* @param skipLineNum skip line number
* @param limit limit
* @return resource content
*/
@Override
public Result<Object> readResource(int resourceId, int skipLineNum, int limit) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
String nameSuffix = FileUtils.suffix(resource.getAlias());
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support view, resource id {}", nameSuffix, resourceId);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
String hdfsFileName = HadoopUtils.getHdfsResourceFileName(tenantCode, resource.getFullName());
logger.info("resource hdfs path is {}", hdfsFileName);
try {
if (HadoopUtils.getInstance().exists(hdfsFileName)) {
List<String> content = HadoopUtils.getInstance().catFile(hdfsFileName, skipLineNum, limit);
putMsg(result, Status.SUCCESS);
Map<String, Object> map = new HashMap<>();
map.put(ALIAS, resource.getAlias());
map.put(CONTENT, String.join("\n", content));
result.setData(map);
} else {
logger.error("read file {} not exist in hdfs", hdfsFileName);
putMsg(result, Status.RESOURCE_FILE_NOT_EXIST,hdfsFileName);
}
} catch (Exception e) {
logger.error("Resource {} read failed", hdfsFileName, e);
putMsg(result, Status.HDFS_OPERATION_ERROR);
}
return result;
}
/**
* create resource file online
*
* @param loginUser login user
* @param type resource type
* @param fileName file name
* @param fileSuffix file suffix
* @param desc description
* @param content content
* @param pid pid
* @param currentDir current directory
* @return create result code
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> onlineCreateResource(User loginUser, ResourceType type, String fileName, String fileSuffix, String desc, String content,int pid,String currentDir) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
String nameSuffix = fileSuffix.trim();
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support create", nameSuffix);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String name = fileName.trim() + "." + nameSuffix;
String fullName = currentDir.equals("/") ? String.format("%s%s",currentDir,name) : String.format("%s/%s",currentDir,name);
result = verifyResource(loginUser, type, fullName, pid);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Date now = new Date();
Resource resource = new Resource(pid,name,fullName,false,desc,name,loginUser.getId(),type,content.getBytes().length,now,now);
resourcesMapper.insert(resource);
putMsg(result, Status.SUCCESS);
Map<Object, Object> dataMap = new BeanMap(resource);
Map<String, Object> resultMap = new HashMap<>();
for (Map.Entry<Object, Object> entry: dataMap.entrySet()) {
if (!Constants.CLASS.equalsIgnoreCase(entry.getKey().toString())) {
resultMap.put(entry.getKey().toString(), entry.getValue());
}
}
result.setData(resultMap);
String tenantCode = tenantMapper.queryById(loginUser.getTenantId()).getTenantCode();
result = uploadContentToHdfs(fullName, tenantCode, content);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
throw new ServiceException(result.getMsg());
}
return result;
}
private Result<Object> checkResourceUploadStartupState() {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (!PropertyUtils.getResUploadStartupState()) {
logger.error("resource upload startup state: {}", PropertyUtils.getResUploadStartupState());
putMsg(result, Status.HDFS_NOT_STARTUP);
return result;
}
return result;
}
private Result<Object> verifyResource(User loginUser, ResourceType type, String fullName, int pid) {
Result<Object> result = verifyResourceName(fullName, type, loginUser);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
return verifyPid(loginUser, pid);
}
private Result<Object> verifyPid(User loginUser, int pid) {
Result<Object> result = new Result<>();
putMsg(result, Status.SUCCESS);
if (pid != -1) {
Resource parentResource = resourcesMapper.selectById(pid);
if (parentResource == null) {
putMsg(result, Status.PARENT_RESOURCE_NOT_EXIST);
return result;
}
if (!hasPerm(loginUser, parentResource.getUserId())) {
putMsg(result, Status.USER_NO_OPERATION_PERM);
return result;
}
}
return result;
}
/**
* updateProcessInstance resource
*
* @param resourceId resource id
* @param content content
* @return update result code
*/
@Override
@Transactional(rollbackFor = Exception.class)
public Result<Object> updateResourceContent(int resourceId, String content) {
Result<Object> result = checkResourceUploadStartupState();
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
return result;
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
logger.error("read file not exist, resource id {}", resourceId);
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
String nameSuffix = FileUtils.suffix(resource.getAlias());
String resourceViewSuffixs = FileUtils.getResourceViewSuffixs();
if (StringUtils.isNotEmpty(resourceViewSuffixs)) {
List<String> strList = Arrays.asList(resourceViewSuffixs.split(","));
if (!strList.contains(nameSuffix)) {
logger.error("resource suffix {} not support updateProcessInstance, resource id {}", nameSuffix, resourceId);
putMsg(result, Status.RESOURCE_SUFFIX_NOT_SUPPORT_VIEW);
return result;
}
}
String tenantCode = getTenantCode(resource.getUserId(),result);
if (StringUtils.isEmpty(tenantCode)) {
return result;
}
resource.setSize(content.getBytes().length);
resource.setUpdateTime(new Date());
resourcesMapper.updateById(resource);
result = uploadContentToHdfs(resource.getFullName(), tenantCode, content);
if (!result.getCode().equals(Status.SUCCESS.getCode())) {
throw new ServiceException(result.getMsg());
}
return result;
}
/**
* @param resourceName resource name
* @param tenantCode tenant code
* @param content content
* @return result
*/
private Result<Object> uploadContentToHdfs(String resourceName, String tenantCode, String content) {
Result<Object> result = new Result<>();
String localFilename = "";
String hdfsFileName = "";
try {
localFilename = FileUtils.getUploadFilename(tenantCode, UUID.randomUUID().toString());
if (!FileUtils.writeContent2File(content, localFilename)) {
logger.error("write file {} failed, content is {}", localFilename, RegexUtils.escapeNRT(content));
putMsg(result, Status.RESOURCE_NOT_EXIST);
return result;
}
hdfsFileName = HadoopUtils.getHdfsResourceFileName(tenantCode, resourceName);
String resourcePath = HadoopUtils.getHdfsResDir(tenantCode);
logger.info("resource hdfs path is {}, resource dir is {}", hdfsFileName, resourcePath);
HadoopUtils hadoopUtils = HadoopUtils.getInstance();
if (!hadoopUtils.exists(resourcePath)) {
createTenantDirIfNotExists(tenantCode);
}
if (hadoopUtils.exists(hdfsFileName)) {
hadoopUtils.delete(hdfsFileName, false);
}
hadoopUtils.copyLocalToHdfs(localFilename, hdfsFileName, true, true);
} catch (Exception e) {
logger.error(e.getMessage(), e);
result.setCode(Status.HDFS_OPERATION_ERROR.getCode());
result.setMsg(String.format("copy %s to hdfs %s fail", localFilename, hdfsFileName));
return result;
}
putMsg(result, Status.SUCCESS);
return result;
}
/**
* download file
*
* @param resourceId resource id
* @return resource content
* @throws IOException exception
*/
@Override
public org.springframework.core.io.Resource downloadResource(int resourceId) throws IOException {
if (!PropertyUtils.getResUploadStartupState()) {
logger.error("resource upload startup state: {}", PropertyUtils.getResUploadStartupState());
throw new ServiceException("hdfs not startup");
}
Resource resource = resourcesMapper.selectById(resourceId);
if (resource == null) {
logger.error("download file not exist, resource id {}", resourceId);
return null;
}
if (resource.isDirectory()) {
logger.error("resource id {} is directory,can't download it", resourceId);
throw new ServiceException("can't download directory");
}
int userId = resource.getUserId();
User user = userMapper.selectById(userId);
if (user == null) {
logger.error("user id {} not exists", userId);
throw new ServiceException(String.format("resource owner id %d not exist",userId));
}
Tenant tenant = tenantMapper.queryById(user.getTenantId());
if (tenant == null) {
logger.error("tenant id {} not exists", user.getTenantId());
throw new ServiceException(String.format("The tenant id %d of resource owner not exist",user.getTenantId()));
}
String tenantCode = tenant.getTenantCode();
String hdfsFileName = HadoopUtils.getHdfsFileName(resource.getType(), tenantCode, resource.getFullName());
String localFileName = FileUtils.getDownloadFilename(resource.getAlias());
logger.info("resource hdfs path is {}, download local filename is {}", hdfsFileName, localFileName);
HadoopUtils.getInstance().copyHdfsToLocal(hdfsFileName, localFileName, false, true);
return org.apache.dolphinscheduler.api.utils.FileUtils.file2Resource(localFileName);
}
/**
* list all file
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> authorizeResourceTree(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
List<Resource> resourceList = resourcesMapper.queryResourceExceptUserId(userId);
List<ResourceComponent> list;
if (CollectionUtils.isNotEmpty(resourceList)) {
Visitor visitor = new ResourceTreeVisitor(resourceList);
list = visitor.visit().getChildren();
} else {
list = new ArrayList<>(0);
}
result.put(Constants.DATA_LIST, list);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* unauthorized file
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> unauthorizedFile(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
List<Resource> resourceList = resourcesMapper.queryResourceExceptUserId(userId);
List<Resource> list;
if (resourceList != null && !resourceList.isEmpty()) {
Set<Resource> resourceSet = new HashSet<>(resourceList);
List<Resource> authedResourceList = queryResourceList(userId, Constants.AUTHORIZE_WRITABLE_PERM);
getAuthorizedResourceList(resourceSet, authedResourceList);
list = new ArrayList<>(resourceSet);
} else {
list = new ArrayList<>(0);
}
Visitor visitor = new ResourceTreeVisitor(list);
result.put(Constants.DATA_LIST, visitor.visit().getChildren());
putMsg(result, Status.SUCCESS);
return result;
}
/**
* unauthorized udf function
*
* @param loginUser login user
* @param userId user id
* @return unauthorized result code
*/
@Override
public Map<String, Object> unauthorizedUDFFunction(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
List<UdfFunc> udfFuncList = udfFunctionMapper.queryUdfFuncExceptUserId(userId);
List<UdfFunc> resultList = new ArrayList<>();
Set<UdfFunc> udfFuncSet;
if (CollectionUtils.isNotEmpty(udfFuncList)) {
udfFuncSet = new HashSet<>(udfFuncList);
List<UdfFunc> authedUDFFuncList = udfFunctionMapper.queryAuthedUdfFunc(userId);
getAuthorizedResourceList(udfFuncSet, authedUDFFuncList);
resultList = new ArrayList<>(udfFuncSet);
}
result.put(Constants.DATA_LIST, resultList);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* authorized udf function
*
* @param loginUser login user
* @param userId user id
* @return authorized result code
*/
@Override
public Map<String, Object> authorizedUDFFunction(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
List<UdfFunc> udfFuncs = udfFunctionMapper.queryAuthedUdfFunc(userId);
result.put(Constants.DATA_LIST, udfFuncs);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* authorized file
*
* @param loginUser login user
* @param userId user id
* @return authorized result
*/
@Override
public Map<String, Object> authorizedFile(User loginUser, Integer userId) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
List<Resource> authedResources = queryResourceList(userId, Constants.AUTHORIZE_WRITABLE_PERM);
Visitor visitor = new ResourceTreeVisitor(authedResources);
String visit = JSONUtils.toJsonString(visitor.visit(), SerializationFeature.ORDER_MAP_ENTRIES_BY_KEYS);
logger.info(visit);
String jsonTreeStr = JSONUtils.toJsonString(visitor.visit().getChildren(), SerializationFeature.ORDER_MAP_ENTRIES_BY_KEYS);
logger.info(jsonTreeStr);
result.put(Constants.DATA_LIST, visitor.visit().getChildren());
putMsg(result,Status.SUCCESS);
return result;
}
/**
* get authorized resource list
*
* @param resourceSet resource set
* @param authedResourceList authorized resource list
*/
private void getAuthorizedResourceList(Set<?> resourceSet, List<?> authedResourceList) {
Set<?> authedResourceSet;
if (CollectionUtils.isNotEmpty(authedResourceList)) {
authedResourceSet = new HashSet<>(authedResourceList);
resourceSet.removeAll(authedResourceSet);
}
}
/**
* get tenantCode by UserId
*
* @param userId user id
* @param result return result
* @return tenant code
*/
private String getTenantCode(int userId,Result<Object> result) {
User user = userMapper.selectById(userId);
if (user == null) {
logger.error("user {} not exists", userId);
putMsg(result, Status.USER_NOT_EXIST,userId);
return null;
}
Tenant tenant = tenantMapper.queryById(user.getTenantId());
if (tenant == null) {
logger.error("tenant not exists");
putMsg(result, Status.TENANT_NOT_EXIST);
return null;
}
return tenant.getTenantCode();
}
/**
* list all children id
* @param resource resource
* @param containSelf whether add self to children list
* @return all children id
*/
List<Integer> listAllChildren(Resource resource,boolean containSelf) {
List<Integer> childList = new ArrayList<>();
if (resource.getId() != -1 && containSelf) {
childList.add(resource.getId());
}
if (resource.isDirectory()) {
listAllChildren(resource.getId(),childList);
}
return childList;
}
/**
* list all children id
* @param resourceId resource id
* @param childList child list
*/
void listAllChildren(int resourceId,List<Integer> childList) {
List<Integer> children = resourcesMapper.listChildren(resourceId);
for (int childId : children) {
childList.add(childId);
listAllChildren(childId, childList);
}
}
/**
* query authored resource list (own and authorized)
* @param loginUser login user
* @param type ResourceType
* @return all authored resource list
*/
private List<Resource> queryAuthoredResourceList(User loginUser, ResourceType type) {
List<Resource> relationResources;
int userId = loginUser.getId();
if (isAdmin(loginUser)) {
userId = 0;
relationResources = new ArrayList<>();
} else {
relationResources = queryResourceList(userId, 0);
}
List<Resource> ownResourceList = resourcesMapper.queryResourceListAuthored(userId, type.ordinal());
ownResourceList.addAll(relationResources);
return ownResourceList;
}
/**
* query resource list by userId and perm
* @param userId userId
* @param perm perm
* @return resource list
*/
private List<Resource> queryResourceList(Integer userId, int perm) {
List<Integer> resIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, perm);
return CollectionUtils.isEmpty(resIds) ? new ArrayList<>() : resourcesMapper.queryResourceListById(resIds);
}
}

File: dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/FileUtils.java

/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static org.apache.dolphinscheduler.common.Constants.DATA_BASEDIR_PATH;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_VIEW_SUFFIXS;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE;
import static org.apache.dolphinscheduler.common.Constants.YYYYMMDDHHMMSS;
import org.apache.commons.io.Charsets;
import org.apache.commons.io.IOUtils;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.StringReader;
import java.io.UnsupportedEncodingException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.charset.UnsupportedCharsetException;
import java.util.Optional;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* file utils
*/
public class FileUtils {
public static final Logger logger = LoggerFactory.getLogger(FileUtils.class);
public static final String DATA_BASEDIR = PropertyUtils.getString(DATA_BASEDIR_PATH, "/tmp/dolphinscheduler");
public static final ThreadLocal<Logger> taskLoggerThreadLocal = new ThreadLocal<>();
private FileUtils() {
throw new UnsupportedOperationException("Construct FileUtils");
}
/**
* get file suffix
*
* @param filename file name
* @return file suffix
*/
public static String suffix(String filename) {
String fileSuffix = "";
if (StringUtils.isNotEmpty(filename)) {
int lastIndex = filename.lastIndexOf('.');
if (lastIndex > 0) {
fileSuffix = filename.substring(lastIndex + 1);
}
}
return fileSuffix;
}
/**
* get download file absolute path and name
*
* @param filename file name
* @return download file name
*/
public static String getDownloadFilename(String filename) {
String fileName = String.format("%s/download/%s/%s", DATA_BASEDIR, DateUtils.getCurrentTime(YYYYMMDDHHMMSS), filename);
File file = new File(fileName);
if (!file.getParentFile().exists()) {
file.getParentFile().mkdirs();
}
return fileName;
}
/**
* get upload file absolute path and name
*
* @param tenantCode tenant code
* @param filename file name
* @return local file path
*/
public static String getUploadFilename(String tenantCode, String filename) {
String fileName = String.format("%s/%s/resources/%s", DATA_BASEDIR, tenantCode, filename);
File file = new File(fileName);
if (!file.getParentFile().exists()) {
file.getParentFile().mkdirs();
}
return fileName;
}
/**
* directory of process execution
*
* @param projectCode project code
* @param processDefineCode process definition Code
* @param processDefineVersion process definition version
* @param processInstanceId process instance id
* @param taskInstanceId task instance id
* @return directory of process execution
*/
public static String getProcessExecDir(long projectCode, long processDefineCode, int processDefineVersion, int processInstanceId, int taskInstanceId) {
String fileName = String.format("%s/exec/process/%d/%s/%d/%d", DATA_BASEDIR,
projectCode, processDefineCode + "_" + processDefineVersion, processInstanceId, taskInstanceId);
File file = new File(fileName);
if (!file.getParentFile().exists()) {
file.getParentFile().mkdirs();
}
return fileName;
}
/**
* @return get suffixes for resource files that support online viewing
*/
public static String getResourceViewSuffixs() {
return PropertyUtils.getString(RESOURCE_VIEW_SUFFIXS, RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE);
}
/**
* create directory if absent
*
* @param execLocalPath execute local path
* @throws IOException errors
*/
public static void createWorkDirIfAbsent(String execLocalPath) throws IOException {
File execLocalPathFile = new File(execLocalPath);
if (execLocalPathFile.exists()) {
org.apache.commons.io.FileUtils.forceDelete(execLocalPathFile);
}
org.apache.commons.io.FileUtils.forceMkdir(execLocalPathFile);
String mkdirLog = "create dir success " + execLocalPath;
LoggerUtils.logInfo(Optional.ofNullable(logger), mkdirLog);
LoggerUtils.logInfo(Optional.ofNullable(taskLoggerThreadLocal.get()), mkdirLog);
}
/**
* write content to file ,if parent path not exists, it will do one's utmost to mkdir
*
* @param content content
* @param filePath target file path
* @return true if write success
*/
public static boolean writeContent2File(String content, String filePath) {
boolean flag = true;
BufferedReader bufferedReader = null;
BufferedWriter bufferedWriter = null;
try {
File distFile = new File(filePath);
if (!distFile.getParentFile().exists() && !distFile.getParentFile().mkdirs()) {
FileUtils.logger.error("mkdir parent failed");
return false;
}
bufferedReader = new BufferedReader(new StringReader(content));
bufferedWriter = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(distFile), StandardCharsets.UTF_8));
char[] buf = new char[1024];
int len;
while ((len = bufferedReader.read(buf)) != -1) {
bufferedWriter.write(buf, 0, len);
}
bufferedWriter.flush();
bufferedReader.close();
bufferedWriter.close();
} catch (IOException e) {
FileUtils.logger.error(e.getMessage(), e);
flag = false;
return flag;
} finally {
IOUtils.closeQuietly(bufferedWriter);
IOUtils.closeQuietly(bufferedReader);
}
return flag;
}
/**
* Writes a String to a file creating the file if it does not exist.
* <p>
* NOTE: As from v1.3, the parent directories of the file will be created
* if they do not exist.
*
* @param file the file to write
* @param data the content to write to the file
* @param encoding the encoding to use, {@code null} means platform default
* @throws IOException in case of an I/O error
* @throws java.io.UnsupportedEncodingException if the encoding is not supported by the VM
* @since 2.4
*/
public static void writeStringToFile(File file, String data, Charset encoding) throws IOException {
writeStringToFile(file, data, encoding, false);
}
/**
* Writes a String to a file creating the file if it does not exist.
* <p>
* NOTE: As from v1.3, the parent directories of the file will be created
* if they do not exist.
*
* @param file the file to write
* @param data the content to write to the file
* @param encoding the encoding to use, {@code null} means platform default
* @throws IOException in case of an I/O error
* @throws java.io.UnsupportedEncodingException if the encoding is not supported by the VM
*/
public static void writeStringToFile(File file, String data, String encoding) throws IOException {
writeStringToFile(file, data, encoding, false);
}
/**
* Writes a String to a file creating the file if it does not exist.
*
* @param file the file to write
* @param data the content to write to the file
* @param encoding the encoding to use, {@code null} means platform default
* @param append if {@code true}, then the String will be added to the
* end of the file rather than overwriting
* @throws IOException in case of an I/O error
* @since 2.3
*/
public static void writeStringToFile(File file, String data, Charset encoding, boolean append) throws IOException {
OutputStream out = null;
try {
out = openOutputStream(file, append);
IOUtils.write(data, out, encoding);
out.close();
} finally {
IOUtils.closeQuietly(out);
}
}
/**
* Writes a String to a file creating the file if it does not exist.
*
* @param file the file to write
* @param data the content to write to the file
* @param encoding the encoding to use, {@code null} means platform default
* @param append if {@code true}, then the String will be added to the
* end of the file rather than overwriting
* @throws IOException in case of an I/O error
* @throws UnsupportedCharsetException thrown instead of {@link UnsupportedEncodingException} in version 2.2 if the encoding is not
* supported by the VM
* @since 2.1
*/
public static void writeStringToFile(File file, String data, String encoding, boolean append) throws IOException {
writeStringToFile(file, data, Charsets.toCharset(encoding), append);
}
/**
* Writes a String to a file creating the file if it does not exist using the default encoding for the VM.
*
* @param file the file to write
* @param data the content to write to the file
* @throws IOException in case of an I/O error
*/
public static void writeStringToFile(File file, String data) throws IOException {
writeStringToFile(file, data, Charset.defaultCharset(), false);
}
/**
* Writes a String to a file creating the file if it does not exist using the default encoding for the VM.
*
* @param file the file to write
* @param data the content to write to the file
* @param append if {@code true}, then the String will be added to the
* end of the file rather than overwriting
* @throws IOException in case of an I/O error
* @since 2.1
*/
public static void writeStringToFile(File file, String data, boolean append) throws IOException {
writeStringToFile(file, data, Charset.defaultCharset(), append);
}
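The overwrite-vs-append contract described in these overloads can be seen with a small standalone stand-in (a plain `FileOutputStream` rather than the class above, so the example is self-contained):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

public class WriteStringDemo {
    // standalone stand-in for the writeStringToFile overloads:
    // append=false overwrites the file, append=true adds to its end
    static void writeString(File f, String data, boolean append) throws IOException {
        try (FileOutputStream out = new FileOutputStream(f, append)) {
            out.write(data.getBytes());
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("demo", ".txt");
        writeString(f, "one", false);
        writeString(f, "two", false);   // overwrites "one"
        System.out.println(new String(Files.readAllBytes(f.toPath())));
        writeString(f, "-three", true); // appends to the existing content
        System.out.println(new String(Files.readAllBytes(f.toPath())));
    }
}
```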
/**
* Opens a {@link FileOutputStream} for the specified file, checking and
* creating the parent directory if it does not exist.
* <p>
* At the end of the method either the stream will be successfully opened,
* or an exception will have been thrown.
* <p>
* The parent directory will be created if it does not exist.
* The file will be created if it does not exist.
* An exception is thrown if the file object exists but is a directory.
* An exception is thrown if the file exists but cannot be written to.
* An exception is thrown if the parent directory cannot be created.
*
* @param file the file to open for output, must not be {@code null}
* @return a new {@link FileOutputStream} for the specified file
* @throws IOException if the file object is a directory
* @throws IOException if the file cannot be written to
* @throws IOException if a parent directory needs creating but that fails
* @since 1.3
*/
public static FileOutputStream openOutputStream(File file) throws IOException {
return openOutputStream(file, false);
}
/**
* Opens a {@link FileOutputStream} for the specified file, checking and
* creating the parent directory if it does not exist.
* <p>
* At the end of the method either the stream will be successfully opened,
* or an exception will have been thrown.
* <p>
* The parent directory will be created if it does not exist.
* The file will be created if it does not exist.
* An exception is thrown if the file object exists but is a directory.
* An exception is thrown if the file exists but cannot be written to.
* An exception is thrown if the parent directory cannot be created.
*
* @param file the file to open for output, must not be {@code null}
* @param append if {@code true}, then bytes will be added to the
* end of the file rather than overwriting
* @return a new {@link FileOutputStream} for the specified file
* @throws IOException if the file object is a directory
* @throws IOException if the file cannot be written to
* @throws IOException if a parent directory needs creating but that fails
* @since 2.1
*/
public static FileOutputStream openOutputStream(File file, boolean append) throws IOException {
if (file.exists()) {
if (file.isDirectory()) {
throw new IOException("File '" + file + "' exists but is a directory");
}
if (!file.canWrite()) {
throw new IOException("File '" + file + "' cannot be written to");
}
} else {
File parent = file.getParentFile();
if (parent != null && !parent.mkdirs() && !parent.isDirectory()) {
throw new IOException("Directory '" + parent + "' could not be created");
}
}
return new FileOutputStream(file, append);
}
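A quick standalone check of the `openOutputStream` contract documented above — missing parent directories are created on demand and an existing directory is rejected. The helper below reproduces the method body so the example runs without this class on the classpath:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;

public class OpenOutputStreamDemo {
    // reproduces the openOutputStream body above, kept self-contained
    static FileOutputStream open(File file, boolean append) throws IOException {
        if (file.exists()) {
            if (file.isDirectory()) {
                throw new IOException("File '" + file + "' exists but is a directory");
            }
            if (!file.canWrite()) {
                throw new IOException("File '" + file + "' cannot be written to");
            }
        } else {
            File parent = file.getParentFile();
            if (parent != null && !parent.mkdirs() && !parent.isDirectory()) {
                throw new IOException("Directory '" + parent + "' could not be created");
            }
        }
        return new FileOutputStream(file, append);
    }

    public static void main(String[] args) throws IOException {
        File base = Files.createTempDirectory("demo").toFile();
        File nested = new File(base, "a/b/out.txt");
        try (FileOutputStream out = open(nested, false)) {
            out.write("hello".getBytes());
        }
        System.out.println(nested.exists()); // parent dirs a/b were created on demand
        try {
            open(base, false); // a directory cannot be opened as an output file
        } catch (IOException e) {
            System.out.println("directory rejected");
        }
    }
}
```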
/**
* deletes a directory recursively
*
* @param dir directory
* @throws IOException in case deletion is unsuccessful
*/
public static void deleteDir(String dir) throws IOException {
org.apache.commons.io.FileUtils.deleteDirectory(new File(dir));
}
/**
* Deletes a file. If file is a directory, delete it and all sub-directories.
* <p>
* The difference between File.delete() and this method are:
* <ul>
* <li>A directory to be deleted does not have to be empty.</li>
* <li>You get exceptions when a file or directory cannot be deleted.
* (java.io.File methods returns a boolean)</li>
* </ul>
*
* @param filename file name
* @throws IOException in case deletion is unsuccessful
*/
public static void deleteFile(String filename) throws IOException {
org.apache.commons.io.FileUtils.forceDelete(new File(filename));
}
/**
* Gets all the parent subdirectories of the parentDir directory
*
* @param parentDir parent dir
* @return all dirs
*/
public static File[] getAllDir(String parentDir) {
if (parentDir == null || "".equals(parentDir)) {
throw new RuntimeException("parentDir can not be empty");
}
File file = new File(parentDir);
if (!file.exists() || !file.isDirectory()) {
throw new RuntimeException("parentDir not exist, or is not a directory:" + parentDir);
}
return file.listFiles(File::isDirectory);
}
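A short standalone check of the `getAllDir` behavior documented above — only direct subdirectories come back, plain files are filtered out. The helper reproduces the method's filter (the null/empty-argument check is omitted for brevity):

```java
import java.io.File;
import java.nio.file.Files;
import java.util.Arrays;

public class GetAllDirDemo {
    // same filter as getAllDir above: list only the direct subdirectories
    static File[] allDirs(String parentDir) {
        File file = new File(parentDir);
        if (!file.exists() || !file.isDirectory()) {
            throw new RuntimeException("parentDir not exist, or is not a directory:" + parentDir);
        }
        return file.listFiles(File::isDirectory);
    }

    public static void main(String[] args) throws Exception {
        File base = Files.createTempDirectory("demo").toFile();
        new File(base, "sub1").mkdir();
        new File(base, "sub2").mkdir();
        Files.createFile(new File(base, "plain.txt").toPath()); // not a directory
        File[] dirs = allDirs(base.getPath());
        Arrays.sort(dirs); // deterministic order for the printout
        for (File d : dirs) {
            System.out.println(d.getName()); // plain.txt is filtered out
        }
    }
}
```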
/**
* Get Content
*
* @param inputStream input stream
* @return string of input stream
*/
public static String readFile2Str(InputStream inputStream) {
try {
ByteArrayOutputStream output = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
int length;
while ((length = inputStream.read(buffer)) != -1) {
output.write(buffer, 0, length);
}
return output.toString();
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException(e);
}
}
}
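The `readFile2Str` loop above can be exercised standalone; this stand-in reproduces the same drain-into-`ByteArrayOutputStream` pattern with an in-memory stream:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class ReadStreamDemo {
    // standalone version of the readFile2Str loop: drain an InputStream
    // into a String through a fixed-size buffer
    static String readToString(InputStream in) throws Exception {
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int length;
        while ((length = in.read(buffer)) != -1) {
            output.write(buffer, 0, length);
        }
        return output.toString();
    }

    public static void main(String[] args) throws Exception {
        InputStream in = new ByteArrayInputStream("hello stream".getBytes());
        System.out.println(readToString(in));
    }
}
```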
status: closed | repo_name: apache/dolphinscheduler | repo_url: https://github.com/apache/dolphinscheduler | issue_id: 5487 | title: [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class

DolphinScheduler has already removed the data quality check feature, and the database settings it relied on have also been removed from the configuration files. However, the code still contains TaskRecordDao queries for data quality, and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'" clearly no longer exists in the configured default database. Yet the important abstract class AbstractTask still carries the TaskRecordDao data quality check logic; it should be removed to keep this core abstract class clean:
public void after() {
    if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
        // task recor flat : if true , start up qianfan
        if (TaskRecordDao.getTaskRecordFlag()
                && TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
            AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
            // replace placeholder
            Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
                    taskExecutionContext.getDefinedParams(),
                    params.getLocalParametersMap(),
                    CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
                    taskExecutionContext.getScheduleTime());
            if (paramsMap != null && !paramsMap.isEmpty()
                    && paramsMap.containsKey("v_proc_date")) {
                String vProcDate = paramsMap.get("v_proc_date").getValue();
                if (!StringUtils.isEmpty(vProcDate)) {
                    TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
                    logger.info("task record status : {}", taskRecordState);
                    if (taskRecordState == TaskRecordStatus.FAILURE) {
                        setExitStatusCode(Constants.EXIT_CODE_FAILURE);
                    }
                }
            }
        }
    } else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
        setExitStatusCode(Constants.EXIT_CODE_KILL);
    } else {
        setExitStatusCode(Constants.EXIT_CODE_FAILURE);
    }
}
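Once the TaskRecordDao data-quality branch is removed, after() collapses to plain exit-code normalization. A minimal self-contained sketch of that reduced logic (stand-in class and constant values, not the real AbstractTask; 137 for KILL is illustrative):

```java
public class AfterSketch {
    static final int EXIT_CODE_SUCCESS = 0;  // stand-in for Constants.EXIT_CODE_SUCCESS
    static final int EXIT_CODE_KILL = 137;   // stand-in value, illustrative only
    static final int EXIT_CODE_FAILURE = -1; // stand-in for Constants.EXIT_CODE_FAILURE

    private int exitStatusCode;

    AfterSketch(int code) { this.exitStatusCode = code; }
    int getExitStatusCode() { return exitStatusCode; }
    void setExitStatusCode(int code) { this.exitStatusCode = code; }

    // everything that remains of after() once the data-quality branch is gone:
    // SUCCESS and KILL pass through, anything else becomes FAILURE
    void after() {
        if (getExitStatusCode() != EXIT_CODE_SUCCESS
                && getExitStatusCode() != EXIT_CODE_KILL) {
            setExitStatusCode(EXIT_CODE_FAILURE);
        }
    }

    public static void main(String[] args) {
        AfterSketch ok = new AfterSketch(EXIT_CODE_SUCCESS);
        ok.after();
        System.out.println(ok.getExitStatusCode()); // success is left untouched

        AfterSketch odd = new AfterSketch(3);
        odd.after();
        System.out.println(odd.getExitStatusCode()); // any other code becomes FAILURE
    }
}
```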
issue_url: https://github.com/apache/dolphinscheduler/issues/5487 | pull_url: https://github.com/apache/dolphinscheduler/pull/5492 | before_fix_sha: 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | after_fix_sha: bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | report_datetime: 2021-05-17T09:46:25Z | language: java | commit_datetime: 2021-05-18T09:00:03Z | updated_file: dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/TaskRecordController.java

/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.controller;
import static org.apache.dolphinscheduler.api.enums.Status.QUERY_TASK_RECORD_LIST_PAGING_ERROR;
import org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation;
import org.apache.dolphinscheduler.api.exceptions.ApiException;
import org.apache.dolphinscheduler.api.service.TaskRecordService;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.entity.User;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;
import springfox.documentation.annotations.ApiIgnore;
/**
* task record controller
*/
@ApiIgnore
@RestController
@RequestMapping("/projects/task-record")
public class TaskRecordController extends BaseController {
@Autowired
TaskRecordService taskRecordService;
/**
* query task record list page
*
* @param loginUser login user
* @param taskName task name
* @param state state
* @param sourceTable source table
* @param destTable destination table
* @param taskDate task date
* @param startTime start time
* @param endTime end time
* @param pageNo page number
* @param pageSize page size
* @return task record list
*/
@GetMapping("/list-paging")
@ResponseStatus(HttpStatus.OK)
@ApiException(QUERY_TASK_RECORD_LIST_PAGING_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryTaskRecordListPaging(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam(value = "taskName", required = false) String taskName,
@RequestParam(value = "state", required = false) String state,
@RequestParam(value = "sourceTable", required = false) String sourceTable,
@RequestParam(value = "destTable", required = false) String destTable,
@RequestParam(value = "taskDate", required = false) String taskDate,
@RequestParam(value = "startDate", required = false) String startTime,
@RequestParam(value = "endDate", required = false) String endTime,
@RequestParam("pageNo") Integer pageNo,
@RequestParam("pageSize") Integer pageSize
) {
Map<String, Object> result = taskRecordService.queryTaskRecordListPaging(false, taskName, startTime, taskDate, sourceTable, destTable, endTime, state, pageNo, pageSize);
return returnDataListPaging(result);
}
/**
* query history task record list paging
*
* @param loginUser login user
* @param taskName task name
* @param state state
* @param sourceTable source table
* @param destTable destination table
* @param taskDate task date
* @param startTime start time
* @param endTime end time
* @param pageNo page number
* @param pageSize page size
* @return history task record list
*/
@GetMapping("/history-list-paging")
@ResponseStatus(HttpStatus.OK)
@ApiException(QUERY_TASK_RECORD_LIST_PAGING_ERROR)
@AccessLogAnnotation(ignoreRequestArgs = "loginUser")
public Result queryHistoryTaskRecordListPaging(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser,
@RequestParam(value = "taskName", required = false) String taskName,
@RequestParam(value = "state", required = false) String state,
@RequestParam(value = "sourceTable", required = false) String sourceTable,
@RequestParam(value = "destTable", required = false) String destTable,
@RequestParam(value = "taskDate", required = false) String taskDate,
@RequestParam(value = "startDate", required = false) String startTime,
@RequestParam(value = "endDate", required = false) String endTime,
@RequestParam("pageNo") Integer pageNo,
@RequestParam("pageSize") Integer pageSize
) {
Map<String, Object> result = taskRecordService.queryTaskRecordListPaging(true, taskName, startTime, taskDate, sourceTable, destTable, endTime, state, pageNo, pageSize);
return returnDataListPaging(result);
}
}

updated_file: dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/TaskRecordService.java

/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service;
import java.util.Map;
/**
* task record service
*/
public interface TaskRecordService {
/**
* query task record list paging
*
* @param taskName task name
* @param state state
* @param sourceTable source table
* @param destTable destination table
* @param taskDate task date
* @param startDate start time
* @param endDate end time
* @param pageNo page number
* @param pageSize page size
* @param isHistory is history
* @return task record list
*/
Map<String,Object> queryTaskRecordListPaging(boolean isHistory, String taskName, String startDate,
String taskDate, String sourceTable,
String destTable, String endDate,
String state, Integer pageNo, Integer pageSize);
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | Dolphin scheduler 目前已经移除了数据质量检测,
可见在配置文件中也已经移除了对 相关数据质量涉及的db的
但是代码中依旧存在TaskRecordDao对数据质量的query,
并且SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
中涉及的eamp_hive_log_hd db明显已经不存在于配置的默认数据库中,
但是在重要的抽象类AbstractTask 中依旧存在对
TaskRecordDao的数据质量检测逻辑的判定,建议移除来保持对重要抽象类的纯净
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task recor flat : if true , start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
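The change the issue proposes strips the whole TaskRecordDao branch out of after(), leaving only the exit-code mapping. A minimal, self-contained sketch of that simplified logic (the class name and the concrete exit-code values here are illustrative stand-ins, not the project's actual AbstractTask or Constants):

```java
public class SimplifiedAfterSketch {
    // stand-ins for org.apache.dolphinscheduler.common.Constants exit codes
    static final int EXIT_CODE_SUCCESS = 0;
    static final int EXIT_CODE_FAILURE = -1;
    static final int EXIT_CODE_KILL = 137;

    private int exitStatusCode;

    SimplifiedAfterSketch(int exitStatusCode) {
        this.exitStatusCode = exitStatusCode;
    }

    int getExitStatusCode() {
        return exitStatusCode;
    }

    void setExitStatusCode(int code) {
        this.exitStatusCode = code;
    }

    // after() with the TaskRecordDao data-quality branch removed:
    // success stays success, kill stays kill, everything else maps to failure
    void after() {
        if (getExitStatusCode() == EXIT_CODE_SUCCESS) {
            // nothing left to do once the data-quality check is gone
        } else if (getExitStatusCode() == EXIT_CODE_KILL) {
            setExitStatusCode(EXIT_CODE_KILL);
        } else {
            setExitStatusCode(EXIT_CODE_FAILURE);
        }
    }
}
```

With the data-quality branch gone, after() no longer needs ParamUtils, TaskParametersUtils, or TaskRecordDao at all, which is exactly the cleanup the issue asks for.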
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskRecordServiceImpl.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service.impl; |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskRecordServiceImpl.java | import static org.apache.dolphinscheduler.common.Constants.TASK_RECORD_TABLE_HISTORY_HIVE_LOG;
import static org.apache.dolphinscheduler.common.Constants.TASK_RECORD_TABLE_HIVE_LOG;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.TaskRecordService;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.TaskRecordDao;
import org.apache.dolphinscheduler.dao.entity.TaskRecord;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.stereotype.Service;
/**
* task record service impl
*/
@Service
public class TaskRecordServiceImpl extends BaseServiceImpl implements TaskRecordService {
/**
* query task record list paging
*
* @param taskName task name
* @param state state
* @param sourceTable source table
* @param destTable destination table
* @param taskDate task date
* @param startDate start time
* @param endDate end time
* @param pageNo page number
* @param pageSize page size
* @param isHistory is history |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskRecordServiceImpl.java | * @return task record list
*/
@Override
public Map<String,Object> queryTaskRecordListPaging(boolean isHistory, String taskName, String startDate,
String taskDate, String sourceTable,
String destTable, String endDate,
String state, Integer pageNo, Integer pageSize) {
Map<String, Object> result = new HashMap<>();
PageInfo<TaskRecord> pageInfo = new PageInfo<>(pageNo, pageSize);
Map<String, String> map = new HashMap<>();
map.put("taskName", taskName);
map.put("taskDate", taskDate);
map.put("state", state);
map.put("sourceTable", sourceTable);
map.put("targetTable", destTable);
map.put("startTime", startDate);
map.put("endTime", endDate);
map.put("offset", pageInfo.getStart().toString());
map.put("pageSize", pageInfo.getPageSize().toString());
String table = isHistory ? TASK_RECORD_TABLE_HISTORY_HIVE_LOG : TASK_RECORD_TABLE_HIVE_LOG;
int count = TaskRecordDao.countTaskRecord(map, table);
List<TaskRecord> recordList = TaskRecordDao.queryAllTaskRecord(map, table);
pageInfo.setTotalCount(count);
pageInfo.setLists(recordList);
result.put(Constants.DATA_LIST, pageInfo);
putMsg(result, Status.SUCCESS);
return result;
}
} |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/TaskRecordControllerTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.controller;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.utils.Result; |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/TaskRecordControllerTest.java | import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.junit.Assert;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MvcResult;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
/**
* task record controller test
*/
public class TaskRecordControllerTest extends AbstractControllerTest {
private static final Logger logger = LoggerFactory.getLogger(TaskRecordControllerTest.class);
@Test
public void testQueryTaskRecordListPaging() throws Exception {
MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
paramsMap.add("taskName","taskName");
paramsMap.add("state","state");
paramsMap.add("sourceTable","");
paramsMap.add("destTable","");
paramsMap.add("taskDate","");
paramsMap.add("startDate","2019-12-16 00:00:00");
paramsMap.add("endDate","2019-12-17 00:00:00");
paramsMap.add("pageNo","1");
paramsMap.add("pageSize","30");
MvcResult mvcResult = mockMvc.perform(get("/projects/task-record/list-paging")
.header(SESSION_ID, sessionId)
.params(paramsMap))
.andExpect(status().isOk()) |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/controller/TaskRecordControllerTest.java | .andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(),result.getCode().intValue());
logger.info(mvcResult.getResponse().getContentAsString());
}
@Test
public void testQueryHistoryTaskRecordListPaging() throws Exception {
MultiValueMap<String, String> paramsMap = new LinkedMultiValueMap<>();
paramsMap.add("taskName","taskName");
paramsMap.add("state","state");
paramsMap.add("sourceTable","");
paramsMap.add("destTable","");
paramsMap.add("taskDate","");
paramsMap.add("startDate","2019-12-16 00:00:00");
paramsMap.add("endDate","2019-12-17 00:00:00");
paramsMap.add("pageNo","1");
paramsMap.add("pageSize","30");
MvcResult mvcResult = mockMvc.perform(get("/projects/task-record/history-list-paging")
.header(SESSION_ID, sessionId)
.params(paramsMap))
.andExpect(status().isOk())
.andExpect(content().contentType(MediaType.APPLICATION_JSON_UTF8))
.andReturn();
Result result = JSONUtils.parseObject(mvcResult.getResponse().getContentAsString(), Result.class);
Assert.assertEquals(Status.SUCCESS.getCode(),result.getCode().intValue());
logger.info(mvcResult.getResponse().getContentAsString());
}
} |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.common.utils.StringUtils;
import java.util.regex.Pattern;
/**
* Constants
*/
public final class Constants { |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | private Constants() {
throw new UnsupportedOperationException("Construct Constants");
}
/**
* quartz config
*/
public static final String ORG_QUARTZ_JOBSTORE_DRIVERDELEGATECLASS = "org.quartz.jobStore.driverDelegateClass";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCENAME = "org.quartz.scheduler.instanceName";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCEID = "org.quartz.scheduler.instanceId"; |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String ORG_QUARTZ_SCHEDULER_MAKESCHEDULERTHREADDAEMON = "org.quartz.scheduler.makeSchedulerThreadDaemon";
public static final String ORG_QUARTZ_JOBSTORE_USEPROPERTIES = "org.quartz.jobStore.useProperties";
public static final String ORG_QUARTZ_THREADPOOL_CLASS = "org.quartz.threadPool.class";
public static final String ORG_QUARTZ_THREADPOOL_THREADCOUNT = "org.quartz.threadPool.threadCount";
public static final String ORG_QUARTZ_THREADPOOL_MAKETHREADSDAEMONS = "org.quartz.threadPool.makeThreadsDaemons";
public static final String ORG_QUARTZ_THREADPOOL_THREADPRIORITY = "org.quartz.threadPool.threadPriority";
public static final String ORG_QUARTZ_JOBSTORE_CLASS = "org.quartz.jobStore.class";
public static final String ORG_QUARTZ_JOBSTORE_TABLEPREFIX = "org.quartz.jobStore.tablePrefix";
public static final String ORG_QUARTZ_JOBSTORE_ISCLUSTERED = "org.quartz.jobStore.isClustered";
public static final String ORG_QUARTZ_JOBSTORE_MISFIRETHRESHOLD = "org.quartz.jobStore.misfireThreshold";
public static final String ORG_QUARTZ_JOBSTORE_CLUSTERCHECKININTERVAL = "org.quartz.jobStore.clusterCheckinInterval";
public static final String ORG_QUARTZ_JOBSTORE_ACQUIRETRIGGERSWITHINLOCK = "org.quartz.jobStore.acquireTriggersWithinLock";
public static final String ORG_QUARTZ_JOBSTORE_DATASOURCE = "org.quartz.jobStore.dataSource";
public static final String ORG_QUARTZ_DATASOURCE_MYDS_CONNECTIONPROVIDER_CLASS = "org.quartz.dataSource.myDs.connectionProvider.class";
/**
* quartz config default value
*/
public static final String QUARTZ_TABLE_PREFIX = "QRTZ_";
public static final String QUARTZ_MISFIRETHRESHOLD = "60000";
public static final String QUARTZ_CLUSTERCHECKININTERVAL = "5000";
public static final String QUARTZ_DATASOURCE = "myDs";
public static final String QUARTZ_THREADCOUNT = "25";
public static final String QUARTZ_THREADPRIORITY = "5";
public static final String QUARTZ_INSTANCENAME = "DolphinScheduler";
public static final String QUARTZ_INSTANCEID = "AUTO";
public static final String QUARTZ_ACQUIRETRIGGERSWITHINLOCK = "true";
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties"; |
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /**
* fs.defaultFS
*/
public static final String FS_DEFAULTFS = "fs.defaultFS";
/**
* fs s3a endpoint
*/
public static final String FS_S3A_ENDPOINT = "fs.s3a.endpoint";
/**
* fs s3a access key
*/
public static final String FS_S3A_ACCESS_KEY = "fs.s3a.access.key";
/**
* fs s3a secret key
*/
public static final String FS_S3A_SECRET_KEY = "fs.s3a.secret.key";
/**
* yarn.resourcemanager.ha.rm.ids
*/
public static final String YARN_RESOURCEMANAGER_HA_RM_IDS = "yarn.resourcemanager.ha.rm.ids";
public static final String YARN_RESOURCEMANAGER_HA_XX = "xx";
/**
* yarn.application.status.address
*/
public static final String YARN_APPLICATION_STATUS_ADDRESS = "yarn.application.status.address";
/**
* yarn.job.history.status.address
*/
public static final String YARN_JOB_HISTORY_STATUS_ADDRESS = "yarn.job.history.status.address";
/**
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check;
as can be seen, the configuration files no longer contain the settings for the database involved in data quality.
However, the code still contains TaskRecordDao queries for data quality,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the key abstract class AbstractTask still carries the TaskRecordDao data-quality-check logic;
it is recommended to remove it to keep this important abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task recor flat : if true , start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
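The removal the issue proposes can be sketched as follows — a minimal, self-contained illustration of what after() reduces to once the TaskRecordDao branch is gone. The class name and the concrete exit-code values here are assumptions for this sketch, not the exact code merged in PR #5492:

```java
public class AfterSketch {
    // Exit codes mirroring Constants.EXIT_CODE_* in the snippet above
    // (the concrete values are assumptions for this sketch).
    static final int EXIT_CODE_SUCCESS = 0;
    static final int EXIT_CODE_KILL = 137;
    static final int EXIT_CODE_FAILURE = -1;

    private int exitStatusCode;

    public AfterSketch(int rawExitCode) {
        this.exitStatusCode = rawExitCode;
    }

    // after() without the data-quality branch: success and kill pass
    // through unchanged, everything else is normalized to failure.
    public void after() {
        if (exitStatusCode != EXIT_CODE_SUCCESS && exitStatusCode != EXIT_CODE_KILL) {
            exitStatusCode = EXIT_CODE_FAILURE;
        }
    }

    public int getExitStatusCode() {
        return exitStatusCode;
    }

    public static void main(String[] args) {
        AfterSketch odd = new AfterSketch(3); // arbitrary non-zero exit code
        odd.after();
        System.out.println(odd.getExitStatusCode()); // prints -1
    }
}
```

With the TaskRecordDao lookup gone, the method no longer needs the parameter-map conversion or the v_proc_date handling at all.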
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | * hdfs configuration
* hdfs.root.user
*/
public static final String HDFS_ROOT_USER = "hdfs.root.user";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
/**
* data basedir path
*/
public static final String DATA_BASEDIR_PATH = "data.basedir.path";
/**
* dolphinscheduler.env.path
*/
public static final String DOLPHINSCHEDULER_ENV_PATH = "dolphinscheduler.env.path";
/**
* environment properties default path
*/
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* python home
*/
public static final String PYTHON_HOME = "PYTHON_HOME";
/**
* resource.view.suffixs
*/
public static final String RESOURCE_VIEW_SUFFIXS = "resource.view.suffixs";
public static final String RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE = "txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js";
/**
* development.state
*/
public static final String DEVELOPMENT_STATE = "development.state";
public static final String DEVELOPMENT_STATE_DEFAULT_VALUE = "true";
/**
* sudo enable
*/
public static final String SUDO_ENABLE = "sudo.enable";
/**
* string true
*/
public static final String STRING_TRUE = "true";
/**
* string false
*/
public static final String STRING_FALSE = "false";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* MasterServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_MASTERS = "/nodes/master";
/**
* WorkerServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_WORKERS = "/nodes/worker";
/**
* all servers directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_DEAD_SERVERS = "/dead-servers";
/**
* MasterServer lock directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_MASTERS = "/lock/masters";
/**
* MasterServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS = "/lock/failover/masters";
/**
* WorkerServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS = "/lock/failover/workers";
/**
* MasterServer startup failover runing and fault tolerance process
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS = "/lock/failover/startup-masters";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* slash /
*/
public static final String SLASH = "/";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SPACE " "
*/
public static final String SPACE = " ";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* SINGLE_QUOTES "'"
*/
public static final String SINGLE_QUOTES = "'";
/**
* DOUBLE_QUOTES "\""
*/
public static final String DOUBLE_QUOTES = "\"";
/**
* SEMICOLON ;
*/
public static final String SEMICOLON = ";";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "=";
/**
* AT SIGN
*/
public static final String AT_SIGN = "@";
public static final String WORKER_MAX_CPULOAD_AVG = "worker.max.cpuload.avg";
public static final String WORKER_RESERVED_MEMORY = "worker.reserved.memory";
public static final String MASTER_MAX_CPULOAD_AVG = "master.max.cpuload.avg";
public static final String MASTER_RESERVED_MEMORY = "master.reserved.memory";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* date format of yyyyMMddHHmmssSSS
*/
public static final String YYYYMMDDHHMMSSSSS = "yyyyMMddHHmmssSSS";
/**
* http connect time out
*/
public static final int HTTP_CONNECT_TIMEOUT = 60 * 1000;
/**
* http connect request time out
*/
public static final int HTTP_CONNECTION_REQUEST_TIMEOUT = 60 * 1000;
/**
* httpclient soceket time out
*/
public static final int SOCKET_TIMEOUT = 60 * 1000;
/**
* http header
*/
public static final String HTTP_HEADER_UNKNOWN = "unKnown";
/**
* http X-Forwarded-For
*/
public static final String HTTP_X_FORWARDED_FOR = "X-Forwarded-For";
/**
* http X-Real-IP
*/
public static final String HTTP_X_REAL_IP = "X-Real-IP";
/**
* UTF-8
*/
public static final String UTF_8 = "UTF-8";
/**
* user name regex
*/
public static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
/**
* default display rows
*/
public static final int DEFAULT_DISPLAY_ROWS = 10;
/**
* read permission
*/
public static final int READ_PERMISSION = 2 * 1;
/**
* write permission
*/
public static final int WRITE_PERMISSION = 2 * 2;
/**
* execute permission
*/
public static final int EXECUTE_PERMISSION = 1;
/**
* default admin permission
*/
public static final int DEFAULT_ADMIN_PERMISSION = 7;
/**
* all permissions
*/
public static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;
/**
* max task timeout
*/
public static final int MAX_TASK_TIMEOUT = 24 * 3600;
/**
* master cpu load
*/
public static final int DEFAULT_MASTER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
/**
* worker cpu load
*/
public static final int DEFAULT_WORKER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
/**
* worker host weight
*/
public static final int DEFAULT_WORKER_HOST_WEIGHT = 100;
/**
* default log cache rows num,output when reach the number
*/
public static final int DEFAULT_LOG_ROWS_NUM = 4 * 16;
/**
* log flush interval?output when reach the interval
*/
public static final int DEFAULT_LOG_FLUSH_INTERVAL = 1000;
/**
* time unit secong to minutes
*/
public static final int SEC_2_MINUTES_TIME_UNIT = 60;
/***
*
* rpc port
*/
public static final int RPC_PORT = 50051;
/***
* alert rpc port
*/
public static final int ALERT_RPC_PORT = 50052;
/**
* forbid running task
*/
public static final String FLOWNODE_RUN_FLAG_FORBIDDEN = "FORBIDDEN";
/**
* normal running task
*/
public static final String FLOWNODE_RUN_FLAG_NORMAL = "NORMAL";
/**
* datasource configuration path
*/
public static final String DATASOURCE_PROPERTIES = "/datasource.properties";
public static final String TASK_RECORD_URL = "task.record.datasource.url";
public static final String TASK_RECORD_FLAG = "task.record.flag";
public static final String TASK_RECORD_USER = "task.record.datasource.username";
public static final String TASK_RECORD_PWD = "task.record.datasource.password";
public static final String DEFAULT = "Default";
public static final String USER = "user";
public static final String PASSWORD = "password";
public static final String XXXXXX = "******";
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
public static final String TASK_RECORD_TABLE_HIVE_LOG = "eamp_hive_log_hd";
public static final String TASK_RECORD_TABLE_HISTORY_HIVE_LOG = "eamp_hive_hist_log_hd";
/**
* command parameter keys
*/
public static final String CMD_PARAM_RECOVER_PROCESS_ID_STRING = "ProcessInstanceId";
public static final String CMD_PARAM_RECOVERY_START_NODE_STRING = "StartNodeIdList";
public static final String CMD_PARAM_RECOVERY_WAITING_THREAD = "WaitingThreadInstanceId";
public static final String CMD_PARAM_SUB_PROCESS = "processInstanceId";
public static final String CMD_PARAM_EMPTY_SUB_PROCESS = "0";
public static final String CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID = "parentProcessInstanceId";
public static final String CMD_PARAM_SUB_PROCESS_DEFINE_ID = "processDefinitionId";
public static final String CMD_PARAM_START_NODE_NAMES = "StartNodeNameList";
public static final String CMD_PARAM_START_PARAMS = "StartParams";
public static final String CMD_PARAM_FATHER_PARAMS = "fatherParams";
/**
* complement data start date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_START_DATE = "complementStartDate";
/**
* complement data end date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_END_DATE = "complementEndDate";
/**
* hadoop configuration
*/
public static final String HADOOP_RM_STATE_ACTIVE = "ACTIVE";
public static final String HADOOP_RM_STATE_STANDBY = "STANDBY";
public static final String HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT = "resource.manager.httpaddress.port";
/**
* data source config
*/
public static final String SPRING_DATASOURCE_DRIVER_CLASS_NAME = "spring.datasource.driver-class-name";
public static final String SPRING_DATASOURCE_URL = "spring.datasource.url";
public static final String SPRING_DATASOURCE_USERNAME = "spring.datasource.username";
public static final String SPRING_DATASOURCE_PASSWORD = "spring.datasource.password";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY_TIMEOUT = "spring.datasource.validationQueryTimeout";
public static final String SPRING_DATASOURCE_INITIAL_SIZE = "spring.datasource.initialSize";
public static final String SPRING_DATASOURCE_MIN_IDLE = "spring.datasource.minIdle";
public static final String SPRING_DATASOURCE_MAX_ACTIVE = "spring.datasource.maxActive";
public static final String SPRING_DATASOURCE_MAX_WAIT = "spring.datasource.maxWait"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | Dolphin scheduler 目前已经移除了数据质量检测,
可见在配置文件中也已经移除了对 相关数据质量涉及的db的
但是代码中依旧存在TaskRecordDao对数据质量的query,
并且SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
中涉及的eamp_hive_log_hd db明显已经不存在于配置的默认数据库中,
但是在重要的抽象类AbstractTask 中依旧存在对
TaskRecordDao的数据质量检测逻辑的判定,建议移除来保持对重要抽象类的纯净
public void after() {
    if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
        // task record flag : if true , start up qianfan
        if (TaskRecordDao.getTaskRecordFlag()
                && TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
            AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
            // replace placeholder
            Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
                    taskExecutionContext.getDefinedParams(),
                    params.getLocalParametersMap(),
                    CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
                    taskExecutionContext.getScheduleTime());
            if (paramsMap != null && !paramsMap.isEmpty()
                    && paramsMap.containsKey("v_proc_date")) {
                String vProcDate = paramsMap.get("v_proc_date").getValue();
                if (!StringUtils.isEmpty(vProcDate)) {
                    TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
                    logger.info("task record status : {}", taskRecordState);
                    if (taskRecordState == TaskRecordStatus.FAILURE) {
                        setExitStatusCode(Constants.EXIT_CODE_FAILURE);
                    }
                }
            }
        }
    } else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
        setExitStatusCode(Constants.EXIT_CODE_KILL);
    } else {
        setExitStatusCode(Constants.EXIT_CODE_FAILURE);
    }
}
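Once the TaskRecordDao branch above is deleted, `after()` reduces to plain exit-status normalization: success and kill pass through, everything else collapses to failure. A minimal, self-contained sketch of that simplified logic — the class and field names here are illustrative stand-ins, not the actual DolphinScheduler types:

```java
// Sketch of the simplified after() the issue proposes, with the
// data-quality (TaskRecordDao) branch removed. Hypothetical class;
// only the exit-code constants mirror Constants.java.
public class SimplifiedAfterSketch {
    public static final int EXIT_CODE_SUCCESS = 0;
    public static final int EXIT_CODE_KILL = 137;
    public static final int EXIT_CODE_FAILURE = -1;

    private int exitStatusCode;

    public SimplifiedAfterSketch(int exitStatusCode) {
        this.exitStatusCode = exitStatusCode;
    }

    // Success and kill are preserved; any other exit code is
    // normalized to failure.
    public void after() {
        if (exitStatusCode != EXIT_CODE_SUCCESS && exitStatusCode != EXIT_CODE_KILL) {
            exitStatusCode = EXIT_CODE_FAILURE;
        }
    }

    public int getExitStatusCode() {
        return exitStatusCode;
    }
}
```

The kill branch must stay distinct: collapsing 137 into -1 would hide the difference between a killed task and an ordinarily failed one for downstream state handling.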
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java |
public static final String SPRING_DATASOURCE_TIME_BETWEEN_EVICTION_RUNS_MILLIS = "spring.datasource.timeBetweenEvictionRunsMillis";
public static final String SPRING_DATASOURCE_TIME_BETWEEN_CONNECT_ERROR_MILLIS = "spring.datasource.timeBetweenConnectErrorMillis";
public static final String SPRING_DATASOURCE_MIN_EVICTABLE_IDLE_TIME_MILLIS = "spring.datasource.minEvictableIdleTimeMillis";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY = "spring.datasource.validationQuery";
public static final String SPRING_DATASOURCE_TEST_WHILE_IDLE = "spring.datasource.testWhileIdle";
public static final String SPRING_DATASOURCE_TEST_ON_BORROW = "spring.datasource.testOnBorrow";
public static final String SPRING_DATASOURCE_TEST_ON_RETURN = "spring.datasource.testOnReturn";
public static final String SPRING_DATASOURCE_POOL_PREPARED_STATEMENTS = "spring.datasource.poolPreparedStatements";
public static final String SPRING_DATASOURCE_DEFAULT_AUTO_COMMIT = "spring.datasource.defaultAutoCommit";
public static final String SPRING_DATASOURCE_KEEP_ALIVE = "spring.datasource.keepAlive";
public static final String SPRING_DATASOURCE_MAX_POOL_PREPARED_STATEMENT_PER_CONNECTION_SIZE = "spring.datasource.maxPoolPreparedStatementPerConnectionSize";
public static final String DEVELOPMENT = "development";
public static final String QUARTZ_PROPERTIES_PATH = "quartz.properties";
/**
* sleep time
*/
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* heartbeat for zk info length
*/
public static final int HEARTBEAT_FOR_ZOOKEEPER_INFO_LENGTH = 10;
public static final int HEARTBEAT_WITH_WEIGHT_FOR_ZOOKEEPER_INFO_LENGTH = 11;
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D <property>=<value>
*/
public static final String D = "-D";
/**
* -D mapreduce.job.name=name
*/
public static final String MR_NAME = "mapreduce.job.name";
/**
* -D mapreduce.job.queuename=queuename
*/
public static final String MR_QUEUE = "mapreduce.job.queuename";
/**
* spark params constant
*/
public static final String MASTER = "--master";
public static final String DEPLOY_MODE = "--deploy-mode";
/**
* --class CLASS_NAME
*/
public static final String MAIN_CLASS = "--class";
/**
* --driver-cores NUM
*/
public static final String DRIVER_CORES = "--driver-cores";
/**
* --driver-memory MEM
*/
public static final String DRIVER_MEMORY = "--driver-memory";
/**
* --num-executors NUM
*/
public static final String NUM_EXECUTORS = "--num-executors";
/**
* --executor-cores NUM
*/
public static final String EXECUTOR_CORES = "--executor-cores";
/**
* --executor-memory MEM
*/
public static final String EXECUTOR_MEMORY = "--executor-memory";
/**
* --name NAME
*/
public static final String SPARK_NAME = "--name";
/**
* --queue QUEUE
*/
public static final String SPARK_QUEUE = "--queue";
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
/**
* exit code kill
*/
public static final int EXIT_CODE_KILL = 137;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* process or task definition failure
*/
public static final int DEFINITION_FAILURE = -1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/
public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prifix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prifix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* processId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
public static final String PID = OSUtils.isWindows() ? "handle" : "pid";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String LOCAL_PARAMS_LIST = "localParamsList";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String DEPENDENCE = "dependence";
public static final String TASK_TYPE = "taskType";
public static final String TASK_LIST = "taskList";
public static final String RWXR_XR_X = "rwxr-xr-x";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/**
* master/worker server use for zk
*/
public static final String MASTER_TYPE = "master";
public static final String WORKER_TYPE = "worker";
public static final String DELETE_ZK_OP = "delete";
public static final String ADD_ZK_OP = "add";
public static final String ALIAS = "alias";
public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final String DEPENDENT_ALL = "ALL";
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* com.amazonaws.services.s3.enableV4
*/
public static final String AWS_S3_V4 = "com.amazonaws.services.s3.enableV4";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* default worker group id
*/
public static final int DEFAULT_WORKER_ID = -1;
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* hive conf
*/
public static final String HIVE_CONF = "hiveconf:";
/**
* flink
*/
public static final String FLINK_YARN_CLUSTER = "yarn-cluster";
public static final String FLINK_RUN_MODE = "-m";
public static final String FLINK_YARN_SLOT = "-ys";
public static final String FLINK_APP_NAME = "-ynm";
public static final String FLINK_QUEUE = "-yqu";
public static final String FLINK_TASK_MANAGE = "-yn";
public static final String FLINK_JOB_MANAGE_MEM = "-yjm";
public static final String FLINK_TASK_MANAGE_MEM = "-ytm";
public static final String FLINK_MAIN_CLASS = "-c";
public static final String FLINK_PARALLELISM = "-p";
public static final String FLINK_SHUTDOWN_ON_ATTACHED_EXIT = "-sae";
public static final int[] NOT_TERMINATED_STATES = new int[] {
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITTING_THREAD.ordinal(),
ExecutionStatus.WAITTING_DEPEND.ordinal()
};
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
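Once the TaskRecordDao branch is dropped as the issue proposes, after() reduces to mapping the process exit code onto a final task status. A minimal, self-contained sketch of that mapping — the EXIT_CODE_* values and the mapExitStatus helper below are illustrative assumptions for this sketch, not the actual DolphinScheduler constants or API:

```java
public class AfterSketch {
    // Illustrative values only; the real constants live in
    // org.apache.dolphinscheduler.common.Constants.
    static final int EXIT_CODE_SUCCESS = 0;
    static final int EXIT_CODE_KILL = 137;
    static final int EXIT_CODE_FAILURE = -1;

    // SUCCESS and KILL pass through; any other exit code collapses to FAILURE.
    static int mapExitStatus(int exitStatusCode) {
        if (exitStatusCode == EXIT_CODE_SUCCESS) {
            return EXIT_CODE_SUCCESS;
        }
        if (exitStatusCode == EXIT_CODE_KILL) {
            return EXIT_CODE_KILL;
        }
        return EXIT_CODE_FAILURE;
    }

    public static void main(String[] args) {
        System.out.println(mapExitStatus(0));   // 0
        System.out.println(mapExitStatus(137)); // 137
        System.out.println(mapExitStatus(2));   // -1
    }
}
```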
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/**
*
*/
public static final String DATA_LIST = "data";
public static final String TOTAL_LIST = "totalList";
public static final String CURRENT_PAGE = "currentPage";
public static final String TOTAL_PAGE = "totalPage";
public static final String TOTAL = "total";
/**
* workflow
*/
public static final String WORKFLOW_LIST = "workFlowList";
public static final String WORKFLOW_RELATION_LIST = "workFlowRelationList";
/**
* session user
*/
public static final String SESSION_USER = "session.user";
public static final String SESSION_ID = "sessionId";
public static final String PASSWORD_DEFAULT = "******";
/**
* locale
*/
public static final String LOCALE_LANGUAGE = "language";
/**
* driver |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | */
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* database type
*/
public static final String MYSQL = "MYSQL";
public static final String POSTGRESQL = "POSTGRESQL";
public static final String HIVE = "HIVE";
public static final String SPARK = "SPARK";
public static final String CLICKHOUSE = "CLICKHOUSE";
public static final String ORACLE = "ORACLE";
public static final String SQLSERVER = "SQLSERVER";
public static final String DB2 = "DB2";
public static final String PRESTO = "PRESTO";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String JDBC_URL = "jdbcUrl";
public static final String PRINCIPAL = "principal";
public static final String OTHER = "other";
public static final String ORACLE_DB_CONNECT_TYPE = "connectType";
public static final String KERBEROS_KRB5_CONF_PATH = "javaSecurityKrb5Conf";
public static final String KERBEROS_KEY_TAB_USERNAME = "loginUserKeytabUsername";
public static final String KERBEROS_KEY_TAB_PATH = "loginUserKeytabPath";
/**
* session timeout
*/
public static final int SESSION_TIME_OUT = 7200;
public static final int MAX_FILE_SIZE = 1024 * 1024 * 1024;
public static final String UDF = "UDF";
public static final String CLASS = "class";
public static final String RECEIVERS = "receivers";
public static final String RECEIVERS_CC = "receiversCc";
/**
* dataSource sensitive param
*/
public static final String DATASOURCE_PASSWORD_REGEX = "(?<=(\"password\":\")).*?(?=(\"))";
/**
* default worker group
*/
public static final String DEFAULT_WORKER_GROUP = "default";
public static final Integer TASK_INFO_LENGTH = 5; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /**
* new
* schedule time
*/
public static final String PARAMETER_SHECDULE_TIME = "schedule.time";
/**
* authorize writable perm
*/
public static final int AUTHORIZE_WRITABLE_PERM = 7;
/**
* authorize readable perm
*/
public static final int AUTHORIZE_READABLE_PERM = 4;
/**
* plugin configurations
*/
public static final String PLUGIN_JAR_SUFFIX = ".jar";
public static final int NORMAL_NODE_STATUS = 0;
public static final int ABNORMAL_NODE_STATUS = 1;
public static final String START_TIME = "start time";
public static final String END_TIME = "end time";
public static final String START_END_DATE = "startDate,endDate";
/**
* system line separator
*/
public static final String SYSTEM_LINE_SEPARATOR = System.getProperty("line.separator");
/**
* net system properties
*/
public static final String DOLPHIN_SCHEDULER_PREFERRED_NETWORK_INTERFACE = "dolphin.scheduler.network.interface.preferred"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | public static final String EXCEL_SUFFIX_XLS = ".xls";
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* Network IP gets priority, default inner outer
*/
public static final String NETWORK_PRIORITY_STRATEGY = "dolphin.scheduler.network.priority.strategy";
/**
* exec shell scripts
*/
public static final String SH = "sh";
/**
* pstree, get pid and sub pids
*/
public static final String PSTREE = "pstree";
/**
* snow flake, data center id, this id must be greater than 0 and less than 32
*/
public static final String SNOW_FLAKE_DATA_CENTER_ID = "data.center.id";
/**
* docker & kubernetes
*/
public static final boolean DOCKER_MODE = StringUtils.isNotEmpty(System.getenv("DOCKER"));
public static final boolean KUBERNETES_MODE = StringUtils.isNotEmpty(System.getenv("KUBERNETES_SERVICE_HOST")) && StringUtils.isNotEmpty(System.getenv("KUBERNETES_SERVICE_PORT"));
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/TaskRecordDao.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.dao;
import static org.apache.dolphinscheduler.common.Constants.DATASOURCE_PROPERTIES;
import org.apache.dolphinscheduler.common.Constants; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/TaskRecordDao.java | import org.apache.dolphinscheduler.common.enums.TaskRecordStatus;
import org.apache.dolphinscheduler.common.utils.CollectionUtils;
import org.apache.dolphinscheduler.common.utils.ConnectionUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.PropertyUtils;
import org.apache.dolphinscheduler.common.utils.StringUtils;
import org.apache.dolphinscheduler.dao.entity.TaskRecord;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* task record dao
*/
public class TaskRecordDao {
private static Logger logger = LoggerFactory.getLogger(TaskRecordDao.class.getName());
static {
PropertyUtils.loadPropertyFile(DATASOURCE_PROPERTIES);
}
/**
* get task record flag
*
* @return whether startup taskrecord
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/TaskRecordDao.java | public static boolean getTaskRecordFlag() {
return PropertyUtils.getBoolean(Constants.TASK_RECORD_FLAG, false);
}
/**
* create connection
*
* @return connection
*/
private static Connection getConn() {
if (!getTaskRecordFlag()) {
return null;
}
String driver = "com.mysql.jdbc.Driver";
String url = PropertyUtils.getString(Constants.TASK_RECORD_URL);
String username = PropertyUtils.getString(Constants.TASK_RECORD_USER);
String password = PropertyUtils.getString(Constants.TASK_RECORD_PWD);
Connection conn = null;
try {
Class.forName(driver);
conn = DriverManager.getConnection(url, username, password);
} catch (ClassNotFoundException e) {
logger.error("Class not found Exception ", e);
} catch (SQLException e) {
logger.error("SQL Exception ", e);
}
return conn;
}
/**
* generate where sql string |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/TaskRecordDao.java | *
* @param filterMap filterMap
* @return sql string
*/
private static String getWhereString(Map<String, String> filterMap) {
if (filterMap.size() == 0) {
return "";
}
String result = " where 1=1 ";
Object taskName = filterMap.get("taskName");
if (taskName != null && StringUtils.isNotEmpty(taskName.toString())) {
result += " and PROC_NAME like concat('%', '" + taskName.toString() + "', '%') ";
}
Object taskDate = filterMap.get("taskDate");
if (taskDate != null && StringUtils.isNotEmpty(taskDate.toString())) {
result += " and PROC_DATE='" + taskDate.toString() + "'";
}
Object state = filterMap.get("state");
if (state != null && StringUtils.isNotEmpty(state.toString())) {
result += " and NOTE='" + state.toString() + "'";
}
Object sourceTable = filterMap.get("sourceTable");
if (sourceTable != null && StringUtils.isNotEmpty(sourceTable.toString())) {
result += " and SOURCE_TAB like concat('%', '" + sourceTable.toString() + "', '%')";
}
Object targetTable = filterMap.get("targetTable");
if (targetTable != null && StringUtils.isNotEmpty(targetTable.toString())) {
result += " and TARGET_TAB like concat('%', '" + targetTable.toString() + "', '%') ";
}
Object start = filterMap.get("startTime"); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data quality check feature,
and the configuration for the data-quality-related database has likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries against the data quality tables,
and the eamp_hive_log_hd table referenced in "SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still contains the
TaskRecordDao-based data quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
// task record flag: if true, start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
| https://github.com/apache/dolphinscheduler/issues/5487 | https://github.com/apache/dolphinscheduler/pull/5492 | 018f5c89f6ee1dbb8259a6036c4beb1874cd3f5c | bc22ae7c91c9cbd7c971796ba3a45358c2f11864 | "2021-05-17T09:46:25Z" | java | "2021-05-18T09:00:03Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/TaskRecordDao.java | if (start != null && StringUtils.isNotEmpty(start.toString())) {
result += " and STARTDATE>='" + start.toString() + "'";
}
Object end = filterMap.get("endTime");
if (end != null && StringUtils.isNotEmpty(end.toString())) {
result += " and ENDDATE<='" + end.toString() + "'";
}
return result;
}
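getWhereString() above splices filter values straight into the SQL string, which is both quoting-fragile and injection-prone. A hedged sketch of a parameterized alternative — buildWhere and everything around it are hypothetical illustrations, not part of TaskRecordDao:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WhereBuilder {
    // Build a WHERE clause with '?' placeholders and collect the bind values,
    // so the caller can pass them to PreparedStatement.setString(...) safely.
    static String buildWhere(Map<String, String> filters, List<String> params) {
        StringBuilder sql = new StringBuilder(" where 1=1");
        String taskName = filters.get("taskName");
        if (taskName != null && !taskName.isEmpty()) {
            sql.append(" and PROC_NAME like concat('%', ?, '%')");
            params.add(taskName);
        }
        String taskDate = filters.get("taskDate");
        if (taskDate != null && !taskDate.isEmpty()) {
            sql.append(" and PROC_DATE = ?");
            params.add(taskDate);
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        Map<String, String> filters = new LinkedHashMap<>();
        filters.put("taskName", "etl_job");
        List<String> params = new ArrayList<>();
        System.out.println(buildWhere(filters, params)); // clause with a '?' placeholder
        System.out.println(params);                      // [etl_job]
    }
}
```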
/**
* count task record
*
* @param filterMap filterMap
* @param table table
* @return task record count
*/
public static int countTaskRecord(Map<String, String> filterMap, String table) {
int count = 0;
Connection conn = null;
PreparedStatement pstmt = null;
ResultSet rs = null;
try {
conn = getConn();
if (conn == null) {
return count;
}
String sql = String.format("select count(1) as count from %s", table);
sql += getWhereString(filterMap);
pstmt = conn.prepareStatement(sql);
rs = pstmt.executeQuery(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,487 | [Improvement][Task] Remove TaskRecordDao And simply the after() in the AbstractTask class | DolphinScheduler has already removed the data-quality check feature,
and the database settings related to data quality have likewise been removed from the configuration files.
However, the code still contains TaskRecordDao queries for data quality,
and the eamp_hive_log_hd table referenced in
"SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
clearly no longer exists in the configured default database.
Yet the important abstract class AbstractTask still evaluates TaskRecordDao's
data-quality check logic; it should be removed to keep this key abstract class clean.
public void after() {
if (getExitStatusCode() == Constants.EXIT_CODE_SUCCESS) {
        // task record flag : if true , start up qianfan
if (TaskRecordDao.getTaskRecordFlag()
&& TaskType.typeIsNormalTask(taskExecutionContext.getTaskType())) {
AbstractParameters params = TaskParametersUtils.getParameters(taskExecutionContext.getTaskType(), taskExecutionContext.getTaskParams());
// replace placeholder
Map<String, Property> paramsMap = ParamUtils.convert(ParamUtils.getUserDefParamsMap(taskExecutionContext.getDefinedParams()),
taskExecutionContext.getDefinedParams(),
params.getLocalParametersMap(),
CommandType.of(taskExecutionContext.getCmdTypeIfComplement()),
taskExecutionContext.getScheduleTime());
if (paramsMap != null && !paramsMap.isEmpty()
&& paramsMap.containsKey("v_proc_date")) {
String vProcDate = paramsMap.get("v_proc_date").getValue();
if (!StringUtils.isEmpty(vProcDate)) {
TaskRecordStatus taskRecordState = TaskRecordDao.getTaskRecordState(taskExecutionContext.getTaskName(), vProcDate);
logger.info("task record status : {}", taskRecordState);
if (taskRecordState == TaskRecordStatus.FAILURE) {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
}
}
} else if (getExitStatusCode() == Constants.EXIT_CODE_KILL) {
setExitStatusCode(Constants.EXIT_CODE_KILL);
} else {
setExitStatusCode(Constants.EXIT_CODE_FAILURE);
}
}
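The simplification the issue asks for boils down to dropping the entire TaskRecordDao branch from `after()`. The following is a hedged sketch, not the actual change merged in the linked pull request: the class name, the method name `normalizeExitCode`, and the constant values are assumptions made for illustration. What remains of `after()` without the data-quality check is just exit-code normalization:

```java
// Sketch only: illustrates what after() reduces to once the data-quality branch
// is removed. Constant values are illustrative; the real ones live in
// org.apache.dolphinscheduler.common.Constants.
class SimplifiedAfterSketch {
    static final int EXIT_CODE_SUCCESS = 0;
    static final int EXIT_CODE_KILL = 137;
    static final int EXIT_CODE_FAILURE = -1;

    // SUCCESS and KILL pass through unchanged; any other code becomes FAILURE.
    static int normalizeExitCode(int exitStatusCode) {
        if (exitStatusCode == EXIT_CODE_SUCCESS || exitStatusCode == EXIT_CODE_KILL) {
            return exitStatusCode;
        }
        return EXIT_CODE_FAILURE;
    }
}
```

With the data-quality check gone, no database access, parameter conversion, or `v_proc_date` handling remains in the task lifecycle hook.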
            while (rs.next()) {
count = rs.getInt("count");
break;
}
} catch (SQLException e) {
logger.error("Exception ", e);
} finally {
ConnectionUtils.releaseResource(rs, pstmt, conn);
}
return count;
}
/**
* query task record by filter map paging
*
* @param filterMap filterMap
* @param table table
* @return task record list
*/
public static List<TaskRecord> queryAllTaskRecord(Map<String, String> filterMap, String table) {
String sql = String.format("select * from %s", table);
sql += getWhereString(filterMap);
int offset = Integer.parseInt(filterMap.get("offset"));
int pageSize = Integer.parseInt(filterMap.get("pageSize"));
sql += String.format(" order by STARTDATE desc limit %d,%d", offset, pageSize);
List<TaskRecord> recordList = new ArrayList<>();
try {
recordList = getQueryResult(sql);
} catch (Exception e) {
logger.error("Exception ", e);
} |
        return recordList;
}
/**
* convert result set to task record
*
* @param resultSet resultSet
* @return task record
* @throws SQLException if error throws SQLException
*/
private static TaskRecord convertToTaskRecord(ResultSet resultSet) throws SQLException {
TaskRecord taskRecord = new TaskRecord();
taskRecord.setId(resultSet.getInt("ID"));
taskRecord.setProcId(resultSet.getInt("PROC_ID"));
taskRecord.setProcName(resultSet.getString("PROC_NAME"));
taskRecord.setProcDate(resultSet.getString("PROC_DATE"));
taskRecord.setStartTime(DateUtils.stringToDate(resultSet.getString("STARTDATE")));
taskRecord.setEndTime(DateUtils.stringToDate(resultSet.getString("ENDDATE")));
taskRecord.setResult(resultSet.getString("RESULT"));
taskRecord.setDuration(resultSet.getInt("DURATION"));
taskRecord.setNote(resultSet.getString("NOTE"));
taskRecord.setSchema(resultSet.getString("SCHEMA"));
taskRecord.setJobId(resultSet.getString("JOB_ID"));
taskRecord.setSourceTab(resultSet.getString("SOURCE_TAB"));
taskRecord.setSourceRowCount(resultSet.getLong("SOURCE_ROW_COUNT"));
taskRecord.setTargetTab(resultSet.getString("TARGET_TAB"));
taskRecord.setTargetRowCount(resultSet.getLong("TARGET_ROW_COUNT"));
taskRecord.setErrorCode(resultSet.getString("ERROR_CODE"));
return taskRecord;
}
/** |
     * query task list by select sql
*
* @param selectSql select sql
* @return task record list
*/
private static List<TaskRecord> getQueryResult(String selectSql) {
List<TaskRecord> recordList = new ArrayList<>();
Connection conn = null;
PreparedStatement pstmt = null;
ResultSet rs = null;
try {
conn = getConn();
if (conn == null) {
return recordList;
}
pstmt = conn.prepareStatement(selectSql);
rs = pstmt.executeQuery();
while (rs.next()) {
TaskRecord taskRecord = convertToTaskRecord(rs);
recordList.add(taskRecord);
}
} catch (SQLException e) {
logger.error("Exception ", e);
} finally {
ConnectionUtils.releaseResource(rs, pstmt, conn);
}
return recordList;
}
/**
* according to procname and procdate query task record |
     *
* @param procName procName
* @param procDate procDate
* @return task record status
*/
public static TaskRecordStatus getTaskRecordState(String procName, String procDate) {
String sql = String.format("SELECT * FROM eamp_hive_log_hd WHERE PROC_NAME='%s' and PROC_DATE like '%s'"
, procName, procDate + "%");
List<TaskRecord> taskRecordList = getQueryResult(sql);
        // exactly one matching record is expected
        if (CollectionUtils.isEmpty(taskRecordList)) {
            // no task record found
            return TaskRecordStatus.EXCEPTION;
        } else if (taskRecordList.size() > 1) {
            // more than one matching record is ambiguous
            return TaskRecordStatus.EXCEPTION;
} else {
TaskRecord taskRecord = taskRecordList.get(0);
if (taskRecord == null) {
return TaskRecordStatus.EXCEPTION;
}
Long targetRowCount = taskRecord.getTargetRowCount();
if (targetRowCount <= 0) {
return TaskRecordStatus.FAILURE;
} else {
return TaskRecordStatus.SUCCESS;
}
}
}
} |
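The status mapping inside `getTaskRecordState` above can be read as a small pure function: exactly one record with a positive target row count means success, one record with a non-positive count means failure, and everything else (no record, several records) is an exception. Below is a hedged sketch of that mapping with the JDBC access stripped out; the class and method names are made up for illustration:

```java
import java.util.List;

// Mirrors only the decision logic of getTaskRecordState, isolated from the
// database so the mapping can be exercised on its own.
class TaskRecordStatusSketch {
    enum Status { SUCCESS, FAILURE, EXCEPTION }

    static Status classify(List<Long> targetRowCounts) {
        // no record, or an ambiguous number of records, cannot be judged
        if (targetRowCounts == null || targetRowCounts.size() != 1 || targetRowCounts.get(0) == null) {
            return Status.EXCEPTION;
        }
        // a single record: positive target row count is success, otherwise failure
        return targetRowCounts.get(0) > 0 ? Status.SUCCESS : Status.FAILURE;
    }
}
```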