status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 3176
title: [BUG] optimize #3165: get the value of the property "resource.storage.type" and compare it with enumerated types
body:
By reading the ResourcesService code, I found a potential problem.
Enumeration type comparisons are used in two classes: HadoopUtils.java and CommonUtils.java.
I don't think this [submission](https://github.com/apache/incubator-dolphinscheduler/pull/3166) is complete:
all comparisons against ResUploadType need to be optimized.
I'm going to modify this part.
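The hazard the issue describes can be sketched as follows. This is a minimal, self-contained illustration under assumed names (`ResUploadTypeDemo`, `parse`, `storageEnabled` are hypothetical; only `ResUploadType` and the HDFS/S3/NONE constants come from DolphinScheduler), not the actual change made in PR #3178: a bare `valueOf(...)` on the raw property string throws for blank or unexpected values, so scattering that conversion across classes multiplies the failure points, while a single defensive parse helper does not.

```java
public class ResUploadTypeDemo {

    // Mirrors org.apache.dolphinscheduler.common.enums.ResUploadType
    enum ResUploadType { HDFS, S3, NONE }

    // Centralized, defensive parse of the "resource.storage.type" value:
    // a raw ResUploadType.valueOf(raw) would throw IllegalArgumentException
    // for null, blank, or unexpected strings at every comparison site.
    static ResUploadType parse(String raw) {
        if (raw == null || raw.trim().isEmpty()) {
            return ResUploadType.NONE; // safe default instead of an exception
        }
        try {
            return ResUploadType.valueOf(raw.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            return ResUploadType.NONE; // unknown value degrades gracefully
        }
    }

    // Call sites compare against the parsed enum in one place
    static boolean storageEnabled(String raw) {
        return parse(raw) != ResUploadType.NONE;
    }

    public static void main(String[] args) {
        System.out.println(parse("HDFS"));           // HDFS
        System.out.println(parse(" s3 "));           // S3
        System.out.println(parse(null));             // NONE
        System.out.println(storageEnabled("bogus")); // false
    }
}
```

With this shape, HadoopUtils and CommonUtils would both call the one helper rather than each repeating the string-to-enum conversion.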
issue_url: https://github.com/apache/dolphinscheduler/issues/3176
pull_url: https://github.com/apache/dolphinscheduler/pull/3178
before_fix_sha: 1eb8fb6db33af4b644492a9fb0b0348d7de14407
after_fix_sha: 1e7582e910c23a42bb4e92cd64fde4df7cbf6b34
report_datetime: 2020-07-10T06:05:07Z
language: java
commit_datetime: 2020-07-10T07:21:42Z
updated_file: dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java
chunk_content:
String.format("property: %s can not to be empty, please set!", Constants.FS_DEFAULTFS)
);
}
} else {
logger.info("get property:{} -> {}, from core-site.xml hdfs-site.xml ", Constants.FS_DEFAULTFS, defaultFS);
}
if (fs == null) {
if (StringUtils.isNotEmpty(hdfsUser)) {
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(hdfsUser);
ugi.doAs(new PrivilegedExceptionAction<Boolean>() {
@Override
public Boolean run() throws Exception {
fs = FileSystem.get(configuration);
return true;
}
});
} else {
logger.warn("hdfs.root.user is not set value!");
fs = FileSystem.get(configuration);
}
}
} else if (resUploadType == ResUploadType.S3) {
configuration.set(Constants.FS_DEFAULTFS, PropertyUtils.getString(Constants.FS_DEFAULTFS));
configuration.set(Constants.FS_S3A_ENDPOINT, PropertyUtils.getString(Constants.FS_S3A_ENDPOINT));
configuration.set(Constants.FS_S3A_ACCESS_KEY, PropertyUtils.getString(Constants.FS_S3A_ACCESS_KEY));
configuration.set(Constants.FS_S3A_SECRET_KEY, PropertyUtils.getString(Constants.FS_S3A_SECRET_KEY));
fs = FileSystem.get(configuration);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* @return Configuration
*/
public Configuration getConfiguration() {
return configuration;
}
/**
* get application url
*
* @param applicationId application id
* @return url of application
*/
public String getApplicationUrl(String applicationId) throws Exception {
/**
* if rmHaIds contains xx, it signs not use resourcemanager
* otherwise:
* if rmHaIds is empty, single resourcemanager enabled
* if rmHaIds not empty: resourcemanager HA enabled
*/
String appUrl = "";
if (StringUtils.isEmpty(rmHaIds)){
appUrl = appAddress;
yarnEnabled = true;
} else {
appUrl = getAppAddress(appAddress, rmHaIds);
yarnEnabled = true;
logger.info("application url : {}", appUrl);
}
if(StringUtils.isBlank(appUrl)){
throw new Exception("application url is blank");
}
return String.format(appUrl, applicationId);
}
public String getJobHistoryUrl(String applicationId) {
String jobId = applicationId.replace("application", "job");
return String.format(jobHistoryAddress, jobId);
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @return byte[] byte array
* @throws IOException errors
*/
public byte[] catFile(String hdfsFilePath) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return new byte[0];
}
FSDataInputStream fsDataInputStream = fs.open(new Path(hdfsFilePath));
return IOUtils.toByteArray(fsDataInputStream);
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @param skipLineNums skip line numbers
* @param limit read how many lines
* @return content of file
* @throws IOException errors
*/
public List<String> catFile(String hdfsFilePath, int skipLineNums, int limit) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return Collections.emptyList();
}
try (FSDataInputStream in = fs.open(new Path(hdfsFilePath))) {
BufferedReader br = new BufferedReader(new InputStreamReader(in));
Stream<String> stream = br.lines().skip(skipLineNums).limit(limit);
return stream.collect(Collectors.toList());
}
}
/**
* make the given file and all non-existent parents into
* directories. Has the semantics of Unix 'mkdir -p'.
* Existence of the directory hierarchy is not an error.
*
* @param hdfsPath path to create
* @return mkdir result
* @throws IOException errors
*/
public boolean mkdir(String hdfsPath) throws IOException {
return fs.mkdirs(new Path(hdfsPath));
}
/**
* copy files between FileSystems
*
* @param srcPath source hdfs path
* @param dstPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copy(String srcPath, String dstPath, boolean deleteSource, boolean overwrite) throws IOException {
return FileUtil.copy(fs, new Path(srcPath), fs, new Path(dstPath), deleteSource, overwrite, fs.getConf());
}
/**
* the src file is on the local disk. Add it to FS at
* the given dst name.
*
* @param srcFile local file
* @param dstHdfsPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copyLocalToHdfs(String srcFile, String dstHdfsPath, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstHdfsPath);
fs.copyFromLocalFile(deleteSource, overwrite, srcPath, dstPath);
return true;
}
/**
* copy hdfs file to local
*
* @param srcHdfsFilePath source hdfs file path
* @param dstFile destination file
* @param deleteSource delete source
* @param overwrite overwrite
* @return result of copy hdfs file to local
* @throws IOException errors
*/
public boolean copyHdfsToLocal(String srcHdfsFilePath, String dstFile, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcHdfsFilePath);
File dstPath = new File(dstFile);
if (dstPath.exists()) {
if (dstPath.isFile()) {
if (overwrite) {
Files.delete(dstPath.toPath());
}
} else {
logger.error("destination file must be a file");
}
}
if (!dstPath.getParentFile().exists()) {
dstPath.getParentFile().mkdirs();
}
return FileUtil.copy(fs, srcPath, dstPath, deleteSource, fs.getConf());
}
/**
* delete a file
*
* @param hdfsFilePath the path to delete.
* @param recursive if path is a directory and set to
* true, the directory is deleted else throws an exception. In
* case of a file the recursive can be set to either true or false.
* @return true if delete is successful else false.
* @throws IOException errors
*/
public boolean delete(String hdfsFilePath, boolean recursive) throws IOException {
return fs.delete(new Path(hdfsFilePath), recursive);
}
/**
* check if exists
*
* @param hdfsFilePath source file path
* @return result of exists or not
* @throws IOException errors
*/
public boolean exists(String hdfsFilePath) throws IOException {
return fs.exists(new Path(hdfsFilePath));
}
/**
* Gets a list of files in the directory
*
* @param filePath file path
* @return {@link FileStatus} file status
* @throws Exception errors
*/
public FileStatus[] listFileStatus(String filePath) throws Exception {
try {
return fs.listStatus(new Path(filePath));
} catch (IOException e) {
logger.error("Get file list exception", e);
throw new Exception("Get file list exception", e);
}
}
/**
* Renames Path src to Path dst. Can take place on local fs
* or remote DFS.
*
* @param src path to be renamed
* @param dst new path after rename
* @return true if rename is successful
* @throws IOException on failure
*/
public boolean rename(String src, String dst) throws IOException {
return fs.rename(new Path(src), new Path(dst));
}
/**
* hadoop resourcemanager enabled or not
*
* @return result
*/
public boolean isYarnEnabled() {
return yarnEnabled;
}
/**
* get the state of an application
*
* @param applicationId application id
* @return the return may be null or there may be other parse exceptions
*/
public ExecutionStatus getApplicationStatus(String applicationId) throws Exception{
if (StringUtils.isEmpty(applicationId)) {
return null;
}
String result = Constants.FAILED;
String applicationUrl = getApplicationUrl(applicationId);
logger.info("applicationUrl={}", applicationUrl);
String responseContent = HttpUtils.get(applicationUrl);
if (responseContent != null) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
result = jsonObject.path("app").path("finalStatus").asText();
} else {
String jobHistoryUrl = getJobHistoryUrl(applicationId);
logger.info("jobHistoryUrl={}", jobHistoryUrl);
responseContent = HttpUtils.get(jobHistoryUrl);
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
if (!jsonObject.has("job")){
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("job").path("state").asText();
}
switch (result) {
case Constants.ACCEPTED:
return ExecutionStatus.SUBMITTED_SUCCESS;
case Constants.SUCCEEDED:
return ExecutionStatus.SUCCESS;
case Constants.NEW:
case Constants.NEW_SAVING:
case Constants.SUBMITTED:
case Constants.FAILED:
return ExecutionStatus.FAILURE;
case Constants.KILLED:
return ExecutionStatus.KILL;
case Constants.RUNNING:
default:
return ExecutionStatus.RUNNING_EXEUTION;
}
}
/**
* get data hdfs path
*
* @return data hdfs path
*/
public static String getHdfsDataBasePath() {
if ("/".equals(resourceUploadPath)) {
return "";
} else {
return resourceUploadPath;
}
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @param resourceType resource type
* @return hdfs resource dir
*/
public static String getHdfsDir(ResourceType resourceType, String tenantCode) {
String hdfsDir = "";
if (resourceType.equals(ResourceType.FILE)) {
hdfsDir = getHdfsResDir(tenantCode);
} else if (resourceType.equals(ResourceType.UDF)) {
hdfsDir = getHdfsUdfDir(tenantCode);
}
return hdfsDir;
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @return hdfs resource dir
*/
public static String getHdfsResDir(String tenantCode) {
return String.format("%s/resources", getHdfsTenantDir(tenantCode));
}
/**
* hdfs user dir
*
* @param tenantCode tenant code
* @param userId user id
* @return hdfs resource dir
*/
public static String getHdfsUserDir(String tenantCode, int userId) {
return String.format("%s/home/%d", getHdfsTenantDir(tenantCode), userId);
}
/**
* hdfs udf dir
*
* @param tenantCode tenant code
* @return get udf dir on hdfs
*/
public static String getHdfsUdfDir(String tenantCode) {
return String.format("%s/udfs", getHdfsTenantDir(tenantCode));
}
/**
* get hdfs file name
*
* @param resourceType resource type
* @param tenantCode tenant code
* @param fileName file name
* @return hdfs file name
*/
public static String getHdfsFileName(ResourceType resourceType, String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsDir(resourceType, tenantCode), fileName);
}
/**
* get absolute path and name for resource file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for file on hdfs
*/
public static String getHdfsResourceFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsResDir(tenantCode), fileName);
}
/**
* get absolute path and name for udf file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for udf file on hdfs
*/
public static String getHdfsUdfFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsUdfDir(tenantCode), fileName);
}
/**
* @param tenantCode tenant code
* @return file directory of tenants on hdfs
*/
public static String getHdfsTenantDir(String tenantCode) {
return String.format("%s/%s", getHdfsDataBasePath(), tenantCode);
}
/**
* getAppAddress
*
* @param appAddress app address
* @param rmHa resource manager ha
* @return app address
*/
public static String getAppAddress(String appAddress, String rmHa) {
String activeRM = YarnHAAdminUtils.getAcitveRMName(rmHa);
String[] split1 = appAddress.split(Constants.DOUBLE_SLASH);
if (split1.length != 2) {
return null;
}
String start = split1[0] + Constants.DOUBLE_SLASH;
String[] split2 = split1[1].split(Constants.COLON);
if (split2.length != 2) {
return null;
}
String end = Constants.COLON + split2[1];
return start + activeRM + end;
}
@Override
public void close() throws IOException {
if (fs != null) {
try {
fs.close();
} catch (IOException e) {
logger.error("Close HadoopUtils instance failed", e);
throw new IOException("Close HadoopUtils instance failed", e);
}
}
}
/**
* yarn ha admin utils
*/
private static final class YarnHAAdminUtils extends RMAdminCLI {
/**
* get active resourcemanager
*
* @param rmIds
* @return
*/
public static String getAcitveRMName(String rmIds) {
String[] rmIdArr = rmIds.split(Constants.COMMA);
int activeResourceManagerPort = PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088);
String yarnUrl = "http://%s:" + activeResourceManagerPort + "/ws/v1/cluster/info";
String state = null;
try {
/**
* send http get request to rm1
*/
state = getRMState(String.format(yarnUrl, rmIdArr[0]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[0];
} else if (Constants.HADOOP_RM_STATE_STANDBY.equals(state)) {
state = getRMState(String.format(yarnUrl, rmIdArr[1]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[1];
}
} else {
return null;
}
} catch (Exception e) {
state = getRMState(String.format(yarnUrl, rmIdArr[1]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[0];
}
}
return null;
}
/**
* get ResourceManager state
*
* @param url
* @return
*/
public static String getRMState(String url) {
String retStr = HttpUtils.get(url);
if (StringUtils.isEmpty(retStr)) {
return null;
}
ObjectNode jsonObject = JSONUtils.parseObject(retStr);
if (!jsonObject.has("clusterInfo")){
return null;
}
return jsonObject.get("clusterInfo").path("haState").asText();
}
}
}
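The HadoopUtils fragment above probes each ResourceManager id and returns the one reporting ACTIVE. A self-contained sketch of that selection logic, with the HTTP state lookup stubbed out as a function (names here are illustrative, not the project's API):

```java
import java.util.function.Function;

public class ActiveRm {
    // pick the rm id whose reported state is ACTIVE; stateOf stands in for the
    // REST call the real code makes against each ResourceManager
    public static String activeRmId(String[] rmIds, Function<String, String> stateOf) {
        for (String id : rmIds) {
            if ("ACTIVE".equals(stateOf.apply(id))) {
                return id;
            }
        }
        return null;
    }
}
```

In the real class the lookup hits the ResourceManager's cluster-info REST endpoint and reads `clusterInfo.haState` from the JSON response.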
| https://github.com/apache/dolphinscheduler/issues/3176 | https://github.com/apache/dolphinscheduler/pull/3178 | 1eb8fb6db33af4b644492a9fb0b0348d7de14407 | 1e7582e910c23a42bb4e92cd64fde4df7cbf6b34 | "2020-07-10T06:05:07Z" | java | "2020-07-10T07:21:42Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
* |
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ResUploadType;
import org.apache.commons.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import static org.apache.dolphinscheduler.common.Constants.COMMON_PROPERTIES_PATH;
/**
* property utils
* single instance
*/
public class PropertyUtils {
/**
* logger
*/
private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
private static final Properties properties = new Properties(); |
private PropertyUtils() {
throw new IllegalStateException("PropertyUtils class");
}
static {
String[] propertyFiles = new String[]{COMMON_PROPERTIES_PATH};
for (String fileName : propertyFiles) {
InputStream fis = null;
try {
fis = PropertyUtils.class.getResourceAsStream(fileName);
properties.load(fis);
} catch (IOException e) {
logger.error(e.getMessage(), e);
if (fis != null) {
IOUtils.closeQuietly(fis);
}
System.exit(1);
} finally {
IOUtils.closeQuietly(fis);
}
}
}
/**
* @return whether resource upload is enabled (HDFS or S3)
*/
public static Boolean getResUploadStartupState(){
// default to NONE so a missing resource.storage.type cannot cause a NullPointerException (see issue #3176)
String resUploadStartupType = PropertyUtils.getString(Constants.RESOURCE_STORAGE_TYPE, ResUploadType.NONE.name()).toUpperCase();
ResUploadType resUploadType = ResUploadType.valueOf(resUploadStartupType);
return resUploadType == ResUploadType.HDFS || resUploadType == ResUploadType.S3;
}
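Issue 3176's hazard is concrete: `ResUploadType.valueOf` throws on unknown input, and calling `toUpperCase()` on a missing property throws a NullPointerException. A small, hypothetical demonstration of the guarded enum comparison (the nested enum stands in for `org.apache.dolphinscheduler.common.enums.ResUploadType`):

```java
public class UploadState {
    // hypothetical stand-in for org.apache.dolphinscheduler.common.enums.ResUploadType
    enum ResUploadType { HDFS, S3, NONE }

    public static boolean uploadEnabled(String configured) {
        ResUploadType type;
        try {
            type = configured == null
                    ? ResUploadType.NONE
                    : ResUploadType.valueOf(configured.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            type = ResUploadType.NONE; // unknown value: treat upload as disabled
        }
        return type == ResUploadType.HDFS || type == ResUploadType.S3;
    }
}
```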
/**
* get property value
*
* @param key property name
* @return property value
*/
public static String getString(String key) {
return properties.getProperty(key.trim());
}
/**
* get property value
*
* @param key property name
* @param defaultVal default value
* @return property value
*/
public static String getString(String key, String defaultVal) {
String val = properties.getProperty(key.trim());
return val == null ? defaultVal : val;
}
/**
* get property value
*
* @param key property name
* @return property int value, or -1 if the key is missing or not a number
*/
public static int getInt(String key) {
return getInt(key, -1);
}
/** |
*
* @param key key
* @param defaultValue default value
* @return property value
*/
public static int getInt(String key, int defaultValue) {
String value = getString(key);
if (value == null) {
return defaultValue;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException e) {
logger.info(e.getMessage(),e);
}
return defaultValue;
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static boolean getBoolean(String key) {
String value = properties.getProperty(key.trim());
if(null != value){
return Boolean.parseBoolean(value);
}
return false;
} |
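The getters above all follow the same null-tolerant pattern: look the raw value up, then fall back to a default on absence or parse failure. A compact, hypothetical distillation of that pattern:

```java
public class Defaulted {
    // boolean with fallback, mirroring getBoolean(key, defaultValue)
    public static boolean boolOr(String raw, boolean dflt) {
        return raw == null ? dflt : Boolean.parseBoolean(raw.trim());
    }

    // int with fallback, mirroring getInt(key, defaultValue)
    public static int intOr(String raw, int dflt) {
        if (raw == null) {
            return dflt;
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            return dflt;
        }
    }
}
```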
/**
* get property value
*
* @param key property name
* @param defaultValue default value
* @return property value
*/
public static Boolean getBoolean(String key, boolean defaultValue) {
String value = properties.getProperty(key.trim());
if(null != value){
return Boolean.parseBoolean(value);
}
return defaultValue;
}
/**
* get property long value
* @param key key
* @param defaultVal default value
* @return property value
*/
public static long getLong(String key, long defaultVal) {
String val = getString(key);
return val == null ? defaultVal : Long.parseLong(val);
}
/**
*
* @param key key
* @return property value
*/
public static long getLong(String key) { |
return getLong(key,-1);
}
/**
*
* @param key key
* @param defaultVal default value
* @return property value
*/
public static double getDouble(String key, double defaultVal) {
String val = getString(key);
return val == null ? defaultVal : Double.parseDouble(val);
}
/**
* get array
* @param key property name
* @param splitStr separator
* @return property value through array
*/
public static String[] getArray(String key, String splitStr) {
String value = getString(key);
if (value == null) {
return new String[0];
}
try {
return value.split(splitStr);
} catch (NumberFormatException e) {
logger.info(e.getMessage(), e);
}
return new String[0]; |
}
/**
*
* @param key key
* @param type type
* @param defaultValue default value
* @param <T> T
* @return get enum value
*/
public static <T extends Enum<T>> T getEnum(String key, Class<T> type,
T defaultValue) {
String val = getString(key);
return val == null ? defaultValue : Enum.valueOf(type, val);
}
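A usage sketch of the generic enum getter above (the `Level` enum is hypothetical, introduced only for illustration):

```java
public class EnumProp {
    enum Level { LOW, HIGH } // hypothetical enum for illustration

    // mirrors getEnum above: null falls back to the default, otherwise Enum.valueOf
    public static <T extends Enum<T>> T enumOr(String val, Class<T> type, T dflt) {
        return val == null ? dflt : Enum.valueOf(type, val);
    }
}
```

Note that `Enum.valueOf` still throws `IllegalArgumentException` for unknown (non-null) values, which is exactly the pitfall issue 3176 warns about.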
/**
* get all properties with specified prefix, like: fs.
* @param prefix prefix to search
* @return all properties with specified prefix
*/
public static Map<String, String> getPrefixedProperties(String prefix) {
Map<String, String> matchedProperties = new HashMap<>();
for (String propName : properties.stringPropertyNames()) {
if (propName.startsWith(prefix)) {
matchedProperties.put(propName, properties.getProperty(propName));
}
}
return matchedProperties;
}
} |
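`getPrefixedProperties` above filters properties by name prefix — e.g. collecting every `fs.` setting for Hadoop. A self-contained sketch of the same filtering over a plain `java.util.Properties`:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class PrefixProps {
    // collect all properties whose name starts with the given prefix,
    // as getPrefixedProperties does above
    public static Map<String, String> byPrefix(Properties props, String prefix) {
        Map<String, String> matched = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith(prefix)) {
                matched.put(name, props.getProperty(name));
            }
        }
        return matched;
    }
}
```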
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 3,181 | [BUG] get http code 400 bad request with AWS S3 as resource storage type | *For better global communication, please give priority to using English description, thx! *
**Describe the bug**
When you use AWS S3 as the resource storage backend, you will get an error like:
``` log
Status Code: 400, AWS Service: Amazon S3, AWS Request ID: xxxxxxx, AWS Error Code: null, AWS Error Message: Bad Request
```
**To Reproduce**
Steps to reproduce the behavior, for example:
just set `resource.storage.type=S3` in common.properties and also keep other configuration correct.
**Expected behavior**
the resource centre works fine.
**Screenshots**
when you try to upload a file at the resource centre, you will get an error.
**Which version of Dolphin Scheduler:**
-[1.3.1-release]
**Additional context**
it is caused by the version of the AWS S3 encryption (request signing) method.
**Requirement or improvement**
I will make a PR for this later.
| https://github.com/apache/dolphinscheduler/issues/3181 | https://github.com/apache/dolphinscheduler/pull/3182 | 1e7582e910c23a42bb4e92cd64fde4df7cbf6b34 | dcdd7dedd06454ed468eae86881b75177261c9e2 | "2020-07-10T11:30:13Z" | java | "2020-07-11T00:56:38Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import java.util.regex.Pattern;
/**
* Constants
*/
public final class Constants { |
private Constants() {
throw new IllegalStateException("Constants class");
}
/**
* quartz config
*/
public static final String ORG_QUARTZ_JOBSTORE_DRIVERDELEGATECLASS = "org.quartz.jobStore.driverDelegateClass";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCENAME = "org.quartz.scheduler.instanceName";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCEID = "org.quartz.scheduler.instanceId";
public static final String ORG_QUARTZ_SCHEDULER_MAKESCHEDULERTHREADDAEMON = "org.quartz.scheduler.makeSchedulerThreadDaemon";
public static final String ORG_QUARTZ_JOBSTORE_USEPROPERTIES = "org.quartz.jobStore.useProperties";
public static final String ORG_QUARTZ_THREADPOOL_CLASS = "org.quartz.threadPool.class";
public static final String ORG_QUARTZ_THREADPOOL_THREADCOUNT = "org.quartz.threadPool.threadCount";
public static final String ORG_QUARTZ_THREADPOOL_MAKETHREADSDAEMONS = "org.quartz.threadPool.makeThreadsDaemons";
public static final String ORG_QUARTZ_THREADPOOL_THREADPRIORITY = "org.quartz.threadPool.threadPriority";
public static final String ORG_QUARTZ_JOBSTORE_CLASS = "org.quartz.jobStore.class";
public static final String ORG_QUARTZ_JOBSTORE_TABLEPREFIX = "org.quartz.jobStore.tablePrefix";
public static final String ORG_QUARTZ_JOBSTORE_ISCLUSTERED = "org.quartz.jobStore.isClustered";
public static final String ORG_QUARTZ_JOBSTORE_MISFIRETHRESHOLD = "org.quartz.jobStore.misfireThreshold";
public static final String ORG_QUARTZ_JOBSTORE_CLUSTERCHECKININTERVAL = "org.quartz.jobStore.clusterCheckinInterval"; |
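A hedged sketch of how key constants like these might be assembled into the `java.util.Properties` object Quartz consumes (illustrative only; values mirror the defaults declared further down in this class):

```java
import java.util.Properties;

public class QuartzConf {
    // pair the org.quartz.* key constants with their default values
    public static Properties defaults() {
        Properties conf = new Properties();
        conf.setProperty("org.quartz.scheduler.instanceName", "DolphinScheduler");
        conf.setProperty("org.quartz.scheduler.instanceId", "AUTO");
        conf.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");
        conf.setProperty("org.quartz.jobStore.misfireThreshold", "60000");
        conf.setProperty("org.quartz.jobStore.clusterCheckinInterval", "5000");
        conf.setProperty("org.quartz.threadPool.threadCount", "25");
        return conf;
    }
}
```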
public static final String ORG_QUARTZ_JOBSTORE_ACQUIRETRIGGERSWITHINLOCK = "org.quartz.jobStore.acquireTriggersWithinLock";
public static final String ORG_QUARTZ_JOBSTORE_DATASOURCE = "org.quartz.jobStore.dataSource";
public static final String ORG_QUARTZ_DATASOURCE_MYDS_CONNECTIONPROVIDER_CLASS = "org.quartz.dataSource.myDs.connectionProvider.class";
/**
* quartz config default value
*/
public static final String QUARTZ_TABLE_PREFIX = "QRTZ_";
public static final String QUARTZ_MISFIRETHRESHOLD = "60000";
public static final String QUARTZ_CLUSTERCHECKININTERVAL = "5000";
public static final String QUARTZ_DATASOURCE = "myDs";
public static final String QUARTZ_THREADCOUNT = "25";
public static final String QUARTZ_THREADPRIORITY = "5";
public static final String QUARTZ_INSTANCENAME = "DolphinScheduler";
public static final String QUARTZ_INSTANCEID = "AUTO";
public static final String QUARTZ_ACQUIRETRIGGERSWITHINLOCK = "true";
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* fs.defaultFS
*/
public static final String FS_DEFAULTFS = "fs.defaultFS";
/**
* fs s3a endpoint
*/
public static final String FS_S3A_ENDPOINT = "fs.s3a.endpoint";
/**
* fs s3a access key
*/ |
public static final String FS_S3A_ACCESS_KEY = "fs.s3a.access.key";
/**
* fs s3a secret key
*/
public static final String FS_S3A_SECRET_KEY = "fs.s3a.secret.key";
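Issue 3181 above ties the S3 400 Bad Request to the request-signing version: several AWS regions accept only Signature Version 4. With the AWS SDK for Java v1 this can be switched on through a documented system property before the s3a filesystem is created; the wiring below is a hedged sketch, not the project's actual fix.

```java
public class S3V4Signing {
    // global Signature-Version-4 switch of the AWS SDK for Java v1
    // (SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY)
    public static final String ENABLE_V4 = "com.amazonaws.services.s3.enableV4";

    // must run before the S3 client / s3a filesystem is initialized
    public static void enableV4Signing() {
        System.setProperty(ENABLE_V4, "true");
    }
}
```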
/**
* yarn.resourcemanager.ha.rm.ids
*/
public static final String YARN_RESOURCEMANAGER_HA_RM_IDS = "yarn.resourcemanager.ha.rm.ids";
public static final String YARN_RESOURCEMANAGER_HA_XX = "xx";
/**
* yarn.application.status.address
*/
public static final String YARN_APPLICATION_STATUS_ADDRESS = "yarn.application.status.address";
/**
* yarn.job.history.status.address
*/
public static final String YARN_JOB_HISTORY_STATUS_ADDRESS = "yarn.job.history.status.address";
/**
* hdfs configuration
* hdfs.root.user
*/
public static final String HDFS_ROOT_USER = "hdfs.root.user";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
/**
* data basedir path |
*/
public static final String DATA_BASEDIR_PATH = "data.basedir.path";
/**
* dolphinscheduler.env.path
*/
public static final String DOLPHINSCHEDULER_ENV_PATH = "dolphinscheduler.env.path";
/**
* environment properties default path
*/
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* python home
*/
public static final String PYTHON_HOME="PYTHON_HOME";
/**
* resource.view.suffixs
*/
public static final String RESOURCE_VIEW_SUFFIXS = "resource.view.suffixs";
public static final String RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE = "txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties";
/**
* development.state
*/
public static final String DEVELOPMENT_STATE = "development.state";
public static final String DEVELOPMENT_STATE_DEFAULT_VALUE = "true";
/**
* string true
*/
public static final String STRING_TRUE = "true";
/**
* string false |
*/
public static final String STRING_FALSE = "false";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* MasterServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_MASTERS = "/nodes/master";
/**
* WorkerServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_WORKERS = "/nodes/worker";
/**
* all servers directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_DEAD_SERVERS = "/dead-servers";
/**
* MasterServer lock directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_MASTERS = "/lock/masters";
/**
* MasterServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS = "/lock/failover/masters";
/**
* WorkerServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS = "/lock/failover/workers"; |
/**
* MasterServer startup failover running and fault tolerance process
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS = "/lock/failover/startup-masters";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* slash /
*/
public static final String SLASH = "/";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* SEMICOLON ;
*/
public static final String SEMICOLON = ";";
/**
* EQUAL SIGN |
*/
public static final String EQUAL_SIGN = "=";
/**
* AT SIGN
*/
public static final String AT_SIGN = "@";
public static final String WORKER_MAX_CPULOAD_AVG = "worker.max.cpuload.avg";
public static final String WORKER_RESERVED_MEMORY = "worker.reserved.memory";
public static final String MASTER_MAX_CPULOAD_AVG = "master.max.cpuload.avg";
public static final String MASTER_RESERVED_MEMORY = "master.reserved.memory";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
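The two timestamp patterns above follow `java.text.SimpleDateFormat` syntax. A minimal sketch of how they relate — the class name and sample timestamp are illustrative, not part of Constants.java:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatSketch {
    // Pattern strings copied from the constants above
    static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
    static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";

    public static void main(String[] args) throws Exception {
        // Parse with the human-readable pattern, re-emit with the compact one
        Date d = new SimpleDateFormat(YYYY_MM_DD_HH_MM_SS).parse("2020-07-10 11:30:13");
        String compact = new SimpleDateFormat(YYYYMMDDHHMMSS).format(d);
        System.out.println(compact); // 20200710113013
    }
}
```

Both calls use the JVM default time zone, so the round trip is lossless.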
/**
* http connect time out
*/
public static final int HTTP_CONNECT_TIMEOUT = 60 * 1000;
/**
* http connect request time out
*/
public static final int HTTP_CONNECTION_REQUEST_TIMEOUT = 60 * 1000;
/**
* httpclient socket time out
*/
public static final int SOCKET_TIMEOUT = 60 * 1000;
* http header
*/
public static final String HTTP_HEADER_UNKNOWN = "unKnown";
/**
* http X-Forwarded-For
*/
public static final String HTTP_X_FORWARDED_FOR = "X-Forwarded-For";
/**
* http X-Real-IP
*/
public static final String HTTP_X_REAL_IP = "X-Real-IP";
/**
* UTF-8
*/
public static final String UTF_8 = "UTF-8";
/**
* user name regex
*/
public static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
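The username pattern above accepts 3–39 characters drawn from letters, digits, dot, underscore and hyphen. A small check, with the pattern string copied from the constant (the class name is illustrative):

```java
import java.util.regex.Pattern;

public class RegexSketch {
    // Pattern string copied from REGEX_USER_NAME above
    static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");

    public static void main(String[] args) {
        System.out.println(REGEX_USER_NAME.matcher("dolphin_user-01").matches()); // allowed chars, length 15
        System.out.println(REGEX_USER_NAME.matcher("ab").matches());              // shorter than 3 chars
    }
}
```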
/**
* read permission
*/
public static final int READ_PERMISSION = 2 * 1;
/**
* write permission
*/
public static final int WRITE_PERMISSION = 2 * 2;
/**
* execute permission
*/
public static final int EXECUTE_PERMISSION = 1;
/**
* default admin permission
*/
public static final int DEFAULT_ADMIN_PERMISSION = 7;
/**
* all permissions
*/
public static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;
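The permission constants above form a small bitmask: EXECUTE is bit 0, READ is bit 1, WRITE is bit 2, so `ALL_PERMISSIONS` is 7 — the same value as `DEFAULT_ADMIN_PERMISSION`. A sketch of how such a mask is typically tested; the helper and class name are illustrative, not part of Constants.java:

```java
public class PermissionSketch {
    // Values copied from the constants above
    static final int READ_PERMISSION = 2 * 1;   // bit 1 -> 2
    static final int WRITE_PERMISSION = 2 * 2;  // bit 2 -> 4
    static final int EXECUTE_PERMISSION = 1;    // bit 0 -> 1
    static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;

    // A permission is granted when its bit is set in the mask
    static boolean canWrite(int perm) {
        return (perm & WRITE_PERMISSION) != 0;
    }

    public static void main(String[] args) {
        System.out.println(ALL_PERMISSIONS);           // 2 | 4 | 1
        System.out.println(canWrite(READ_PERMISSION)); // read-only mask
        System.out.println(canWrite(ALL_PERMISSIONS)); // full mask
    }
}
```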
/**
* max task timeout
*/
public static final int MAX_TASK_TIMEOUT = 24 * 3600;
/**
* master cpu load
*/
public static final int DEFAULT_MASTER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
/**
* master reserved memory
*/
public static final double DEFAULT_MASTER_RESERVED_MEMORY = OSUtils.totalMemorySize() / 10;
/**
* worker cpu load
*/
public static final int DEFAULT_WORKER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
* worker reserved memory
*/
public static final double DEFAULT_WORKER_RESERVED_MEMORY = OSUtils.totalMemorySize() / 10;
/**
* default log cache rows num,output when reach the number
*/
public static final int DEFAULT_LOG_ROWS_NUM = 4 * 16;
/**
* log flush interval?output when reach the interval
*/
public static final int DEFAULT_LOG_FLUSH_INTERVAL = 1000;
/**
* time unit secong to minutes
*/
public static final int SEC_2_MINUTES_TIME_UNIT = 60;
/***
*
* rpc port
*/
public static final int RPC_PORT = 50051;
/**
* forbid running task
*/
public static final String FLOWNODE_RUN_FLAG_FORBIDDEN = "FORBIDDEN";
/**
* datasource configuration path
*/
public static final String DATASOURCE_PROPERTIES = "/datasource.properties";
public static final String TASK_RECORD_URL = "task.record.datasource.url";
public static final String TASK_RECORD_FLAG = "task.record.flag";
public static final String TASK_RECORD_USER = "task.record.datasource.username";
public static final String TASK_RECORD_PWD = "task.record.datasource.password";
public static final String DEFAULT = "Default";
public static final String USER = "user";
public static final String PASSWORD = "password";
public static final String XXXXXX = "******";
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
public static final String TASK_RECORD_TABLE_HIVE_LOG = "eamp_hive_log_hd";
public static final String TASK_RECORD_TABLE_HISTORY_HIVE_LOG = "eamp_hive_hist_log_hd";
/**
* command parameter keys
*/
public static final String CMDPARAM_RECOVER_PROCESS_ID_STRING = "ProcessInstanceId";
public static final String CMDPARAM_RECOVERY_START_NODE_STRING = "StartNodeIdList";
public static final String CMDPARAM_RECOVERY_WAITTING_THREAD = "WaittingThreadInstanceId";
public static final String CMDPARAM_SUB_PROCESS = "processInstanceId";
public static final String CMDPARAM_EMPTY_SUB_PROCESS = "0";
public static final String CMDPARAM_SUB_PROCESS_PARENT_INSTANCE_ID = "parentProcessInstanceId";
public static final String CMDPARAM_SUB_PROCESS_DEFINE_ID = "processDefinitionId";
public static final String CMDPARAM_START_NODE_NAMES = "StartNodeNameList";
/**
* complement data start date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_START_DATE = "complementStartDate";
/**
* complement data end date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_END_DATE = "complementEndDate";
/**
* hadoop configuration
*/
public static final String HADOOP_RM_STATE_ACTIVE = "ACTIVE";
public static final String HADOOP_RM_STATE_STANDBY = "STANDBY";
public static final String HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT = "resource.manager.httpaddress.port";
/**
* data source config
*/
public static final String SPRING_DATASOURCE_DRIVER_CLASS_NAME = "spring.datasource.driver-class-name";
public static final String SPRING_DATASOURCE_URL = "spring.datasource.url";
public static final String SPRING_DATASOURCE_USERNAME = "spring.datasource.username";
public static final String SPRING_DATASOURCE_PASSWORD = "spring.datasource.password";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY_TIMEOUT = "spring.datasource.validationQueryTimeout";
public static final String SPRING_DATASOURCE_INITIAL_SIZE = "spring.datasource.initialSize";
public static final String SPRING_DATASOURCE_MIN_IDLE = "spring.datasource.minIdle";
public static final String SPRING_DATASOURCE_MAX_ACTIVE = "spring.datasource.maxActive";
public static final String SPRING_DATASOURCE_MAX_WAIT = "spring.datasource.maxWait";
public static final String SPRING_DATASOURCE_TIME_BETWEEN_EVICTION_RUNS_MILLIS = "spring.datasource.timeBetweenEvictionRunsMillis";
public static final String SPRING_DATASOURCE_TIME_BETWEEN_CONNECT_ERROR_MILLIS = "spring.datasource.timeBetweenConnectErrorMillis";
public static final String SPRING_DATASOURCE_MIN_EVICTABLE_IDLE_TIME_MILLIS = "spring.datasource.minEvictableIdleTimeMillis";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY = "spring.datasource.validationQuery";
public static final String SPRING_DATASOURCE_TEST_WHILE_IDLE = "spring.datasource.testWhileIdle";
public static final String SPRING_DATASOURCE_TEST_ON_BORROW = "spring.datasource.testOnBorrow";
public static final String SPRING_DATASOURCE_TEST_ON_RETURN = "spring.datasource.testOnReturn";
public static final String SPRING_DATASOURCE_POOL_PREPARED_STATEMENTS = "spring.datasource.poolPreparedStatements";
public static final String SPRING_DATASOURCE_DEFAULT_AUTO_COMMIT = "spring.datasource.defaultAutoCommit";
public static final String SPRING_DATASOURCE_KEEP_ALIVE = "spring.datasource.keepAlive";
public static final String SPRING_DATASOURCE_MAX_POOL_PREPARED_STATEMENT_PER_CONNECTION_SIZE = "spring.datasource.maxPoolPreparedStatementPerConnectionSize";
public static final String DEVELOPMENT = "development";
public static final String QUARTZ_PROPERTIES_PATH = "quartz.properties";
/**
* sleep time
*/
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* heartbeat for zk info length
*/
public static final int HEARTBEAT_FOR_ZOOKEEPER_INFO_LENGTH = 10;
/**
* hadoop params constant
*/
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D parameter
*/
public static final String D = "-D";
/**
* -D mapreduce.job.queuename=queuename
*/
public static final String MR_QUEUE = "mapreduce.job.queuename";
/**
* spark params constant
*/
public static final String MASTER = "--master";
public static final String DEPLOY_MODE = "--deploy-mode";
/**
* --class CLASS_NAME
*/
public static final String MAIN_CLASS = "--class";
/**
* --driver-cores NUM
*/
public static final String DRIVER_CORES = "--driver-cores";
/**
* --driver-memory MEM
*/
public static final String DRIVER_MEMORY = "--driver-memory";
/**
* --num-executors NUM
*/
public static final String NUM_EXECUTORS = "--num-executors";
/**
* --executor-cores NUM
*/
public static final String EXECUTOR_CORES = "--executor-cores";
/**
* --executor-memory MEM
*/
public static final String EXECUTOR_MEMORY = "--executor-memory";
/**
* --queue QUEUE
*/
public static final String SPARK_QUEUE = "--queue";
/**
* --queue --qu
*/
public static final String FLINK_QUEUE = "--qu";
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
/**
* exit code kill
*/
public static final int EXIT_CODE_KILL = 137;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/
public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prefix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prefix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* processId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
public static final String PID = OSUtils.isWindows() ? "handle" : "pid";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String TASK_LIST = "taskList";
public static final String RWXR_XR_X = "rwxr-xr-x";
/**
* master/worker server use for zk
*/
public static final String MASTER_PREFIX = "master";
public static final String WORKER_PREFIX = "worker";
public static final String DELETE_ZK_OP = "delete";
public static final String ADD_ZK_OP = "add";
public static final String ALIAS = "alias";
public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final String DEPENDENT_ALL = "ALL";
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* default worker group id
*/
public static final int DEFAULT_WORKER_ID = -1;
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* hive conf
*/
public static final String HIVE_CONF = "hiveconf:";
public static final String FLINK_YARN_CLUSTER = "yarn-cluster";
public static final String FLINK_RUN_MODE = "-m";
public static final String FLINK_YARN_SLOT = "-ys";
public static final String FLINK_APP_NAME = "-ynm";
public static final String FLINK_TASK_MANAGE = "-yn";
public static final String FLINK_JOB_MANAGE_MEM = "-yjm";
public static final String FLINK_TASK_MANAGE_MEM = "-ytm";
public static final String FLINK_DETACH = "-d";
public static final String FLINK_MAIN_CLASS = "-c";
public static final int[] NOT_TERMINATED_STATES = new int[]{
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXEUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITTING_THREAD.ordinal(),
ExecutionStatus.WAITTING_DEPEND.ordinal()
};
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/
public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/**
 *
*/
public static final String DATA_LIST = "data";
public static final String TOTAL_LIST = "totalList";
public static final String CURRENT_PAGE = "currentPage";
public static final String TOTAL_PAGE = "totalPage";
public static final String TOTAL = "total";
/**
* session user
*/
public static final String SESSION_USER = "session.user";
public static final String SESSION_ID = "sessionId";
public static final String PASSWORD_DEFAULT = "******";
/**
* driver
*/
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
/**
* database type
*/
public static final String MYSQL = "MYSQL";
public static final String POSTGRESQL = "POSTGRESQL";
public static final String HIVE = "HIVE";
public static final String SPARK = "SPARK";
public static final String CLICKHOUSE = "CLICKHOUSE";
public static final String ORACLE = "ORACLE";
public static final String SQLSERVER = "SQLSERVER";
public static final String DB2 = "DB2";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String JDBC_URL = "jdbcUrl";
public static final String PRINCIPAL = "principal";
public static final String OTHER = "other";
public static final String ORACLE_DB_CONNECT_TYPE = "connectType";
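The driver-class and URL-prefix constants above are the building blocks for datasource connection strings. A minimal, hedged sketch of how such a URL can be assembled (the `buildUrl` helper and its concatenation scheme are illustrative assumptions; the real datasource classes are not part of this chunk, and only the two inlined constant values are taken from the listing above):

```java
public class JdbcUrlSketch {
    // Local copies of two of the prefix constants above (same values).
    static final String JDBC_MYSQL = "jdbc:mysql://";
    static final String JDBC_POSTGRESQL = "jdbc:postgresql://";

    // Hypothetical helper: joins a driver-specific prefix, a host:port
    // address and a database name into a jdbcUrl-style string.
    static String buildUrl(String prefix, String address, String database) {
        return prefix + address + "/" + database;
    }

    public static void main(String[] args) {
        // prints: jdbc:mysql://localhost:3306/dolphinscheduler
        System.out.println(buildUrl(JDBC_MYSQL, "localhost:3306", "dolphinscheduler"));
    }
}
```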
/**
* session timeout
*/
public static final int SESSION_TIME_OUT = 7200;
public static final int MAX_FILE_SIZE = 1024 * 1024 * 1024;
public static final String UDF = "UDF";
public static final String CLASS = "class";
public static final String RECEIVERS = "receivers";
public static final String RECEIVERS_CC = "receiversCc";
/**
* dataSource sensitive param
*/
public static final String DATASOURCE_PASSWORD_REGEX = "(?<=(\"password\":\")).*?(?=(\"))";
/**
* default worker group
*/
public static final String DEFAULT_WORKER_GROUP = "default";
public static final Integer TASK_INFO_LENGTH = 5;
/**
* new
* schedule time
*/
public static final String PARAMETER_SHECDULE_TIME = "schedule.time";
/**
* authorize writable perm
*/
public static final int AUTHORIZE_WRITABLE_PERM=7;
/**
* authorize readable perm
*/
public static final int AUTHORIZE_READABLE_PERM=4;
/**
* plugin configurations
*/
public static final String PLUGIN_JAR_SUFFIX = ".jar";
public static final int NORAML_NODE_STATUS = 0;
public static final int ABNORMAL_NODE_STATUS = 1;
}
dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import org.apache.commons.io.IOUtils;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.ResUploadType;
import org.apache.dolphinscheduler.common.enums.ResourceType;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.cli.RMAdminCLI;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.*;
import java.nio.file.Files;
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_UPLOAD_PATH;
/**
* hadoop utils
* single instance
*/
public class HadoopUtils implements Closeable {
private static final Logger logger = LoggerFactory.getLogger(HadoopUtils.class);
private static String hdfsUser = PropertyUtils.getString(Constants.HDFS_ROOT_USER);
public static final String resourceUploadPath = PropertyUtils.getString(RESOURCE_UPLOAD_PATH, "/dolphinscheduler");
public static final String rmHaIds = PropertyUtils.getString(Constants.YARN_RESOURCEMANAGER_HA_RM_IDS);
public static final String appAddress = PropertyUtils.getString(Constants.YARN_APPLICATION_STATUS_ADDRESS);
public static final String jobHistoryAddress = PropertyUtils.getString(Constants.YARN_JOB_HISTORY_STATUS_ADDRESS);
private static final String HADOOP_UTILS_KEY = "HADOOP_UTILS_KEY";
private static final LoadingCache<String, HadoopUtils> cache = CacheBuilder
.newBuilder()
.expireAfterWrite(PropertyUtils.getInt(Constants.KERBEROS_EXPIRE_TIME, 2), TimeUnit.HOURS)
.build(new CacheLoader<String, HadoopUtils>() {
@Override
public HadoopUtils load(String key) throws Exception {
return new HadoopUtils();
}
});
private static volatile boolean yarnEnabled = false;
private Configuration configuration;
private FileSystem fs;
private HadoopUtils() {
init();
initHdfsPath();
}
public static HadoopUtils getInstance() {
return cache.getUnchecked(HADOOP_UTILS_KEY);
}
/**
* init dolphinscheduler root path in hdfs
*/
private void initHdfsPath() {
Path path = new Path(resourceUploadPath);
try {
if (!fs.exists(path)) {
fs.mkdirs(path);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* init hadoop configuration
*/
private void init() {
try {
configuration = new Configuration();
String resourceStorageType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
ResUploadType resUploadType = ResUploadType.valueOf(resourceStorageType);
if (resUploadType == ResUploadType.HDFS) {
if (PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false)) {
System.setProperty(Constants.JAVA_SECURITY_KRB5_CONF,
PropertyUtils.getString(Constants.JAVA_SECURITY_KRB5_CONF_PATH));
configuration.set(Constants.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
hdfsUser = "";
UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab(PropertyUtils.getString(Constants.LOGIN_USER_KEY_TAB_USERNAME),
PropertyUtils.getString(Constants.LOGIN_USER_KEY_TAB_PATH));
}
String defaultFS = configuration.get(Constants.FS_DEFAULTFS);
if (defaultFS.startsWith("file")) {
String defaultFSProp = PropertyUtils.getString(Constants.FS_DEFAULTFS);
if (StringUtils.isNotBlank(defaultFSProp)) {
Map<String, String> fsRelatedProps = PropertyUtils.getPrefixedProperties("fs.");
configuration.set(Constants.FS_DEFAULTFS, defaultFSProp);
fsRelatedProps.forEach((key, value) -> configuration.set(key, value));
} else {
logger.error("property:{} can not to be empty, please set!", Constants.FS_DEFAULTFS);
throw new RuntimeException(
String.format("property: %s can not to be empty, please set!", Constants.FS_DEFAULTFS)
);
}
} else {
logger.info("get property:{} -> {}, from core-site.xml hdfs-site.xml ", Constants.FS_DEFAULTFS, defaultFS);
}
if (fs == null) {
if (StringUtils.isNotEmpty(hdfsUser)) {
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(hdfsUser);
ugi.doAs(new PrivilegedExceptionAction<Boolean>() {
@Override
public Boolean run() throws Exception {
fs = FileSystem.get(configuration);
return true;
}
});
} else {
logger.warn("hdfs.root.user is not set value!");
fs = FileSystem.get(configuration);
}
}
} else if (resUploadType == ResUploadType.S3) {
configuration.set(Constants.FS_DEFAULTFS, PropertyUtils.getString(Constants.FS_DEFAULTFS));
configuration.set(Constants.FS_S3A_ENDPOINT, PropertyUtils.getString(Constants.FS_S3A_ENDPOINT));
configuration.set(Constants.FS_S3A_ACCESS_KEY, PropertyUtils.getString(Constants.FS_S3A_ACCESS_KEY));
configuration.set(Constants.FS_S3A_SECRET_KEY, PropertyUtils.getString(Constants.FS_S3A_SECRET_KEY));
fs = FileSystem.get(configuration);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
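The S3 branch of `init()` above only sets the endpoint and credential properties. The 400 Bad Request described in issue #3181 is the classic symptom of SigV2 signing against an S3 region that accepts only Signature Version 4. A minimal sketch of the extra settings such a fix needs; the property names are the standard AWS SDK v1 / Hadoop s3a ones, but whether PR #3182 used exactly this mechanism is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class S3SigV4Sketch {
    // Collects the additional configuration a SigV4-only region requires.
    // "com.amazonaws.services.s3.enableV4" is the AWS SDK v1 system property
    // that forces Signature Version 4 signing; regions launched after 2014
    // reject SigV2 requests with HTTP 400 Bad Request.
    static Map<String, String> sigV4Settings(String region) {
        Map<String, String> conf = new HashMap<>();
        conf.put("com.amazonaws.services.s3.enableV4", "true");
        // Region-specific s3a endpoint, as set via Constants.FS_S3A_ENDPOINT above:
        conf.put("fs.s3a.endpoint", "s3." + region + ".amazonaws.com");
        return conf;
    }

    public static void main(String[] args) {
        sigV4Settings("eu-central-1").forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

In a real deployment the system property would be passed to the JVM (e.g. `-Dcom.amazonaws.services.s3.enableV4=true`) or applied before the `FileSystem.get(configuration)` call.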
/**
* @return Configuration
*/
public Configuration getConfiguration() {
return configuration;
}
/**
* get application url
*
* @param applicationId application id
* @return url of application
*/
public String getApplicationUrl(String applicationId) throws Exception {
/**
* if rmHaIds contains xx, it signs not use resourcemanager
* otherwise:
* if rmHaIds is empty, single resourcemanager enabled
* if rmHaIds not empty: resourcemanager HA enabled
*/
String appUrl = "";
if (StringUtils.isEmpty(rmHaIds)){
appUrl = appAddress;
yarnEnabled = true;
} else {
appUrl = getAppAddress(appAddress, rmHaIds);
yarnEnabled = true;
logger.info("application url : {}", appUrl);
}
if(StringUtils.isBlank(appUrl)){
throw new Exception("application url is blank");
}
return String.format(appUrl, applicationId);
}
public String getJobHistoryUrl(String applicationId) {
String jobId = applicationId.replace("application", "job");
return String.format(jobHistoryAddress, jobId);
}
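`getJobHistoryUrl` above derives a MapReduce job id from a YARN application id by a prefix swap, then substitutes it into the history-server address template. A self-contained sketch of that logic (the example address is hypothetical; the real one comes from `yarn.job.history.status.address`):

```java
public class YarnUrlSketch {
    // Mirrors getJobHistoryUrl: application and job ids share the same
    // numeric suffix, so replacing the "application" prefix with "job"
    // yields the id the history server expects.
    static String jobHistoryUrl(String jobHistoryAddress, String applicationId) {
        String jobId = applicationId.replace("application", "job");
        return String.format(jobHistoryAddress, jobId);
    }

    public static void main(String[] args) {
        // prints: http://jobhistory:19888/jobhistory/job/job_1594350000000_0001
        System.out.println(jobHistoryUrl("http://jobhistory:19888/jobhistory/job/%s",
                "application_1594350000000_0001"));
    }
}
```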
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @return byte[] byte array
* @throws IOException errors
*/
public byte[] catFile(String hdfsFilePath) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return new byte[0];
}
FSDataInputStream fsDataInputStream = fs.open(new Path(hdfsFilePath));
return IOUtils.toByteArray(fsDataInputStream);
}
/**
* cat file on hdfs
 *
 * @param hdfsFilePath hdfs file path
* @param skipLineNums skip line numbers
* @param limit read how many lines
* @return content of file
* @throws IOException errors
*/
public List<String> catFile(String hdfsFilePath, int skipLineNums, int limit) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return Collections.emptyList();
}
try (FSDataInputStream in = fs.open(new Path(hdfsFilePath))) {
BufferedReader br = new BufferedReader(new InputStreamReader(in));
Stream<String> stream = br.lines().skip(skipLineNums).limit(limit);
return stream.collect(Collectors.toList());
}
}
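The paginated `catFile` above reads a window of lines using `skip` and `limit` on the line stream. The same windowing can be exercised without an HDFS cluster by swapping the `FSDataInputStream` for an in-memory reader; a small sketch:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.List;
import java.util.stream.Collectors;

public class CatFileSketch {
    // Same skip/limit pagination as catFile(hdfsFilePath, skipLineNums, limit),
    // applied to an in-memory string instead of an HDFS file.
    static List<String> window(String content, int skipLineNums, int limit) {
        BufferedReader br = new BufferedReader(new StringReader(content));
        return br.lines().skip(skipLineNums).limit(limit).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // prints: [b, c]  (skip 1 line, then take 2)
        System.out.println(window("a\nb\nc\nd", 1, 2));
    }
}
```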
/**
* make the given file and all non-existent parents into
* directories. Has the semantics of Unix 'mkdir -p'.
* Existence of the directory hierarchy is not an error.
*
* @param hdfsPath path to create
* @return mkdir result
* @throws IOException errors
*/
public boolean mkdir(String hdfsPath) throws IOException {
return fs.mkdirs(new Path(hdfsPath));
}
/**
 * copy files between FileSystems
*
* @param srcPath source hdfs path
* @param dstPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copy(String srcPath, String dstPath, boolean deleteSource, boolean overwrite) throws IOException {
return FileUtil.copy(fs, new Path(srcPath), fs, new Path(dstPath), deleteSource, overwrite, fs.getConf());
}
/**
* the src file is on the local disk. Add it to FS at
* the given dst name.
*
* @param srcFile local file
* @param dstHdfsPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copyLocalToHdfs(String srcFile, String dstHdfsPath, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstHdfsPath);
fs.copyFromLocalFile(deleteSource, overwrite, srcPath, dstPath);
return true;
}
/** |
* copy hdfs file to local
*
* @param srcHdfsFilePath source hdfs file path
* @param dstFile destination file
* @param deleteSource delete source
* @param overwrite overwrite
* @return result of copy hdfs file to local
* @throws IOException errors
*/
public boolean copyHdfsToLocal(String srcHdfsFilePath, String dstFile, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcHdfsFilePath);
File dstPath = new File(dstFile);
if (dstPath.exists()) {
if (dstPath.isFile()) {
if (overwrite) {
Files.delete(dstPath.toPath());
}
} else {
logger.error("destination file must be a file");
}
}
if (!dstPath.getParentFile().exists()) {
dstPath.getParentFile().mkdirs();
}
return FileUtil.copy(fs, srcPath, dstPath, deleteSource, fs.getConf());
}
/**
* delete a file
*
* @param hdfsFilePath the path to delete. |
* @param recursive if path is a directory and set to
* true, the directory is deleted else throws an exception. In
* case of a file the recursive can be set to either true or false.
* @return true if delete is successful else false.
* @throws IOException errors
*/
public boolean delete(String hdfsFilePath, boolean recursive) throws IOException {
return fs.delete(new Path(hdfsFilePath), recursive);
}
/**
* check if exists
*
* @param hdfsFilePath source file path
* @return result of exists or not
* @throws IOException errors
*/
public boolean exists(String hdfsFilePath) throws IOException {
return fs.exists(new Path(hdfsFilePath));
}
/**
* Gets a list of files in the directory
*
* @param filePath file path
* @return {@link FileStatus} file status
* @throws Exception errors
*/
public FileStatus[] listFileStatus(String filePath) throws Exception {
try {
return fs.listStatus(new Path(filePath));
} catch (IOException e) { |
logger.error("Get file list exception", e);
throw new Exception("Get file list exception", e);
}
}
/**
* Renames Path src to Path dst. Can take place on local fs
* or remote DFS.
*
* @param src path to be renamed
* @param dst new path after rename
* @return true if rename is successful
* @throws IOException on failure
*/
public boolean rename(String src, String dst) throws IOException {
return fs.rename(new Path(src), new Path(dst));
}
/**
* hadoop resourcemanager enabled or not
*
* @return result
*/
public boolean isYarnEnabled() {
return yarnEnabled;
}
/**
* get the state of an application
*
* @param applicationId application id
* @return the return may be null or there may be other parse exceptions
*/ |
public ExecutionStatus getApplicationStatus(String applicationId) throws Exception{
if (StringUtils.isEmpty(applicationId)) {
return null;
}
String result = Constants.FAILED;
String applicationUrl = getApplicationUrl(applicationId);
logger.info("applicationUrl={}", applicationUrl);
String responseContent = HttpUtils.get(applicationUrl);
if (responseContent != null) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
result = jsonObject.path("app").path("finalStatus").asText();
} else {
String jobHistoryUrl = getJobHistoryUrl(applicationId);
logger.info("jobHistoryUrl={}", jobHistoryUrl);
responseContent = HttpUtils.get(jobHistoryUrl);
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
if (!jsonObject.has("job")){
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("job").path("state").asText();
}
switch (result) {
case Constants.ACCEPTED:
return ExecutionStatus.SUBMITTED_SUCCESS;
case Constants.SUCCEEDED:
return ExecutionStatus.SUCCESS;
case Constants.NEW:
case Constants.NEW_SAVING:
case Constants.SUBMITTED: |
case Constants.FAILED:
return ExecutionStatus.FAILURE;
case Constants.KILLED:
return ExecutionStatus.KILL;
case Constants.RUNNING:
default:
return ExecutionStatus.RUNNING_EXEUTION;
}
}
/**
* get data hdfs path
*
* @return data hdfs path
*/
public static String getHdfsDataBasePath() {
if ("/".equals(resourceUploadPath)) {
return "";
} else {
return resourceUploadPath;
}
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @param resourceType resource type
* @return hdfs resource dir
*/
public static String getHdfsDir(ResourceType resourceType, String tenantCode) { |
String hdfsDir = "";
if (resourceType.equals(ResourceType.FILE)) {
hdfsDir = getHdfsResDir(tenantCode);
} else if (resourceType.equals(ResourceType.UDF)) {
hdfsDir = getHdfsUdfDir(tenantCode);
}
return hdfsDir;
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @return hdfs resource dir
*/
public static String getHdfsResDir(String tenantCode) {
return String.format("%s/resources", getHdfsTenantDir(tenantCode));
}
/**
* hdfs user dir
*
* @param tenantCode tenant code
* @param userId user id
* @return hdfs resource dir
*/
public static String getHdfsUserDir(String tenantCode, int userId) {
return String.format("%s/home/%d", getHdfsTenantDir(tenantCode), userId);
}
/**
* hdfs udf dir
* |
* @param tenantCode tenant code
* @return get udf dir on hdfs
*/
public static String getHdfsUdfDir(String tenantCode) {
return String.format("%s/udfs", getHdfsTenantDir(tenantCode));
}
/**
* get hdfs file name
*
* @param resourceType resource type
* @param tenantCode tenant code
* @param fileName file name
* @return hdfs file name
*/
public static String getHdfsFileName(ResourceType resourceType, String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsDir(resourceType, tenantCode), fileName);
}
/**
* get absolute path and name for resource file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for file on hdfs
*/
public static String getHdfsResourceFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", ""); |
}
return String.format("%s/%s", getHdfsResDir(tenantCode), fileName);
}
/**
* get absolute path and name for udf file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for udf file on hdfs
*/
public static String getHdfsUdfFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsUdfDir(tenantCode), fileName);
}
/**
* @param tenantCode tenant code
* @return file directory of tenants on hdfs
*/
public static String getHdfsTenantDir(String tenantCode) {
return String.format("%s/%s", getHdfsDataBasePath(), tenantCode);
}
/**
* getAppAddress
*
* @param appAddress app address
* @param rmHa resource manager ha
* @return app address
*/ |
public static String getAppAddress(String appAddress, String rmHa) {
String activeRM = YarnHAAdminUtils.getAcitveRMName(rmHa);
String[] split1 = appAddress.split(Constants.DOUBLE_SLASH);
if (split1.length != 2) {
return null;
}
String start = split1[0] + Constants.DOUBLE_SLASH;
String[] split2 = split1[1].split(Constants.COLON);
if (split2.length != 2) {
return null;
}
String end = Constants.COLON + split2[1];
return start + activeRM + end;
}
@Override
public void close() throws IOException {
if (fs != null) {
try {
fs.close();
} catch (IOException e) {
logger.error("Close HadoopUtils instance failed", e);
throw new IOException("Close HadoopUtils instance failed", e);
}
}
}
/**
* yarn ha admin utils
*/
private static final class YarnHAAdminUtils extends RMAdminCLI { |
/**
* get active resourcemanager
*
* @param rmIds
* @return
*/
public static String getAcitveRMName(String rmIds) {
String[] rmIdArr = rmIds.split(Constants.COMMA);
int activeResourceManagerPort = PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088);
String yarnUrl = "http://%s:" + activeResourceManagerPort + "/ws/v1/cluster/info";
String state = null;
try {
/**
* send http get request to rm1
*/
state = getRMState(String.format(yarnUrl, rmIdArr[0]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[0];
} else if (Constants.HADOOP_RM_STATE_STANDBY.equals(state)) {
state = getRMState(String.format(yarnUrl, rmIdArr[1]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[1];
}
} else {
return null;
} |
} catch (Exception e) {
state = getRMState(String.format(yarnUrl, rmIdArr[1]));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmIdArr[0];
}
}
return null;
}
/**
* get ResourceManager state
*
* @param url
* @return
*/
public static String getRMState(String url) {
String retStr = HttpUtils.get(url);
if (StringUtils.isEmpty(retStr)) {
return null;
}
ObjectNode jsonObject = JSONUtils.parseObject(retStr);
if (!jsonObject.has("clusterInfo")){
return null;
}
return jsonObject.get("clusterInfo").path("haState").asText();
}
}
} |
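The host rewriting done by `getAppAddress` above is plain string splitting (the original uses `Constants.DOUBLE_SLASH` and `Constants.COLON` for the literals). A standalone, testable sketch of the same logic — the class name is hypothetical and not part of DolphinScheduler:

```java
// Hypothetical sketch of HadoopUtils.getAppAddress: swap the host of
// "http://host:port" for the name of the currently active ResourceManager.
public class AppAddressSketch {
    public static String rewrite(String appAddress, String activeRm) {
        String[] schemeSplit = appAddress.split("//");   // ["http:", "host:port"]
        if (schemeSplit.length != 2) {
            return null;                                  // malformed address
        }
        String start = schemeSplit[0] + "//";
        String[] hostPort = schemeSplit[1].split(":");    // ["host", "port"]
        if (hostPort.length != 2) {
            return null;
        }
        return start + activeRm + ":" + hostPort[1];
    }

    public static void main(String[] args) {
        System.out.println(rewrite("http://ds1:8088", "ds2")); // http://ds2:8088
    }
}
```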
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 2,923 | [BUG] Hive JDBC connection parameter ignored | **Describe the bug**
The JDBC connection parameter of a Hive datasource should be appended after the question mark when building the JDBC URL, like `jdbc:hive2://host:port/default?mapred.job.queue.name=root.users.a`. But actually it is appended after a semicolon, so the result is `jdbc:hive2://host:port/default;mapred.job.queue.name=root.users.a`, which causes the parameter to be ignored

For testing, I set the parameter in this way `{"?mapred.job.queue.name":"root.user.a"}`, and now it can be set correctly
**Which version of Dolphin Scheduler:**
- [1.2.1-release]
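In the Hive JDBC URL grammar (`jdbc:hive2://host:port/db;sess_var_list?hive_conf_list#hive_var_list`), configuration properties such as a queue name belong in the `?` section. A hedged sketch of a builder that places the separators as described above — illustrative only, not the project's actual `BaseDataSource` code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative builder: the first connection property is introduced by '?',
// subsequent ones by ';', so Hive treats them as hive_conf_list entries.
public class HiveUrlSketch {
    public static String build(String host, int port, String db, Map<String, String> props) {
        StringBuilder url = new StringBuilder("jdbc:hive2://")
                .append(host).append(':').append(port).append('/').append(db);
        String sep = "?";
        for (Map.Entry<String, String> e : props.entrySet()) {
            url.append(sep).append(e.getKey()).append('=').append(e.getValue());
            sep = ";";
        }
        return url.toString();
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        props.put("mapred.job.queue.name", "root.users.a");
        System.out.println(build("host", 10000, "default", props));
    }
}
```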
| https://github.com/apache/dolphinscheduler/issues/2923 | https://github.com/apache/dolphinscheduler/pull/3194 | 6d43c21d80a3b210a7e01df9fa73d4f076698a58 | 98fdba6740abcb8dd30bc08d6ff39793dd9dd598 | "2020-06-07T12:52:13Z" | java | "2020-07-13T05:58:01Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/datasource/HiveDataSource.java |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.dao.datasource;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.DbType;
/**
* data source of hive
*/
public class HiveDataSource extends BaseDataSource { |
/**
* gets the JDBC driver class for the hive data source connection
* @return driver class name
*/
@Override
public String driverClassSelector() {
return Constants.ORG_APACHE_HIVE_JDBC_HIVE_DRIVER;
}
/**
* @return db type
*/
@Override
public DbType dbTypeSelector() {
return DbType.HIVE;
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 3,187 | [BUG] Heartbeat thread pool does not shutdown when MasterRegistry unRegistry | *For better global communication, please give priority to using English description, thx! *
**Describe the bug**
look at the method of MasterRegistry
```java
public void unRegistry() {
String address = getLocalAddress();
String localNodePath = getMasterPath();
zookeeperRegistryCenter.getZookeeperCachedOperator().remove(localNodePath);
logger.info("master node : {} unRegistry to ZK.", address);
}
```
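A hedged sketch of adding the missing shutdown to an `unRegistry`-style method. The ZooKeeper node removal is stubbed as a `Runnable` so the ordering can be shown (and tested) without a real ZooKeeper; the field name follows the class below, and this is not necessarily the project's actual fix:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: an unRegistry that also stops the heartbeat pool.
public class UnRegistrySketch {
    private final ScheduledExecutorService heartBeatExecutor =
            Executors.newSingleThreadScheduledExecutor();

    public void unRegistry(Runnable removeZkNode) throws InterruptedException {
        removeZkNode.run();                      // remove the ephemeral master node first
        heartBeatExecutor.shutdownNow();         // then stop emitting heartbeats
        heartBeatExecutor.awaitTermination(5, TimeUnit.SECONDS);
    }

    public boolean isHeartbeatStopped() {
        return heartBeatExecutor.isShutdown();
    }
}
```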
The method, which is invoked when the MasterServer is closed, does not shut down the heartbeat thread pool. | https://github.com/apache/dolphinscheduler/issues/3187 | https://github.com/apache/dolphinscheduler/pull/3188 | 98fdba6740abcb8dd30bc08d6ff39793dd9dd598 | 88f9bed726d479d4be4a46fb599161d5e61f496f | "2020-07-10T17:28:52Z" | java | "2020-07-13T06:27:38Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistry.java |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software |
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.registry;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.framework.state.ConnectionStateListener;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.registry.HeartBeatTask;
import org.apache.dolphinscheduler.server.registry.ZookeeperRegistryCenter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import static org.apache.dolphinscheduler.remote.utils.Constants.COMMA;
/**
* master registry
*/
@Service |
public class MasterRegistry {
private final Logger logger = LoggerFactory.getLogger(MasterRegistry.class);
/**
* zookeeper registry center
*/
@Autowired
private ZookeeperRegistryCenter zookeeperRegistryCenter;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* heartbeat executor
*/
private ScheduledExecutorService heartBeatExecutor;
/**
* worker start time
*/
private String startTime;
@PostConstruct |
    public void init(){
this.startTime = DateUtils.dateToString(new Date());
this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor"));
}
/**
* registry
*/
public void registry() {
String address = NetUtils.getHost();
String localNodePath = getMasterPath();
zookeeperRegistryCenter.getZookeeperCachedOperator().persistEphemeral(localNodePath, "");
zookeeperRegistryCenter.getZookeeperCachedOperator().getZkClient().getConnectionStateListenable().addListener(new ConnectionStateListener() {
@Override
public void stateChanged(CuratorFramework client, ConnectionState newState) {
if(newState == ConnectionState.LOST){
logger.error("master : {} connection lost from zookeeper", address);
} else if(newState == ConnectionState.RECONNECTED){
logger.info("master : {} reconnected to zookeeper", address);
zookeeperRegistryCenter.getZookeeperCachedOperator().persistEphemeral(localNodePath, "");
} else if(newState == ConnectionState.SUSPENDED){
logger.warn("master : {} connection SUSPENDED ", address);
}
}
});
int masterHeartbeatInterval = masterConfig.getMasterHeartbeatInterval();
HeartBeatTask heartBeatTask = new HeartBeatTask(startTime,
masterConfig.getMasterReservedMemory(),
masterConfig.getMasterMaxCpuloadAvg(),
getMasterPath(),
zookeeperRegistryCenter); |
        this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, masterHeartbeatInterval, masterHeartbeatInterval, TimeUnit.SECONDS);
logger.info("master node : {} registry to ZK successfully with heartBeatInterval : {}s", address, masterHeartbeatInterval);
}
/**
* remove registry info
*/
public void unRegistry() {
String address = getLocalAddress();
String localNodePath = getMasterPath();
zookeeperRegistryCenter.getZookeeperCachedOperator().remove(localNodePath);
logger.info("master node : {} unRegistry to ZK.", address);
}
/**
* get master path
* @return
*/
private String getMasterPath() {
String address = getLocalAddress();
String localNodePath = this.zookeeperRegistryCenter.getMasterPath() + "/" + address;
return localNodePath;
}
/**
* get local address
* @return
*/
private String getLocalAddress(){
return NetUtils.getHost() + ":" + masterConfig.getListenPort();
}
} |
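Following the MasterRegistry source above, the fix issue 3187 asks for amounts to stopping the heartbeat pool inside unRegistry(). A minimal, self-contained sketch of that pattern — the class name is hypothetical, and the ZooKeeper/Spring wiring of the real class is omitted in favor of a plain ScheduledExecutorService:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatShutdownSketch {

    private final ScheduledExecutorService heartBeatExecutor =
            Executors.newSingleThreadScheduledExecutor();

    /** stand-in for MasterRegistry.registry(): schedule a periodic heartbeat task */
    public void registry() {
        heartBeatExecutor.scheduleAtFixedRate(
                () -> System.out.println("heartbeat"), 1, 1, TimeUnit.SECONDS);
    }

    /** the step issue 3187 reports as missing: stop the heartbeat pool on unRegistry */
    public void unRegistry() throws InterruptedException {
        heartBeatExecutor.shutdownNow();
        heartBeatExecutor.awaitTermination(3, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        HeartbeatShutdownSketch registry = new HeartbeatShutdownSketch();
        registry.registry();
        registry.unRegistry();
        System.out.println("executor shutdown: " + registry.heartBeatExecutor.isShutdown());
    }
}
```

Without the shutdownNow() call, the heartbeat thread keeps running (and keeps sending heartbeats to ZooKeeper) after the master node has been deregistered, which is the leak the issue describes.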
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 2,574 | [Feature]Configurable registration ipaddress | **Is your feature request related to a problem? Please describe.**
On macOS, OSUtils.getHost() can return different IPs at runtime. The method also does not
perform well in container environments such as Docker, where errors may happen.
**Describe the solution you'd like**
provide a param like register-ip to specify the IP that will be registered on ZooKeeper
**Describe alternatives you've considered**
provide a param like networkinterface to specify the network interface whose address will be registered on ZooKeeper
**Additional context**
<img width="1467" alt="8C1E5A5B-CC64-40FE-9049-5FC6154E72FE" src="https://user-images.githubusercontent.com/42403258/80572245-80ab6180-8a30-11ea-9525-73b33f1bf49e.png">
<img width="1481" alt="25DECE0A-2A92-479F-A76F-7DAEB8F691FF" src="https://user-images.githubusercontent.com/42403258/80572256-8608ac00-8a30-11ea-8784-eeb88c0fb7ea.png">
My English is not good — simply put, allow specifying the IP registered on ZooKeeper, or an optional network-interface configuration. | https://github.com/apache/dolphinscheduler/issues/2574 | https://github.com/apache/dolphinscheduler/pull/3186 | 6c9ac84f73a20717ab1014fe8c09e98668262baa | 6f9970b189d71d043e79200a70a95f2f33ad10f4 | "2020-04-29T07:48:39Z" | java | "2020-07-13T10:51:38Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import java.util.regex.Pattern;
/**
* Constants
*/
public final class Constants { |
    private Constants() {
throw new IllegalStateException("Constants class");
}
/**
* quartz config
*/
public static final String ORG_QUARTZ_JOBSTORE_DRIVERDELEGATECLASS = "org.quartz.jobStore.driverDelegateClass";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCENAME = "org.quartz.scheduler.instanceName";
public static final String ORG_QUARTZ_SCHEDULER_INSTANCEID = "org.quartz.scheduler.instanceId";
public static final String ORG_QUARTZ_SCHEDULER_MAKESCHEDULERTHREADDAEMON = "org.quartz.scheduler.makeSchedulerThreadDaemon";
public static final String ORG_QUARTZ_JOBSTORE_USEPROPERTIES = "org.quartz.jobStore.useProperties";
public static final String ORG_QUARTZ_THREADPOOL_CLASS = "org.quartz.threadPool.class";
public static final String ORG_QUARTZ_THREADPOOL_THREADCOUNT = "org.quartz.threadPool.threadCount";
public static final String ORG_QUARTZ_THREADPOOL_MAKETHREADSDAEMONS = "org.quartz.threadPool.makeThreadsDaemons";
public static final String ORG_QUARTZ_THREADPOOL_THREADPRIORITY = "org.quartz.threadPool.threadPriority";
public static final String ORG_QUARTZ_JOBSTORE_CLASS = "org.quartz.jobStore.class";
public static final String ORG_QUARTZ_JOBSTORE_TABLEPREFIX = "org.quartz.jobStore.tablePrefix";
public static final String ORG_QUARTZ_JOBSTORE_ISCLUSTERED = "org.quartz.jobStore.isClustered";
public static final String ORG_QUARTZ_JOBSTORE_MISFIRETHRESHOLD = "org.quartz.jobStore.misfireThreshold";
public static final String ORG_QUARTZ_JOBSTORE_CLUSTERCHECKININTERVAL = "org.quartz.jobStore.clusterCheckinInterval";
public static final String ORG_QUARTZ_JOBSTORE_ACQUIRETRIGGERSWITHINLOCK = "org.quartz.jobStore.acquireTriggersWithinLock";
public static final String ORG_QUARTZ_JOBSTORE_DATASOURCE = "org.quartz.jobStore.dataSource";
public static final String ORG_QUARTZ_DATASOURCE_MYDS_CONNECTIONPROVIDER_CLASS = "org.quartz.dataSource.myDs.connectionProvider.class";
/** |
     * quartz config default value
*/
public static final String QUARTZ_TABLE_PREFIX = "QRTZ_";
public static final String QUARTZ_MISFIRETHRESHOLD = "60000";
public static final String QUARTZ_CLUSTERCHECKININTERVAL = "5000";
public static final String QUARTZ_DATASOURCE = "myDs";
public static final String QUARTZ_THREADCOUNT = "25";
public static final String QUARTZ_THREADPRIORITY = "5";
public static final String QUARTZ_INSTANCENAME = "DolphinScheduler";
public static final String QUARTZ_INSTANCEID = "AUTO";
public static final String QUARTZ_ACQUIRETRIGGERSWITHINLOCK = "true";
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* fs.defaultFS
*/
public static final String FS_DEFAULTFS = "fs.defaultFS";
/**
* fs s3a endpoint
*/
public static final String FS_S3A_ENDPOINT = "fs.s3a.endpoint";
/**
* fs s3a access key
*/
public static final String FS_S3A_ACCESS_KEY = "fs.s3a.access.key";
/**
* fs s3a secret key
*/ |
    public static final String FS_S3A_SECRET_KEY = "fs.s3a.secret.key";
/**
* yarn.resourcemanager.ha.rm.ids
*/
public static final String YARN_RESOURCEMANAGER_HA_RM_IDS = "yarn.resourcemanager.ha.rm.ids";
public static final String YARN_RESOURCEMANAGER_HA_XX = "xx";
/**
* yarn.application.status.address
*/
public static final String YARN_APPLICATION_STATUS_ADDRESS = "yarn.application.status.address";
/**
* yarn.job.history.status.address
*/
public static final String YARN_JOB_HISTORY_STATUS_ADDRESS = "yarn.job.history.status.address";
/**
* hdfs configuration
* hdfs.root.user
*/
public static final String HDFS_ROOT_USER = "hdfs.root.user";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
/**
* data basedir path
*/
public static final String DATA_BASEDIR_PATH = "data.basedir.path";
/**
* dolphinscheduler.env.path |
     */
public static final String DOLPHINSCHEDULER_ENV_PATH = "dolphinscheduler.env.path";
/**
* environment properties default path
*/
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* python home
*/
public static final String PYTHON_HOME="PYTHON_HOME";
/**
* resource.view.suffixs
*/
public static final String RESOURCE_VIEW_SUFFIXS = "resource.view.suffixs";
public static final String RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE = "txt,log,sh,conf,cfg,py,java,sql,hql,xml,properties";
/**
* development.state
*/
public static final String DEVELOPMENT_STATE = "development.state";
public static final String DEVELOPMENT_STATE_DEFAULT_VALUE = "true";
/**
* string true
*/
public static final String STRING_TRUE = "true";
/**
* string false
*/
public static final String STRING_FALSE = "false";
/**
* resource storage type |
     */
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* MasterServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_MASTERS = "/nodes/master";
/**
* WorkerServer directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_WORKERS = "/nodes/worker";
/**
* all servers directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_DEAD_SERVERS = "/dead-servers";
/**
* MasterServer lock directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_MASTERS = "/lock/masters";
/**
* MasterServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS = "/lock/failover/masters";
/**
* WorkerServer failover directory registered in zookeeper
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS = "/lock/failover/workers";
/**
     * MasterServer startup failover running and fault tolerance process
*/
public static final String ZOOKEEPER_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS = "/lock/failover/startup-masters"; |
    /**
* comma ,
*/
public static final String COMMA = ",";
/**
* slash /
*/
public static final String SLASH = "/";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* SEMICOLON ;
*/
public static final String SEMICOLON = ";";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "=";
/**
* AT SIGN |
     */
public static final String AT_SIGN = "@";
public static final String WORKER_MAX_CPULOAD_AVG = "worker.max.cpuload.avg";
public static final String WORKER_RESERVED_MEMORY = "worker.reserved.memory";
public static final String MASTER_MAX_CPULOAD_AVG = "master.max.cpuload.avg";
public static final String MASTER_RESERVED_MEMORY = "master.reserved.memory";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* http connect time out
*/
public static final int HTTP_CONNECT_TIMEOUT = 60 * 1000;
/**
* http connect request time out
*/
public static final int HTTP_CONNECTION_REQUEST_TIMEOUT = 60 * 1000;
/**
* httpclient soceket time out
*/
public static final int SOCKET_TIMEOUT = 60 * 1000;
/**
* http header
*/
public static final String HTTP_HEADER_UNKNOWN = "unKnown"; |
    /**
* http X-Forwarded-For
*/
public static final String HTTP_X_FORWARDED_FOR = "X-Forwarded-For";
/**
* http X-Real-IP
*/
public static final String HTTP_X_REAL_IP = "X-Real-IP";
/**
* UTF-8
*/
public static final String UTF_8 = "UTF-8";
/**
* user name regex
*/
public static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
/**
* read permission
*/
public static final int READ_PERMISSION = 2 * 1;
/**
* write permission
*/
public static final int WRITE_PERMISSION = 2 * 2;
/**
* execute permission |
     */
public static final int EXECUTE_PERMISSION = 1;
/**
* default admin permission
*/
public static final int DEFAULT_ADMIN_PERMISSION = 7;
/**
* all permissions
*/
public static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;
/**
* max task timeout
*/
public static final int MAX_TASK_TIMEOUT = 24 * 3600;
/**
* master cpu load
*/
public static final int DEFAULT_MASTER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
/**
* master reserved memory
*/
public static final double DEFAULT_MASTER_RESERVED_MEMORY = OSUtils.totalMemorySize() / 10;
/**
* worker cpu load
*/
public static final int DEFAULT_WORKER_CPU_LOAD = Runtime.getRuntime().availableProcessors() * 2;
/**
* worker reserved memory
*/
public static final double DEFAULT_WORKER_RESERVED_MEMORY = OSUtils.totalMemorySize() / 10; |
    /**
* default log cache rows num,output when reach the number
*/
public static final int DEFAULT_LOG_ROWS_NUM = 4 * 16;
/**
     * log flush interval, output when the interval is reached
*/
public static final int DEFAULT_LOG_FLUSH_INTERVAL = 1000;
/**
     * time unit seconds to minutes
*/
public static final int SEC_2_MINUTES_TIME_UNIT = 60;
/***
*
* rpc port
*/
public static final int RPC_PORT = 50051;
/**
* forbid running task
*/
public static final String FLOWNODE_RUN_FLAG_FORBIDDEN = "FORBIDDEN";
/**
* datasource configuration path
*/
public static final String DATASOURCE_PROPERTIES = "/datasource.properties";
public static final String TASK_RECORD_URL = "task.record.datasource.url";
public static final String TASK_RECORD_FLAG = "task.record.flag";
public static final String TASK_RECORD_USER = "task.record.datasource.username";
public static final String TASK_RECORD_PWD = "task.record.datasource.password";
public static final String DEFAULT = "Default"; |
    public static final String USER = "user";
public static final String PASSWORD = "password";
public static final String XXXXXX = "******";
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
public static final String TASK_RECORD_TABLE_HIVE_LOG = "eamp_hive_log_hd";
public static final String TASK_RECORD_TABLE_HISTORY_HIVE_LOG = "eamp_hive_hist_log_hd";
/**
* command parameter keys
*/
public static final String CMDPARAM_RECOVER_PROCESS_ID_STRING = "ProcessInstanceId";
public static final String CMDPARAM_RECOVERY_START_NODE_STRING = "StartNodeIdList";
public static final String CMDPARAM_RECOVERY_WAITTING_THREAD = "WaittingThreadInstanceId";
public static final String CMDPARAM_SUB_PROCESS = "processInstanceId";
public static final String CMDPARAM_EMPTY_SUB_PROCESS = "0";
public static final String CMDPARAM_SUB_PROCESS_PARENT_INSTANCE_ID = "parentProcessInstanceId";
public static final String CMDPARAM_SUB_PROCESS_DEFINE_ID = "processDefinitionId";
public static final String CMDPARAM_START_NODE_NAMES = "StartNodeNameList";
/**
* complement data start date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_START_DATE = "complementStartDate";
/**
* complement data end date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_END_DATE = "complementEndDate";
/**
* hadoop configuration
*/
public static final String HADOOP_RM_STATE_ACTIVE = "ACTIVE";
public static final String HADOOP_RM_STATE_STANDBY = "STANDBY";
public static final String HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT = "resource.manager.httpaddress.port";
/**
* data source config
*/
public static final String SPRING_DATASOURCE_DRIVER_CLASS_NAME = "spring.datasource.driver-class-name";
public static final String SPRING_DATASOURCE_URL = "spring.datasource.url";
public static final String SPRING_DATASOURCE_USERNAME = "spring.datasource.username";
public static final String SPRING_DATASOURCE_PASSWORD = "spring.datasource.password";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY_TIMEOUT = "spring.datasource.validationQueryTimeout";
public static final String SPRING_DATASOURCE_INITIAL_SIZE = "spring.datasource.initialSize";
public static final String SPRING_DATASOURCE_MIN_IDLE = "spring.datasource.minIdle";
public static final String SPRING_DATASOURCE_MAX_ACTIVE = "spring.datasource.maxActive";
public static final String SPRING_DATASOURCE_MAX_WAIT = "spring.datasource.maxWait";
public static final String SPRING_DATASOURCE_TIME_BETWEEN_EVICTION_RUNS_MILLIS = "spring.datasource.timeBetweenEvictionRunsMillis";
public static final String SPRING_DATASOURCE_TIME_BETWEEN_CONNECT_ERROR_MILLIS = "spring.datasource.timeBetweenConnectErrorMillis";
public static final String SPRING_DATASOURCE_MIN_EVICTABLE_IDLE_TIME_MILLIS = "spring.datasource.minEvictableIdleTimeMillis";
public static final String SPRING_DATASOURCE_VALIDATION_QUERY = "spring.datasource.validationQuery";
public static final String SPRING_DATASOURCE_TEST_WHILE_IDLE = "spring.datasource.testWhileIdle";
public static final String SPRING_DATASOURCE_TEST_ON_BORROW = "spring.datasource.testOnBorrow";
public static final String SPRING_DATASOURCE_TEST_ON_RETURN = "spring.datasource.testOnReturn";
public static final String SPRING_DATASOURCE_POOL_PREPARED_STATEMENTS = "spring.datasource.poolPreparedStatements";
public static final String SPRING_DATASOURCE_DEFAULT_AUTO_COMMIT = "spring.datasource.defaultAutoCommit";
public static final String SPRING_DATASOURCE_KEEP_ALIVE = "spring.datasource.keepAlive";
public static final String SPRING_DATASOURCE_MAX_POOL_PREPARED_STATEMENT_PER_CONNECTION_SIZE = "spring.datasource.maxPoolPreparedStatementPerConnectionSize";
public static final String DEVELOPMENT = "development";
public static final String QUARTZ_PROPERTIES_PATH = "quartz.properties";
/**
* sleep time
*/
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* heartbeat for zk info length
*/
public static final int HEARTBEAT_FOR_ZOOKEEPER_INFO_LENGTH = 10;
/**
* hadoop params constant
*/
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D parameter
*/
public static final String D = "-D";
/**
* -D mapreduce.job.queuename=ququename
*/
public static final String MR_QUEUE = "mapreduce.job.queuename";
/**
* spark params constant
*/
public static final String MASTER = "--master";
public static final String DEPLOY_MODE = "--deploy-mode";
/**
* --class CLASS_NAME
*/
public static final String MAIN_CLASS = "--class";
/**
* --driver-cores NUM
*/
public static final String DRIVER_CORES = "--driver-cores";
/**
* --driver-memory MEM
*/
public static final String DRIVER_MEMORY = "--driver-memory";
/**
* --num-executors NUM
*/
public static final String NUM_EXECUTORS = "--num-executors";
/**
* --executor-cores NUM
*/
public static final String EXECUTOR_CORES = "--executor-cores";
/**
* --executor-memory MEM
*/
public static final String EXECUTOR_MEMORY = "--executor-memory";
/**
* --queue QUEUE
*/
public static final String SPARK_QUEUE = "--queue";
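The flag constants above (`--master`, `--deploy-mode`, `--class`, `--queue`, …) are assembled into a spark-submit style command line when a Spark task is launched. A rough sketch of that assembly; the values and method shape here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SparkArgsDemo {

    /** Join spark-submit style flags and (invented) values into one command-line string. */
    static String buildCommand(String master, String deployMode, String mainClass, String queue) {
        List<String> cmd = new ArrayList<>(Arrays.asList(
                "--master", master,
                "--deploy-mode", deployMode,
                "--class", mainClass,
                "--queue", queue));
        return String.join(" ", cmd);
    }
}
```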
/**
* --queue --qu
*/
public static final String FLINK_QUEUE = "--qu";
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
/**
* exit code kill
*/
public static final int EXIT_CODE_KILL = 137;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
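The two pattern strings above are standard `SimpleDateFormat` patterns used when rendering built-in date parameters. A small demonstration of what they produce (the fixed calendar date is an arbitrary example):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class ParameterFormatDemo {

    /** Format a calendar instant with the given pattern, in the calendar's own time zone. */
    static String render(String pattern, Calendar cal) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(cal.getTimeZone());
        return fmt.format(cal.getTime());
    }

    public static void main(String[] args) {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.set(2020, Calendar.JULY, 13, 10, 51, 38);
        // Same pattern strings as PARAMETER_FORMAT_DATE / PARAMETER_FORMAT_TIME above.
        System.out.println(render("yyyyMMdd", cal));       // 20200713
        System.out.println(render("yyyyMMddHHmmss", cal)); // 20200713105138
    }
}
```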
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/
public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prefix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prefix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* processId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
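`APPLICATION_REGEX` matches YARN application ids of the form `application_<clusterTimestamp>_<sequence>`. A typical use is scanning task log output for ids; a sketch using the same pattern (the log text is made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ApplicationIdExtractor {

    // Same pattern string as Constants.APPLICATION_REGEX above.
    private static final Pattern APP_ID = Pattern.compile("application_\\d+_\\d+");

    /** Collect every YARN application id found in a block of log text. */
    public static List<String> extract(String log) {
        List<String> ids = new ArrayList<>();
        Matcher m = APP_ID.matcher(log);
        while (m.find()) {
            ids.add(m.group());
        }
        return ids;
    }
}
```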
public static final String PID = OSUtils.isWindows() ? "handle" : "pid";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String TASK_LIST = "taskList";
public static final String RWXR_XR_X = "rwxr-xr-x";
/**
* master/worker server use for zk
*/
public static final String MASTER_PREFIX = "master";
public static final String WORKER_PREFIX = "worker";
public static final String DELETE_ZK_OP = "delete";
public static final String ADD_ZK_OP = "add";
public static final String ALIAS = "alias";
public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final String DEPENDENT_ALL = "ALL";
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* com.amazonaws.services.s3.enableV4
*/
public static final String AWS_S3_V4 = "com.amazonaws.services.s3.enableV4";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* default worker group id
*/
public static final int DEFAULT_WORKER_ID = -1;
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* hive conf
*/
public static final String HIVE_CONF = "hiveconf:";
public static final String FLINK_YARN_CLUSTER = "yarn-cluster";
public static final String FLINK_RUN_MODE = "-m";
public static final String FLINK_YARN_SLOT = "-ys";
public static final String FLINK_APP_NAME = "-ynm";
public static final String FLINK_TASK_MANAGE = "-yn";
public static final String FLINK_JOB_MANAGE_MEM = "-yjm";
public static final String FLINK_TASK_MANAGE_MEM = "-ytm";
public static final String FLINK_DETACH = "-d";
public static final String FLINK_MAIN_CLASS = "-c";
public static final int[] NOT_TERMINATED_STATES = new int[]{
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXEUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITTING_THREAD.ordinal(),
ExecutionStatus.WAITTING_DEPEND.ordinal()
};
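`NOT_TERMINATED_STATES` is the set of ordinals for states in which a process instance is still live. A membership check might look like the sketch below; note the enum here is a trimmed stand-in for DolphinScheduler's `ExecutionStatus` (its real constants and ordinal order may differ), so treat it as illustrative only:

```java
import java.util.Arrays;

public class StateCheckDemo {

    // Trimmed stand-in for ExecutionStatus; constants and ordinals are illustrative.
    enum ExecutionStatus {
        SUBMITTED_SUCCESS, RUNNING_EXEUTION, READY_PAUSE, READY_STOP,
        NEED_FAULT_TOLERANCE, WAITTING_THREAD, WAITTING_DEPEND, SUCCESS, FAILURE
    }

    // Mirrors the NOT_TERMINATED_STATES array above, built from the stand-in enum.
    static final int[] NOT_TERMINATED_STATES = {
        ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
        ExecutionStatus.RUNNING_EXEUTION.ordinal(),
        ExecutionStatus.READY_PAUSE.ordinal(),
        ExecutionStatus.READY_STOP.ordinal(),
        ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
        ExecutionStatus.WAITTING_THREAD.ordinal(),
        ExecutionStatus.WAITTING_DEPEND.ordinal()
    };

    /** True if the given state is still live, i.e. listed in NOT_TERMINATED_STATES. */
    static boolean notTerminated(ExecutionStatus state) {
        return Arrays.stream(NOT_TERMINATED_STATES).anyMatch(s -> s == state.ordinal());
    }
}
```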
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/
public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/** |