status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,882 | [Bug] [Master] process cannot finish and its status is always running. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
process cannot finish and its status is always running.

### What you expected to happen
process end normally.
### How to reproduce
1. run a process with a sub process (only a shell task in it)
2. sometimes (about 1 in 20 runs) the process status stays running forever.
### Anything else
_No response_
### Version
2.0.0-alpha
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6882 | https://github.com/apache/dolphinscheduler/pull/6886 | f564687a8988c8ccdb2f138b1b994cce9eb914e9 | 653eae24195957b01d1a911aada020372d1742e6 | "2021-11-17T07:22:20Z" | java | "2021-11-17T09:39:22Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/EventExecuteService.java | @Override
public void run() {
logger.info("Event service started");
while (Stopper.isRunning()) {
try {
eventHandler();
TimeUnit.MILLISECONDS.sleep(Constants.SLEEP_TIME_MILLIS);
} catch (Exception e) {
logger.error("Event service thread error", e);
}
}
}
private void eventHandler() {
for (WorkflowExecuteThread workflowExecuteThread : this.processInstanceExecCacheManager.getAll()) {
if (workflowExecuteThread.eventSize() == 0
|| StringUtils.isEmpty(workflowExecuteThread.getKey())
|| eventHandlerMap.containsKey(workflowExecuteThread.getKey())) {
continue;
}
int processInstanceId = workflowExecuteThread.getProcessInstance().getId();
logger.info("handle process instance : {} , events count:{}",
processInstanceId,
workflowExecuteThread.eventSize());
logger.info("already exists handler process size:{}", this.eventHandlerMap.size());
eventHandlerMap.put(workflowExecuteThread.getKey(), workflowExecuteThread);
ListenableFuture future = this.listeningExecutorService.submit(workflowExecuteThread);
FutureCallback futureCallback = new FutureCallback() {
@Override
public void onSuccess(Object o) {
if (workflowExecuteThread.workFlowFinish()) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,882 | [Bug] [Master] process cannot finish and its status is always running. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
process cannot finish and its status is always running.

### What you expected to happen
process end normally.
### How to reproduce
1. run a process with a sub process (only a shell task in it)
2. sometimes (about 1 in 20 runs) the process status stays running forever.
### Anything else
_No response_
### Version
2.0.0-alpha
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6882 | https://github.com/apache/dolphinscheduler/pull/6886 | f564687a8988c8ccdb2f138b1b994cce9eb914e9 | 653eae24195957b01d1a911aada020372d1742e6 | "2021-11-17T07:22:20Z" | java | "2021-11-17T09:39:22Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/EventExecuteService.java | processInstanceExecCacheManager.removeByProcessInstanceId(processInstanceId);
notifyProcessChanged();
logger.info("process instance {} finished.", processInstanceId);
}
if (workflowExecuteThread.getProcessInstance().getId() != processInstanceId) {
processInstanceExecCacheManager.removeByProcessInstanceId(processInstanceId);
processInstanceExecCacheManager.cache(workflowExecuteThread.getProcessInstance().getId(), workflowExecuteThread);
}
eventHandlerMap.remove(workflowExecuteThread.getKey());
}
private void notifyProcessChanged() {
Map<ProcessInstance, TaskInstance> fatherMaps
= processService.notifyProcessList(processInstanceId, 0);
for (ProcessInstance processInstance : fatherMaps.keySet()) {
String address = NetUtils.getAddr(masterConfig.getListenPort());
if (processInstance.getHost().equalsIgnoreCase(address)) {
notifyMyself(processInstance, fatherMaps.get(processInstance));
} else {
notifyProcess(processInstance, fatherMaps.get(processInstance));
}
}
}
private void notifyMyself(ProcessInstance processInstance, TaskInstance taskInstance) {
logger.info("notify process {} task {} state change", processInstance.getId(), taskInstance.getId());
if (!processInstanceExecCacheManager.contains(processInstance.getId())) {
return;
}
WorkflowExecuteThread workflowExecuteThreadNotify = processInstanceExecCacheManager.getByProcessInstanceId(processInstance.getId());
StateEvent stateEvent = new StateEvent();
stateEvent.setTaskInstanceId(taskInstance.getId()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,882 | [Bug] [Master] process cannot finish and its status is always running. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
process cannot finish and its status is always running.

### What you expected to happen
process end normally.
### How to reproduce
1. run a process with a sub process (only a shell task in it)
2. sometimes (about 1 in 20 runs) the process status stays running forever.
### Anything else
_No response_
### Version
2.0.0-alpha
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6882 | https://github.com/apache/dolphinscheduler/pull/6886 | f564687a8988c8ccdb2f138b1b994cce9eb914e9 | 653eae24195957b01d1a911aada020372d1742e6 | "2021-11-17T07:22:20Z" | java | "2021-11-17T09:39:22Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/EventExecuteService.java | stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(processInstance.getId());
stateEvent.setExecutionStatus(ExecutionStatus.RUNNING_EXECUTION);
workflowExecuteThreadNotify.addStateEvent(stateEvent);
}
private void notifyProcess(ProcessInstance processInstance, TaskInstance taskInstance) {
String host = processInstance.getHost();
if (StringUtils.isEmpty(host)) {
logger.info("process {} host is empty, cannot notify task {} now.",
processInstance.getId(), taskInstance.getId());
return;
}
String address = host.split(":")[0];
int port = Integer.parseInt(host.split(":")[1]);
logger.info("notify process {} task {} state change, host:{}",
processInstance.getId(), taskInstance.getId(), host);
StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
processInstanceId, 0, workflowExecuteThread.getProcessInstance().getState(), processInstance.getId(), taskInstance.getId()
);
stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
}
@Override
public void onFailure(Throwable throwable) {
}
};
Futures.addCallback(future, futureCallback, this.listeningExecutorService);
}
}
} |
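The `EventExecuteService` chunks above all follow one pattern: every cached `WorkflowExecuteThread` with pending events is submitted to a Guava listening executor, and a `FutureCallback` clears it from the in-flight handler map (and, once the workflow has finished, from the instance cache). Below is a minimal, self-contained sketch of that submit-and-callback shape; `Workflow`, `EventLoop`, and `inFlight` are placeholder names for this illustration, not DolphinScheduler classes, and this is not the fix that shipped in PR #6886.

```java
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;

/** Placeholder for a workflow whose queued state events are drained on a worker thread. */
class Workflow implements Runnable {
    final String key;

    Workflow(String key) {
        this.key = key;
    }

    @Override
    public void run() {
        // drain and handle the queued state events for this workflow
    }
}

public class EventLoop {
    private final ListeningExecutorService pool =
            MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(4));

    // Workflows currently being handled; guards against submitting the same workflow twice.
    private final Map<String, Workflow> inFlight = new ConcurrentHashMap<>();

    public void handle(Workflow workflow) {
        if (inFlight.containsKey(workflow.key)) {
            return; // an earlier submission for this workflow is still running
        }
        inFlight.put(workflow.key, workflow);
        ListenableFuture<Object> future = pool.submit(workflow, new Object());
        Futures.addCallback(future, new FutureCallback<Object>() {
            @Override
            public void onSuccess(Object result) {
                // Clear the in-flight marker so later events for this workflow get picked up;
                // the real service additionally evicts finished workflows from its cache here.
                inFlight.remove(workflow.key);
            }

            @Override
            public void onFailure(Throwable t) {
                inFlight.remove(workflow.key);
            }
        }, pool);
    }
}
```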
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceUserMapper;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.hive.HiveDataSourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql.MysqlDatasourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.oracle.OracleDatasourceParamDTO; |
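Issue #6505 above asks for the Connector/J 8.x driver class and, possibly, adjusted connection parameters such as `useSSL=false`. For reference, a standalone JDBC connection using that driver class might look like the sketch below; the host, database, credentials, and the extra `serverTimezone` parameter are placeholder assumptions, not values from the DolphinScheduler codebase.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class MysqlCjDriverExample {
    public static void main(String[] args) throws Exception {
        // Connector/J 8.x driver class; com.mysql.jdbc.Driver is the deprecated alias.
        Class.forName("com.mysql.cj.jdbc.Driver");

        // useSSL=false is the parameter mentioned in the issue; serverTimezone is a
        // commonly required Connector/J 8.x setting and is only an assumption here.
        String url = "jdbc:mysql://localhost:3306/dolphinscheduler"
                + "?useSSL=false&serverTimezone=UTC";

        try (Connection connection = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("connected: " + !connection.isClosed());
        }
    }
}
```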
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | import org.apache.dolphinscheduler.plugin.datasource.api.datasource.postgresql.PostgreSqlDatasourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.CommonUtils;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.DatasourceUtil;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.datasource.ConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbConnectType;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.PropertyUtils;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
/**
* data source service test
*/
@RunWith(PowerMockRunner.class)
@PowerMockIgnore({"sun.security.*", "javax.net.*"})
@PrepareForTest({DatasourceUtil.class, CommonUtils.class, DataSourceClientProvider.class, PasswordUtils.class}) |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | public class DataSourceServiceTest {
@InjectMocks
private DataSourceServiceImpl dataSourceService;
@Mock
private DataSourceMapper dataSourceMapper;
@Mock
private DataSourceUserMapper datasourceUserMapper;
public void createDataSourceTest() {
User loginUser = getAdminUser();
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres");
postgreSqlDatasourceParam.setPassword("");
List<DataSource> dataSourceList = new ArrayList<>();
DataSource dataSource = new DataSource();
dataSource.setName(dataSourceName);
dataSourceList.add(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(dataSourceList);
Result dataSourceExitsResult = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_EXIST.getCode(), dataSourceExitsResult.getCode().intValue());
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
DbType dataSourceType = postgreSqlDatasourceParam.getType(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
Result connectionResult = new Result(Status.DATASOURCE_CONNECT_FAILED.getCode(), Status.DATASOURCE_CONNECT_FAILED.getMsg());
PowerMockito.doReturn(connectionResult).when(dataSourceService).checkConnection(dataSourceType, connectionParam);
Result connectFailedResult = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_CONNECT_FAILED.getCode(), connectFailedResult.getCode().intValue());
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
connectionResult = new Result(Status.SUCCESS.getCode(), Status.SUCCESS.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result notValidError = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.REQUEST_PARAMS_NOT_VALID_ERROR.getCode(), notValidError.getCode().intValue());
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result success = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.SUCCESS.getCode(), success.getCode().intValue());
}
public void updateDataSourceTest() {
User loginUser = getAdminUser();
int dataSourceId = 12;
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres"); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | postgreSqlDatasourceParam.setPassword("");
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null);
Result resourceNotExits = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.RESOURCE_NOT_EXIST.getCode(), resourceNotExits.getCode().intValue());
DataSource dataSource = new DataSource();
dataSource.setUserId(0);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Result userNoOperationPerm = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM.getCode(), userNoOperationPerm.getCode().intValue());
dataSource.setUserId(-1);
List<DataSource> dataSourceList = new ArrayList<>();
dataSourceList.add(dataSource);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(dataSourceList);
Result dataSourceNameExist = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_EXIST.getCode(), dataSourceNameExist.getCode().intValue());
DbType dataSourceType = postgreSqlDatasourceParam.getType();
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(null);
Result connectionResult = new Result(Status.SUCCESS.getCode(), Status.SUCCESS.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result connectFailed = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_CONNECT_FAILED.getCode(), connectFailed.getCode().intValue());
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(null);
connectionResult = new Result(Status.DATASOURCE_CONNECT_FAILED.getCode(), Status.DATASOURCE_CONNECT_FAILED.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result success = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.SUCCESS.getCode(), success.getCode().intValue());
}
@Test
public void queryDataSourceListPagingTest() {
User loginUser = getAdminUser();
String searchVal = "";
int pageNo = 1;
int pageSize = 10;
Result result = dataSourceService.queryDataSourceListPaging(loginUser, searchVal, pageNo, pageSize);
Assert.assertEquals(Status.SUCCESS.getCode(),(int)result.getCode());
}
@Test
public void connectionTest() {
int dataSourceId = -1;
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null);
Result result = dataSourceService.connectionTest(dataSourceId);
Assert.assertEquals(Status.RESOURCE_NOT_EXIST.getCode(), result.getCode().intValue());
}
@Test
public void deleteTest() {
User loginUser = getAdminUser();
int dataSourceId = 1;
Result result = new Result();
dataSourceService.putMsg(result, Status.RESOURCE_NOT_EXIST);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
dataSourceService.putMsg(result, Status.USER_NO_OPERATION_PERM);
DataSource dataSource = new DataSource();
dataSource.setUserId(0);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
dataSourceService.putMsg(result, Status.SUCCESS);
dataSource.setUserId(-1);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
}
@Test
public void unauthDatasourceTest() {
User loginUser = getAdminUser();
int userId = -1;
Map<String, Object> noOperationPerm = dataSourceService.unauthDatasource(loginUser, userId);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, noOperationPerm.get(Constants.STATUS));
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> success = dataSourceService.unauthDatasource(loginUser, userId);
Assert.assertEquals(Status.SUCCESS, success.get(Constants.STATUS));
}
@Test
public void authedDatasourceTest() {
User loginUser = getAdminUser();
int userId = -1; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | Map<String, Object> noOperationPerm = dataSourceService.authedDatasource(loginUser, userId);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, noOperationPerm.get(Constants.STATUS));
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> success = dataSourceService.authedDatasource(loginUser, userId);
Assert.assertEquals(Status.SUCCESS, success.get(Constants.STATUS));
}
@Test
public void queryDataSourceListTest() {
User loginUser = new User();
loginUser.setUserType(UserType.GENERAL_USER);
Map<String, Object> map = dataSourceService.queryDataSourceList(loginUser, DbType.MYSQL.ordinal());
Assert.assertEquals(Status.SUCCESS, map.get(Constants.STATUS));
}
@Test
public void verifyDataSourceNameTest() {
User loginUser = new User();
loginUser.setUserType(UserType.GENERAL_USER);
String dataSourceName = "dataSource1";
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(getDataSourceList());
Result result = dataSourceService.verifyDataSourceName(dataSourceName);
Assert.assertEquals(Status.DATASOURCE_EXIST.getMsg(), result.getMsg());
}
@Test
public void queryDataSourceTest() {
PowerMockito.when(dataSourceMapper.selectById(Mockito.anyInt())).thenReturn(null);
Map<String, Object> result = dataSourceService.queryDataSource(Mockito.anyInt());
Assert.assertEquals(((Status) result.get(Constants.STATUS)).getCode(), Status.RESOURCE_NOT_EXIST.getCode());
PowerMockito.when(dataSourceMapper.selectById(Mockito.anyInt())).thenReturn(getOracleDataSource());
result = dataSourceService.queryDataSource(Mockito.anyInt()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | Assert.assertEquals(((Status) result.get(Constants.STATUS)).getCode(), Status.SUCCESS.getCode());
}
private List<DataSource> getDataSourceList() {
List<DataSource> dataSources = new ArrayList<>();
dataSources.add(getOracleDataSource());
return dataSources;
}
private DataSource getOracleDataSource() {
DataSource dataSource = new DataSource();
dataSource.setName("test");
dataSource.setNote("Note");
dataSource.setType(DbType.ORACLE);
dataSource.setConnectionParams("{\"connectType\":\"ORACLE_SID\",\"address\":\"jdbc:oracle:thin:@192.168.xx.xx:49161\",\"database\":\"XE\","
+ "\"jdbcUrl\":\"jdbc:oracle:thin:@192.168.xx.xx:49161/XE\",\"user\":\"system\",\"password\":\"oracle\"}");
return dataSource;
}
@Test
public void buildParameter() {
OracleDatasourceParamDTO oracleDatasourceParamDTO = new OracleDatasourceParamDTO();
oracleDatasourceParamDTO.setHost("192.168.9.1");
oracleDatasourceParamDTO.setPort(1521);
oracleDatasourceParamDTO.setDatabase("im");
oracleDatasourceParamDTO.setUserName("test");
oracleDatasourceParamDTO.setPassword("test");
oracleDatasourceParamDTO.setConnectType(DbConnectType.ORACLE_SERVICE_NAME);
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(oracleDatasourceParamDTO);
String expected = "{\"user\":\"test\",\"password\":\"test\",\"address\":\"jdbc:oracle:thin:@//192.168.9.1:1521\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:oracle:thin:@//192.168.9.1:1521/im\","
+ "\"driverClassName\":\"oracle.jdbc.OracleDriver\",\"validationQuery\":\"select 1 from dual\",\"connectType\":\"ORACLE_SERVICE_NAME\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
PowerMockito.mockStatic(CommonUtils.class); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | PowerMockito.mockStatic(PasswordUtils.class);
PowerMockito.when(CommonUtils.getKerberosStartupState()).thenReturn(true);
PowerMockito.when(PasswordUtils.encodePassword(Mockito.anyString())).thenReturn("test");
HiveDataSourceParamDTO hiveDataSourceParamDTO = new HiveDataSourceParamDTO();
hiveDataSourceParamDTO.setHost("192.168.9.1");
hiveDataSourceParamDTO.setPort(10000);
hiveDataSourceParamDTO.setDatabase("im");
hiveDataSourceParamDTO.setPrincipal("hive/[email protected]");
hiveDataSourceParamDTO.setUserName("test");
hiveDataSourceParamDTO.setPassword("test");
hiveDataSourceParamDTO.setJavaSecurityKrb5Conf("/opt/krb5.conf");
hiveDataSourceParamDTO.setLoginUserKeytabPath("/opt/hdfs.headless.keytab");
hiveDataSourceParamDTO.setLoginUserKeytabUsername("test2/[email protected]");
connectionParam = DatasourceUtil.buildConnectionParams(hiveDataSourceParamDTO);
expected = "{\"user\":\"test\",\"password\":\"test\",\"address\":\"jdbc:hive2://192.168.9.1:10000\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:hive2://192.168.9.1:10000/im;"
+ "principal=hive/[email protected]\",\"driverClassName\":\"org.apache.hive.jdbc.HiveDriver\",\"validationQuery\":\"select 1\",\"principal\":\"hive/[email protected]\","
+ "\"javaSecurityKrb5Conf\":\"/opt/krb5.conf\",\"loginUserKeytabUsername\":\"test2/[email protected]\",\"loginUserKeytabPath\":\"/opt/hdfs.headless.keytab\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
}
@Test
public void buildParameterWithDecodePassword() {
PropertyUtils.setValue(Constants.DATASOURCE_ENCRYPTION_ENABLE, "true");
Map<String, String> other = new HashMap<>();
other.put("autoDeserialize", "yes");
other.put("allowUrlInLocalInfile", "true");
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setHost("192.168.9.1");
mysqlDatasourceParamDTO.setPort(1521);
mysqlDatasourceParamDTO.setDatabase("im");
mysqlDatasourceParamDTO.setUserName("test"); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | mysqlDatasourceParamDTO.setPassword("123456");
mysqlDatasourceParamDTO.setOther(other);
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(mysqlDatasourceParamDTO);
String expected = "{\"user\":\"test\",\"password\":\"IUAjJCVeJipNVEl6TkRVMg==\",\"address\":\"jdbc:mysql://192.168.9.1:1521\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:mysql://192.168.9.1:1521/"
+ "im\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"validationQuery\":\"select 1\",\"props\":{\"autoDeserialize\":\"yes\",\"allowUrlInLocalInfile\":\"true\"}}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
PropertyUtils.setValue(Constants.DATASOURCE_ENCRYPTION_ENABLE, "false");
mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setHost("192.168.9.1");
mysqlDatasourceParamDTO.setPort(1521);
mysqlDatasourceParamDTO.setDatabase("im");
mysqlDatasourceParamDTO.setUserName("test");
mysqlDatasourceParamDTO.setPassword("123456");
connectionParam = DatasourceUtil.buildConnectionParams(mysqlDatasourceParamDTO);
expected = "{\"user\":\"test\",\"password\":\"123456\",\"address\":\"jdbc:mysql://192.168.9.1:1521\",\"database\":\"im\","
+ "\"jdbcUrl\":\"jdbc:mysql://192.168.9.1:1521/im\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"validationQuery\":\"select 1\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
}
/**
* get Mock Admin User
*
* @return admin user
*/
private User getAdminUser() {
User loginUser = new User();
loginUser.setId(-1);
loginUser.setUserName("admin");
loginUser.setUserType(UserType.GENERAL_USER);
return loginUser;
} |
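A side note on the expected ciphertext in `buildParameterWithDecodePassword` above: the value `IUAjJCVeJipNVEl6TkRVMg==` decodes to `!@#$%^&*MTIzNDU2`, i.e. a fixed prefix followed by the base64 of the plain password `123456`, base64-encoded once more. The snippet below reproduces only that string value; it is a standalone illustration of the expected test output, not DolphinScheduler's `PasswordUtils` implementation.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DatasourcePasswordEncodingExample {
    public static void main(String[] args) {
        // base64 of the plain password "123456"
        String inner = Base64.getEncoder()
                .encodeToString("123456".getBytes(StandardCharsets.UTF_8)); // MTIzNDU2

        // prepend the fixed prefix seen in the decoded expected value, then base64-encode again
        String encoded = Base64.getEncoder()
                .encodeToString(("!@#$%^&*" + inner).getBytes(StandardCharsets.UTF_8));

        System.out.println(encoded); // IUAjJCVeJipNVEl6TkRVMg==
    }
}
```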
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | /**
* test check connection
*/
@Test
public void testCheckConnection() throws Exception {
DbType dataSourceType = DbType.POSTGRESQL;
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres");
postgreSqlDatasourceParam.setPassword("");
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
PowerMockito.mockStatic(DatasourceUtil.class);
PowerMockito.mockStatic(DataSourceClientProvider.class);
DataSourceClientProvider clientProvider = PowerMockito.mock(DataSourceClientProvider.class);
PowerMockito.when(DataSourceClientProvider.getInstance()).thenReturn(clientProvider);
Result result = dataSourceService.checkConnection(dataSourceType, connectionParam);
Assert.assertEquals(Status.CONNECTION_TEST_FAILURE.getCode(), result.getCode().intValue());
Connection connection = PowerMockito.mock(Connection.class);
PowerMockito.when(clientProvider.getConnection(Mockito.any(), Mockito.any())).thenReturn(connection);
result = dataSourceService.checkConnection(dataSourceType, connectionParam);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.AbstractDatasourceProcessor;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.BaseDataSourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.spi.datasource.ConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import org.apache.commons.collections4.MapUtils;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MysqlDatasourceProcessor extends AbstractDatasourceProcessor {
private final Logger logger = LoggerFactory.getLogger(MysqlDatasourceProcessor.class);
private static final String ALLOW_LOAD_LOCAL_IN_FILE_NAME = "allowLoadLocalInfile";
private static final String AUTO_DESERIALIZE = "autoDeserialize";
private static final String ALLOW_LOCAL_IN_FILE_NAME = "allowLocalInfile";
private static final String ALLOW_URL_IN_LOCAL_IN_FILE_NAME = "allowUrlInLocalInfile";
private static final String APPEND_PARAMS = "allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false";
@Override
public BaseDataSourceParamDTO createDatasourceParamDTO(String connectionJson) {
MysqlConnectionParam connectionParams = (MysqlConnectionParam) createConnectionParams(connectionJson);
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setUserName(connectionParams.getUser());
mysqlDatasourceParamDTO.setDatabase(connectionParams.getDatabase());
mysqlDatasourceParamDTO.setOther(parseOther(connectionParams.getOther()));
String address = connectionParams.getAddress();
String[] hostSeperator = address.split(Constants.DOUBLE_SLASH);
String[] hostPortArray = hostSeperator[hostSeperator.length - 1].split(Constants.COMMA);
mysqlDatasourceParamDTO.setPort(Integer.parseInt(hostPortArray[0].split(Constants.COLON)[1]));
mysqlDatasourceParamDTO.setHost(hostPortArray[0].split(Constants.COLON)[0]);
return mysqlDatasourceParamDTO;
}
@Override
public BaseConnectionParam createConnectionParams(BaseDataSourceParamDTO dataSourceParam) {
MysqlDatasourceParamDTO mysqlDatasourceParam = (MysqlDatasourceParamDTO) dataSourceParam;
String address = String.format("%s%s:%s", Constants.JDBC_MYSQL, mysqlDatasourceParam.getHost(), mysqlDatasourceParam.getPort()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | String jdbcUrl = String.format("%s/%s", address, mysqlDatasourceParam.getDatabase());
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl(jdbcUrl);
mysqlConnectionParam.setDatabase(mysqlDatasourceParam.getDatabase());
mysqlConnectionParam.setAddress(address);
mysqlConnectionParam.setUser(mysqlDatasourceParam.getUserName());
mysqlConnectionParam.setPassword(PasswordUtils.encodePassword(mysqlDatasourceParam.getPassword()));
mysqlConnectionParam.setDriverClassName(getDatasourceDriver());
mysqlConnectionParam.setValidationQuery(getValidationQuery());
mysqlConnectionParam.setOther(transformOther(mysqlDatasourceParam.getOther()));
mysqlConnectionParam.setProps(mysqlDatasourceParam.getOther());
return mysqlConnectionParam;
}
@Override
public ConnectionParam createConnectionParams(String connectionJson) {
return JSONUtils.parseObject(connectionJson, MysqlConnectionParam.class);
}
@Override
public String getDatasourceDriver() {
return Constants.COM_MYSQL_JDBC_DRIVER;
}
@Override
public String getValidationQuery() {
return Constants.MYSQL_VALIDATION_QUERY;
}
@Override
public String getJdbcUrl(ConnectionParam connectionParam) {
MysqlConnectionParam mysqlConnectionParam = (MysqlConnectionParam) connectionParam;
String jdbcUrl = mysqlConnectionParam.getJdbcUrl();
if (!StringUtils.isEmpty(mysqlConnectionParam.getOther())) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | return String.format("%s?%s&%s", jdbcUrl, mysqlConnectionParam.getOther(), APPEND_PARAMS);
}
return String.format("%s?%s", jdbcUrl, APPEND_PARAMS);
}
@Override
public Connection getConnection(ConnectionParam connectionParam) throws ClassNotFoundException, SQLException {
MysqlConnectionParam mysqlConnectionParam = (MysqlConnectionParam) connectionParam;
Class.forName(getDatasourceDriver());
String user = mysqlConnectionParam.getUser();
if (user.contains(AUTO_DESERIALIZE)) {
logger.warn("sensitive param : {} in username field is filtered", AUTO_DESERIALIZE);
user = user.replace(AUTO_DESERIALIZE, "");
}
String password = PasswordUtils.decodePassword(mysqlConnectionParam.getPassword());
if (password.contains(AUTO_DESERIALIZE)) {
logger.warn("sensitive param : {} in password field is filtered", AUTO_DESERIALIZE);
password = password.replace(AUTO_DESERIALIZE, "");
}
return DriverManager.getConnection(getJdbcUrl(connectionParam), user, password);
}
@Override
public DbType getDbType() {
return DbType.MYSQL;
}
private String transformOther(Map<String, String> paramMap) {
if (MapUtils.isEmpty(paramMap)) {
return null;
}
Map<String, String> otherMap = new HashMap<>();
paramMap.forEach((k, v) -> { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | if (!checkKeyIsLegitimate(k)) {
return;
}
otherMap.put(k, v);
});
if (MapUtils.isEmpty(otherMap)) {
return null;
}
StringBuilder stringBuilder = new StringBuilder();
otherMap.forEach((key, value) -> stringBuilder.append(String.format("%s=%s&", key, value)));
return stringBuilder.toString();
}
private static boolean checkKeyIsLegitimate(String key) {
return !key.contains(ALLOW_LOAD_LOCAL_IN_FILE_NAME)
&& !key.contains(AUTO_DESERIALIZE)
&& !key.contains(ALLOW_LOCAL_IN_FILE_NAME)
&& !key.contains(ALLOW_URL_IN_LOCAL_IN_FILE_NAME);
}
private Map<String, String> parseOther(String other) {
if (StringUtils.isEmpty(other)) {
return null;
}
Map<String, String> otherMap = new LinkedHashMap<>();
for (String config : other.split("&")) {
otherMap.put(config.split("=")[0], config.split("=")[1]);
}
return otherMap;
}
} |
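The processor above still returns `Constants.COM_MYSQL_JDBC_DRIVER` and appends only the local-infile/deserialization guards to the JDBC URL. A rough, hypothetical sketch of how the two MySQL-specific pieces could look after the change requested in issue #6505 follows; the literal driver string, the added `useSSL=false` parameter, and the class name are illustrative assumptions, not the code merged in PR #6708.

```java
public class MysqlDriverUpgradeSketch {

    // Connector/J 8.x driver class replacing the deprecated com.mysql.jdbc.Driver.
    static final String COM_MYSQL_CJ_JDBC_DRIVER = "com.mysql.cj.jdbc.Driver";

    // The existing security-related parameters, plus the useSSL flag mentioned in the issue.
    static final String APPEND_PARAMS =
            "allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false"
            + "&allowUrlInLocalInfile=false&useSSL=false";

    static String getDatasourceDriver() {
        return COM_MYSQL_CJ_JDBC_DRIVER;
    }

    static String getJdbcUrl(String baseJdbcUrl, String other) {
        // Mirrors the getJdbcUrl(...) shape above: append user params first, then the fixed ones.
        return other == null || other.isEmpty()
                ? String.format("%s?%s", baseJdbcUrl, APPEND_PARAMS)
                : String.format("%s?%s&%s", baseJdbcUrl, other, APPEND_PARAMS);
    }
}
```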
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, as `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessorTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0 |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessorTest.java | * (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql;
import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.CommonUtils;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.DatasourceUtil;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
@RunWith(PowerMockRunner.class) |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessorTest.java | @PrepareForTest({Class.class, DriverManager.class, DatasourceUtil.class, CommonUtils.class, DataSourceClientProvider.class, PasswordUtils.class})
public class MysqlDatasourceProcessorTest {
private MysqlDatasourceProcessor mysqlDatasourceProcessor = new MysqlDatasourceProcessor();
@Test
public void testCreateConnectionParams() {
Map<String, String> props = new HashMap<>();
props.put("serverTimezone", "utc");
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setUserName("root");
mysqlDatasourceParamDTO.setPassword("123456");
mysqlDatasourceParamDTO.setHost("localhost");
mysqlDatasourceParamDTO.setPort(3306);
mysqlDatasourceParamDTO.setDatabase("default");
mysqlDatasourceParamDTO.setOther(props);
PowerMockito.mockStatic(PasswordUtils.class);
PowerMockito.when(PasswordUtils.encodePassword(Mockito.anyString())).thenReturn("test");
MysqlConnectionParam connectionParams = (MysqlConnectionParam) mysqlDatasourceProcessor
.createConnectionParams(mysqlDatasourceParamDTO);
Assert.assertEquals("jdbc:mysql://localhost:3306", connectionParams.getAddress());
Assert.assertEquals("jdbc:mysql://localhost:3306/default", connectionParams.getJdbcUrl());
}
@Test
public void testCreateConnectionParams2() {
String connectionJson = "{\"user\":\"root\",\"password\":\"123456\",\"address\":\"jdbc:mysql://localhost:3306\""
+ ",\"database\":\"default\",\"jdbcUrl\":\"jdbc:mysql://localhost:3306/default\"}";
MysqlConnectionParam connectionParams = (MysqlConnectionParam) mysqlDatasourceProcessor
.createConnectionParams(connectionJson); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessorTest.java | Assert.assertNotNull(connectionJson);
Assert.assertEquals("root", connectionParams.getUser());
}
@Test
public void testGetDatasourceDriver() {
Assert.assertEquals(Constants.COM_MYSQL_JDBC_DRIVER, mysqlDatasourceProcessor.getDatasourceDriver());
}
@Test
public void testGetJdbcUrl() {
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl("jdbc:mysql://localhost:3306/default");
Assert.assertEquals("jdbc:mysql://localhost:3306/default?allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false",
mysqlDatasourceProcessor.getJdbcUrl(mysqlConnectionParam));
}
@Test
public void testGetDbType() {
Assert.assertEquals(DbType.MYSQL, mysqlDatasourceProcessor.getDbType());
}
@Test
public void testGetValidationQuery() {
Assert.assertEquals(Constants.MYSQL_VALIDATION_QUERY, mysqlDatasourceProcessor.getValidationQuery());
}
@Test
public void testGetDatasourceUniqueId() {
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl("jdbc:mysql://localhost:3306/default");
mysqlConnectionParam.setUser("root");
Assert.assertEquals("mysql@root@jdbc:mysql://localhost:3306/default", mysqlDatasourceProcessor.getDatasourceUniqueId(mysqlConnectionParam, DbType.MYSQL));
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.task;
public class TaskConstants { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | private TaskConstants() {
throw new IllegalStateException("Utility class");
}
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
/**
* string false
*/
public static final String STRING_FALSE = "false";
/**
* exit code kill
*/
public static final int EXIT_CODE_KILL = 137;
public static final String PID = "pid";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* slash /
*/
public static final String SLASH = "/";
/**
* COLON : |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | */
public static final String COLON = ":";
/**
* SPACE " "
*/
public static final String SPACE = " ";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* SINGLE_QUOTES "'"
*/
public static final String SINGLE_QUOTES = "'";
/**
* DOUBLE_QUOTES "\""
*/
public static final String DOUBLE_QUOTES = "\"";
/**
* SEMICOLON ;
*/
public static final String SEMICOLON = ";";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "="; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | /**
* AT SIGN
*/
public static final String AT_SIGN = "@";
/**
* sleep time
*/
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
public static final String SH = "sh";
/**
* default log cache rows num,output when reach the number
*/
public static final int DEFAULT_LOG_ROWS_NUM = 4 * 16;
/**
* log flush interval?output when reach the interval
*/
public static final int DEFAULT_LOG_FLUSH_INTERVAL = 1000;
/**
* pstree, get pud and sub pid
*/
public static final String PSTREE = "pstree";
public static final String RWXR_XR_X = "rwxr-xr-x"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | /**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* new
* schedule time
*/
public static final String PARAMETER_SHECDULE_TIME = "schedule.time";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | * the absolute path of current executing task
*/
public static final String PARAMETER_TASK_EXECUTE_PATH = "system.task.execute.path";
/**
* the instance id of current task
*/
public static final String PARAMETER_TASK_INSTANCE_ID = "system.task.instance.id";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String LOCAL_PARAMS_LIST = "localParamsList";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String SWITCH_RESULT = "switchResult";
public static final String DEPENDENCE = "dependence";
public static final String TASK_TYPE = "taskType";
public static final String TASK_LIST = "taskList";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | * default display rows
*/
public static final int DEFAULT_DISPLAY_ROWS = 10;
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D <property>=<value>
*/
public static final String D = "-D";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
/**
* driver
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | /**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* Task Logger Thread's name
*/
public static final String TASK_LOGGER_THREAD_NAME = "TaskLogInfo";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.utils;
/**
* constants
*/
public class Constants { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | private Constants() {
throw new IllegalStateException("Constants class");
}
public static final String STRING_PLUGIN_PARAM_FIELD = "field";
public static final String STRING_PLUGIN_PARAM_NAME = "name";
public static final String STRING_PLUGIN_PARAM_PROPS = "props";
public static final String STRING_PLUGIN_PARAM_TYPE = "type";
public static final String STRING_PLUGIN_PARAM_TITLE = "title";
public static final String STRING_PLUGIN_PARAM_VALUE = "value";
public static final String STRING_PLUGIN_PARAM_VALIDATE = "validate";
public static final String STRING_PLUGIN_PARAM_OPTIONS = "options";
public static final String STRING_TRUE = "true";
public static final String STRING_FALSE = "false";
public static final String STRING_YES = "YES"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | public static final String STRING_NO = "NO";
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* date format of yyyyMMddHHmmssSSS
*/
public static final String YYYYMMDDHHMMSSSSS = "yyyyMMddHHmmssSSS";
public static final String SPRING_DATASOURCE_MIN_IDLE = "spring.datasource.minIdle";
public static final String SPRING_DATASOURCE_MAX_ACTIVE = "spring.datasource.maxActive";
public static final String SPRING_DATASOURCE_TEST_ON_BORROW = "spring.datasource.testOnBorrow";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/** |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | * hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* driver
*/
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* validation Query
*/
public static final String POSTGRESQL_VALIDATION_QUERY = "select version()";
public static final String MYSQL_VALIDATION_QUERY = "select 1";
public static final String HIVE_VALIDATION_QUERY = "select 1";
public static final String CLICKHOUSE_VALIDATION_QUERY = "select 1";
public static final String ORACLE_VALIDATION_QUERY = "select 1 from dual";
public static final String SQLSERVER_VALIDATION_QUERY = "select 1";
public static final String DB2_VALIDATION_QUERY = "select 1 from sysibm.sysdummy1";
public static final String PRESTO_VALIDATION_QUERY = "select 1";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String JDBC_URL = "jdbcUrl"; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | public static final String PRINCIPAL = "principal";
public static final String OTHER = "other";
public static final String ORACLE_DB_CONNECT_TYPE = "connectType";
public static final String KERBEROS_KRB5_CONF_PATH = "javaSecurityKrb5Conf";
public static final String KERBEROS_KEY_TAB_USERNAME = "loginUserKeytabUsername";
public static final String KERBEROS_KEY_TAB_PATH = "loginUserKeytabPath";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* AT SIGN
*/
public static final String AT_SIGN = "@";
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### DS web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### Worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved successfully.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
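One workaround worth trying (an assumption based on the worker log above, which shows the status request going to hadoop47:80): set the ResourceManager web port explicitly instead of leaving it empty, assuming the web UI listens on the default 8088.
# explicit port instead of the empty default (assumption: RM web UI on 8088, single RM, no HA)
resource.manager.httpaddress.port=8088
yarn.resourcemanager.ha.rm.ids=
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s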
### Anything else
It only fails some of the time, but the probability of hitting this error is high.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_UPLOAD_PATH;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.ResUploadType;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.common.exception.BaseException;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
when I run a shell for testing mapreduce in ds (fee image below),ds web log shows : yarn status get failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved successfully.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
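As a quick check (assuming the ResourceManager web UI is on the default port 8088; adjust if yours differs), the same REST endpoint the worker uses can be queried by hand:
curl http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113
If this returns the application JSON while the worker still logs "Connect to hadoop47:80", the problem is the port substitution in the configured address template rather than YARN itself.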
### Anything else
It only fails some of the time, but the probability of hitting this error is high.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.cli.RMAdminCLI;
import java.io.BufferedReader;
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
/**
* hadoop utils
* single instance
*/
public class HadoopUtils implements Closeable { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved successfully.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
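The two %s placeholders in the template above suggest the final URL is filled in with the http port and the application id, e.g. via String.format. The small self-contained sketch below (illustrative only, not DolphinScheduler code) shows why an empty port value produces an address that an HTTP client resolves to port 80 — which matches the "Connect to hadoop47:80 ... Connection refused" error in the worker log:
public class YarnUrlDemo {
    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        String appId = "application_1634958933716_0113";
        // empty port -> "http://hadoop47:/ws/..." -> default HTTP port 80
        System.out.println(String.format(template, "", appId));
        // explicit ResourceManager web port -> "http://hadoop47:8088/ws/..."
        System.out.println(String.format(template, "8088", appId));
    }
}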
### Anything else
It only fails some of the time, but the probability of hitting this error is high.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | private static final Logger logger = LoggerFactory.getLogger(HadoopUtils.class);
private static String hdfsUser = PropertyUtils.getString(Constants.HDFS_ROOT_USER);
public static final String resourceUploadPath = PropertyUtils.getString(RESOURCE_UPLOAD_PATH, "/dolphinscheduler");
public static final String rmHaIds = PropertyUtils.getString(Constants.YARN_RESOURCEMANAGER_HA_RM_IDS);
public static final String appAddress = PropertyUtils.getString(Constants.YARN_APPLICATION_STATUS_ADDRESS);
public static final String jobHistoryAddress = PropertyUtils.getString(Constants.YARN_JOB_HISTORY_STATUS_ADDRESS);
private static final String HADOOP_UTILS_KEY = "HADOOP_UTILS_KEY";
private static final LoadingCache<String, HadoopUtils> cache = CacheBuilder
.newBuilder()
.expireAfterWrite(PropertyUtils.getInt(Constants.KERBEROS_EXPIRE_TIME, 2), TimeUnit.HOURS)
.build(new CacheLoader<String, HadoopUtils>() {
@Override
public HadoopUtils load(String key) throws Exception {
return new HadoopUtils();
}
});
private static volatile boolean yarnEnabled = false;
private Configuration configuration;
private FileSystem fs;
private HadoopUtils() {
init();
initHdfsPath();
}
public static HadoopUtils getInstance() {
return cache.getUnchecked(HADOOP_UTILS_KEY);
} |
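    // Illustrative sketch only, not the actual change merged in PR #6661. The stack trace in this
    // issue shows HttpUtils.get(...) failing with "Connection refused" immediately before the
    // NullPointerException at HadoopUtils.java:423, which suggests a null HTTP response is being
    // parsed. A guard of roughly this shape (hypothetical method name and placement) would turn an
    // unreachable ResourceManager into an explicit FAILURE instead of an NPE; it assumes
    // HttpUtils.get(String) returns null when the request fails, as the worker log indicates.
    private static ExecutionStatus statusFromResponse(String responseContent, String applicationId) {
        if (StringUtils.isEmpty(responseContent)) {
            logger.error("yarn application {} status query returned no content, "
                    + "check yarn.application.status.address and resource.manager.httpaddress.port", applicationId);
            return ExecutionStatus.FAILURE;
        }
        // placeholder: the real code maps the YARN finalStatus (SUCCEEDED/FAILED/KILLED) to an ExecutionStatus here
        return ExecutionStatus.SUCCESS;
    }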
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
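The stack traces above show the failure chain: `HttpUtils.get` returns `null` once the connection to the ResourceManager is refused, and `HadoopUtils.getApplicationStatus` then dereferences that null response (the NPE at HadoopUtils.java:423). Below is a minimal, self-contained sketch of a null-safe lookup; it is illustrative only, not the DolphinScheduler implementation, and `httpGet`/`applicationStatus` are hypothetical stand-ins:
```java
import java.util.Optional;

public class YarnStatusProbe {

    /** Hypothetical stand-in for HttpUtils.get: returns null when the RM endpoint is unreachable. */
    static String httpGet(String url) {
        return null; // simulate "Connection refused"
    }

    /** Null-safe variant: report "unknown" instead of throwing a NullPointerException. */
    static Optional<String> applicationStatus(String statusUrl) {
        String body = httpGet(statusUrl);
        if (body == null || body.isEmpty()) {
            return Optional.empty(); // caller can retry or mark the task failed explicitly
        }
        // a real implementation would parse the JSON body and map the final status to a task state
        return Optional.of(body);
    }

    public static void main(String[] args) {
        System.out.println(
                applicationStatus("http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113"));
    }
}
```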
### What you expected to happen
The status of Yarn application application_1634958933716_0113 can always be retrieved.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
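A simplified illustration of why this configuration matters (assumed behaviour, not the scheduler's actual code path): the debug log above prints the raw template `http://hadoop47:%s/ws/v1/cluster/apps/%s`, and with `resource.manager.httpaddress.port` left empty the formatted URL carries no usable port, so the HTTP client ends up on the default port 80, which matches the `Connect to hadoop47:80 ... Connection refused` error in the logs.
```java
public class YarnUrlDemo {
    public static void main(String[] args) {
        // values taken from the common.properties snippet above
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s"; // yarn.application.status.address
        String port = "";                                             // resource.manager.httpaddress.port left empty
        String appId = "application_1634958933716_0113";

        // With an empty port the authority degenerates to "hadoop47:", so a client
        // falls back to the default HTTP port 80 instead of the RM web UI port 8088.
        System.out.println(String.format(template, port, appId));
        // -> http://hadoop47:/ws/v1/cluster/apps/application_1634958933716_0113
    }
}
```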
### Anything else
It fails intermittently; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | /**
* init dolphinscheduler root path in hdfs
*/
private void initHdfsPath() {
Path path = new Path(resourceUploadPath);
try {
if (!fs.exists(path)) {
fs.mkdirs(path);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* init hadoop configuration
*/
private void init() {
try {
configuration = new HdfsConfiguration();
String resourceStorageType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
ResUploadType resUploadType = ResUploadType.valueOf(resourceStorageType);
if (resUploadType == ResUploadType.HDFS) {
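                // when the Kerberos configuration is loaded successfully, the statically configured hdfs user is cleared below and is not used for authentication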
if (CommonUtils.loadKerberosConf(configuration)) {
hdfsUser = "";
}
String defaultFS = configuration.get(Constants.FS_DEFAULTFS);
if (defaultFS.startsWith("file")) {
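                    // core-site.xml / hdfs-site.xml still point at the local file:// system, so fall back to DolphinScheduler's own fs.defaultFS property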
String defaultFSProp = PropertyUtils.getString(Constants.FS_DEFAULTFS); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | if (StringUtils.isNotBlank(defaultFSProp)) {
Map<String, String> fsRelatedProps = PropertyUtils.getPrefixedProperties("fs.");
configuration.set(Constants.FS_DEFAULTFS, defaultFSProp);
fsRelatedProps.forEach((key, value) -> configuration.set(key, value));
} else {
logger.error("property:{} cannot be empty, please set it!", Constants.FS_DEFAULTFS);
throw new RuntimeException(
        String.format("property: %s cannot be empty, please set it!", Constants.FS_DEFAULTFS)
);
}
} else {
logger.info("get property:{} -> {}, from core-site.xml hdfs-site.xml ", Constants.FS_DEFAULTFS, defaultFS);
}
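                // if hdfs.root.user is configured, access HDFS as that remote user via doAs; otherwise use the current login user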
if (StringUtils.isNotEmpty(hdfsUser)) {
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(hdfsUser);
ugi.doAs((PrivilegedExceptionAction<Boolean>) () -> {
fs = FileSystem.get(configuration);
return true;
});
} else {
logger.warn("hdfs.root.user is not set value!");
fs = FileSystem.get(configuration);
}
} else if (resUploadType == ResUploadType.S3) {
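                // S3 mode: enable V4 signing and take the s3a endpoint and credentials from DolphinScheduler properties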
System.setProperty(Constants.AWS_S3_V4, Constants.STRING_TRUE);
configuration.set(Constants.FS_DEFAULTFS, PropertyUtils.getString(Constants.FS_DEFAULTFS));
configuration.set(Constants.FS_S3A_ENDPOINT, PropertyUtils.getString(Constants.FS_S3A_ENDPOINT));
configuration.set(Constants.FS_S3A_ACCESS_KEY, PropertyUtils.getString(Constants.FS_S3A_ACCESS_KEY));
configuration.set(Constants.FS_S3A_SECRET_KEY, PropertyUtils.getString(Constants.FS_S3A_SECRET_KEY));
fs = FileSystem.get(configuration); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | }
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* @return Configuration
*/
public Configuration getConfiguration() {
return configuration;
}
/**
* @return DefaultFS
*/
public String getDefaultFS() {
return getConfiguration().get(Constants.FS_DEFAULTFS);
}
/**
* get application url
*
* @param applicationId application id
* @return url of application
*/
public String getApplicationUrl(String applicationId) throws Exception {
/**
 * if rmHaIds contains "xx", the resourcemanager is not used at all
 * otherwise:
 * if rmHaIds is empty, a single resourcemanager is enabled
 * if rmHaIds is not empty, resourcemanager HA is enabled
*/ |
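        // Illustrative sketch only, not the real method body (the chunk is cut off at this boundary):
        // the three cases described in the comment above could be resolved roughly as follows,
        // with rmHaIds / appAddress coming from the yarn.* properties shown in the issue body:
        //
        //   if (rmHaIds.contains("xx"))   -> do not query the resourcemanager at all
        //   else if (rmHaIds.isEmpty())   -> format appAddress with the single RM host and port
        //   else                          -> probe the comma-separated RM ids for the active one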
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
when I run a shell for testing mapreduce in ds (fee image below),ds web log shows : yarn status get failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of YARN application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
YARN HA: false
conf/common.properties:
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
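The two `%s` placeholders in `yarn.application.status.address` are meant to be filled in before the request is sent: the first with the resourcemanager HTTP port (8088 when `resource.manager.httpaddress.port` is left empty), the second with the application id, which is what the `String.format` call in the `HadoopUtils` chunk further down does. The sketch below only illustrates that substitution plus the null guard the failing `getApplicationStatus` path appears to need; the class name and the `fetch` helper are made up for this example, and the host, port, and application id are taken from this issue's logs.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// illustrative only: YarnStatusCheckSketch and fetch() are not part of DolphinScheduler
public class YarnStatusCheckSketch {

    // template and values as they appear in this issue's common.properties and logs
    static final String TEMPLATE = "http://hadoop47:%s/ws/v1/cluster/apps/%s";

    public static void main(String[] args) {
        String port = "8088"; // default when resource.manager.httpaddress.port is empty
        String appId = "application_1634958933716_0113";

        // fill both placeholders; the result is the URL the worker should request
        String url = String.format(TEMPLATE, port, appId);
        System.out.println(url);

        String body = fetch(url);
        if (body == null) {
            // guard against an unreachable resourcemanager instead of dereferencing null,
            // which is what the NullPointerException in the stack traces above points to
            System.err.println("resourcemanager not reachable, yarn state unknown");
            return;
        }
        System.out.println(body);
    }

    // stand-in for HttpUtils.get: returns null instead of throwing on connection errors
    static String fetch(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            try (InputStream in = conn.getInputStream()) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        } catch (IOException e) {
            return null;
        }
    }
}
```

With both placeholders filled the request would go to hadoop47:8088; the worker log above instead shows a connection attempt to hadoop47:80 that is refused and a null response that is then dereferenced, which lines up with the NullPointerException reported at HadoopUtils.getApplicationStatus line 423.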
### Anything else
The task fails intermittently; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | yarnEnabled = true;
String appUrl = StringUtils.isEmpty(rmHaIds) ? appAddress : getAppAddress(appAddress, rmHaIds);
if (StringUtils.isBlank(appUrl)) {
throw new BaseException("yarn application url generation failed");
}
if (logger.isDebugEnabled()) {
logger.debug("yarn application url:{}, applicationId:{}", appUrl, applicationId);
}
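// fill the two %s placeholders of yarn.application.status.address: first the resourcemanager
// HTTP port (falling back to 8088 when none is configured), then the application id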
String activeResourceManagerPort = String.valueOf(PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088));
return String.format(appUrl, activeResourceManagerPort, applicationId);
}
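// the job history URL is built by swapping the "application" prefix of the application id
// for "job", e.g. application_1634958933716_0113 -> job_1634958933716_0113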
public String getJobHistoryUrl(String applicationId) {
String jobId = applicationId.replace("application", "job");
return String.format(jobHistoryAddress, jobId);
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @return byte[] byte array
* @throws IOException errors
*/
public byte[] catFile(String hdfsFilePath) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return new byte[0];
}
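// open the HDFS stream and buffer the whole file into memory; fine for small files,
// but a large file would be loaded in full here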
try (FSDataInputStream fsDataInputStream = fs.open(new Path(hdfsFilePath))) {
return IOUtils.toByteArray(fsDataInputStream); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed |
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | }
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @param skipLineNums skip line numbers
* @param limit read how many lines
* @return content of file
* @throws IOException errors
*/
public List<String> catFile(String hdfsFilePath, int skipLineNums, int limit) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return Collections.emptyList();
}
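// stream the file lazily: drop the first skipLineNums lines and collect at most limit lines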
try (FSDataInputStream in = fs.open(new Path(hdfsFilePath))) {
BufferedReader br = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
Stream<String> stream = br.lines().skip(skipLineNums).limit(limit);
return stream.collect(Collectors.toList());
}
}
/**
* make the given file and all non-existent parents into
* directories. Has the semantics of Unix 'mkdir -p'.
* Existence of the directory hierarchy is not an error.
*
* @param hdfsPath path to create
* @return mkdir result
* @throws IOException errors |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed |
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | */
public boolean mkdir(String hdfsPath) throws IOException {
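// fs.mkdirs behaves like 'mkdir -p': missing parents are created and an already existing directory is not an error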
return fs.mkdirs(new Path(hdfsPath));
}
/**
* copy files between FileSystems
*
* @param srcPath source hdfs path
* @param dstPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copy(String srcPath, String dstPath, boolean deleteSource, boolean overwrite) throws IOException {
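// delegate to Hadoop's FileUtil.copy, with this HDFS FileSystem as both source and destination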
return FileUtil.copy(fs, new Path(srcPath), fs, new Path(dstPath), deleteSource, overwrite, fs.getConf());
}
/**
* the src file is on the local disk. Add it to FS at
* the given dst name.
*
* @param srcFile local file
* @param dstHdfsPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copyLocalToHdfs(String srcFile, String dstHdfsPath, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcFile); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that querying the YARN application status failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
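The worker debug log above prints the unresolved template `http://hadoop47:%s/ws/v1/cluster/apps/%s` and then tries to connect to `hadoop47:80`, which suggests the empty `resource.manager.httpaddress.port` never turns into a usable port in the status URL. A minimal plain-Java sketch of that effect (`buildStatusUrl` is a hypothetical helper, not the actual DolphinScheduler code):

```java
// Sketch only: shows how an empty port value leaves the ResourceManager URL without an
// explicit port, so an HTTP client falls back to the scheme default, port 80.
public class YarnStatusUrlSketch {

    // Hypothetical helper: fill the template from yarn.application.status.address.
    static String buildStatusUrl(String template, String port, String applicationId) {
        return String.format(template, port, applicationId);
    }

    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        String port = "";   // resource.manager.httpaddress.port= (assumption: the 8088 default is not applied)
        String appId = "application_1634958933716_0113";

        // Prints http://hadoop47:/ws/v1/cluster/apps/application_1634958933716_0113
        // -> "hadoop47:" with no port resolves to port 80, matching the
        //    "Connect to hadoop47:80 ... Connection refused" error in the log.
        System.out.println(buildStatusUrl(template, port, appId));
    }
}
```

If that is what happens on this build, explicitly setting `resource.manager.httpaddress.port=8088` (or whatever port the ResourceManager web UI listens on) should point the query at the right endpoint.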
### Anything else
Sometimes it fails; there is a high probability of this error.
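Both stack traces point at `HadoopUtils.getApplicationStatus` (HadoopUtils.java:423) dereferencing the value returned by `HttpUtils.get` (HttpUtils.java:60), and the preceding "Connection refused" error strongly suggests that value is null. Below is a self-contained plain-Java sketch of the defensive pattern (standard library only; it is not the DolphinScheduler implementation and not necessarily what the linked fix, PR #6661, does): never hand the response to JSON parsing unless it actually arrived.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

// Sketch only: query the ResourceManager REST API and treat "no response" as an explicit
// failure instead of parsing null, which is what raises the NullPointerException above.
public class YarnAppStatusProbe {

    static String fetch(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            if (conn.getResponseCode() != 200) {
                return null;                                  // non-OK answer -> no usable body
            }
            try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8")) {
                sc.useDelimiter("\\A");
                return sc.hasNext() ? sc.next() : "";
            }
        } catch (IOException e) {
            return null;                                      // connection refused, timeout, ...
        }
    }

    public static void main(String[] args) {
        // Port 8088 is assumed here; an empty/unsubstituted port is what sends the request to :80.
        String url = "http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113";
        String body = fetch(url);
        if (body == null) {
            System.err.println("yarn status query failed for " + url + ", treating state as unknown");
            return;                                           // guard: never parse a null response
        }
        System.out.println(body);                             // JSON containing app.finalStatus etc.
    }
}
```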
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | Path dstPath = new Path(dstHdfsPath);
        fs.copyFromLocalFile(deleteSource, overwrite, srcPath, dstPath);
        return true;
    }
    /**
     * copy hdfs file to local
     *
     * @param srcHdfsFilePath source hdfs file path
     * @param dstFile destination file
     * @param deleteSource delete source
     * @param overwrite overwrite
     * @return result of copy hdfs file to local
     * @throws IOException errors
     */
    public boolean copyHdfsToLocal(String srcHdfsFilePath, String dstFile, boolean deleteSource, boolean overwrite) throws IOException {
        Path srcPath = new Path(srcHdfsFilePath);
        File dstPath = new File(dstFile);
        if (dstPath.exists()) {
            if (dstPath.isFile()) {
                if (overwrite) {
                    Files.delete(dstPath.toPath());
                }
            } else {
                logger.error("destination file must be a file");
            }
        }
        if (!dstPath.getParentFile().exists()) {
            dstPath.getParentFile().mkdirs();
        }
        return FileUtil.copy(fs, srcPath, dstPath, deleteSource, fs.getConf());
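        // Illustrative usage sketch, not part of the original HadoopUtils source; it assumes the
        // HadoopUtils.getInstance() singleton accessor used elsewhere in DolphinScheduler and a
        // hypothetical resource path:
        //
        //     boolean copied = HadoopUtils.getInstance().copyHdfsToLocal(
        //             "/dolphinscheduler/root/resources/udf.jar",   // HDFS source path (hypothetical)
        //             "/tmp/udf.jar",                               // local destination file
        //             false,                                        // keep the HDFS source
        //             true);                                        // overwrite any existing local file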
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
Sometimes it fails; there is a high probability of this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | }
    /**
     * delete a file
     *
     * @param hdfsFilePath the path to delete.
     * @param recursive if path is a directory and set to
     * true, the directory is deleted else throws an exception. In
     * case of a file the recursive can be set to either true or false.
     * @return true if delete is successful else false.
     * @throws IOException errors
     */
    public boolean delete(String hdfsFilePath, boolean recursive) throws IOException {
        return fs.delete(new Path(hdfsFilePath), recursive);
    }
    /**
     * check if exists
     *
     * @param hdfsFilePath source file path
     * @return result of exists or not
     * @throws IOException errors
     */
    public boolean exists(String hdfsFilePath) throws IOException {
        return fs.exists(new Path(hdfsFilePath));
    }
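    // Illustrative usage sketch, not part of the original HadoopUtils source; it assumes the
    // HadoopUtils.getInstance() singleton accessor and a hypothetical execution directory:
    //
    //     HadoopUtils hdfs = HadoopUtils.getInstance();
    //     if (hdfs.exists("/dolphinscheduler/exec/process/1/1/6/89")) {
    //         hdfs.delete("/dolphinscheduler/exec/process/1/1/6/89", true);   // recursive delete
    //     }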
    /**
     * Gets a list of files in the directory
     *
     * @param filePath file path
     * @return {@link FileStatus} file status
     * @throws Exception errors
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
Sometimes it fails; there is a high probability of this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | */
    public FileStatus[] listFileStatus(String filePath) throws Exception {
        try {
            return fs.listStatus(new Path(filePath));
        } catch (IOException e) {
            logger.error("Get file list exception", e);
            throw new Exception("Get file list exception", e);
        }
    }
    /**
     * Renames Path src to Path dst. Can take place on local fs
     * or remote DFS.
     *
     * @param src path to be renamed
     * @param dst new path after rename
     * @return true if rename is successful
     * @throws IOException on failure
     */
    public boolean rename(String src, String dst) throws IOException {
        return fs.rename(new Path(src), new Path(dst));
    }
    /**
     * hadoop resourcemanager enabled or not
     *
     * @return result
     */
    public boolean isYarnEnabled() {
        return yarnEnabled;
    }
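    // Illustrative cross-reference, not part of the original HadoopUtils source: per the stack traces in
    // this issue, when yarnEnabled is true AbstractCommandExecutor.isSuccessOfYarnState(...) calls
    // HadoopUtils.getApplicationStatus(appId) for each application id found in the task output - the call
    // that throws the NullPointerException when the ResourceManager REST request fails. A hedged usage
    // sketch (assumes the HadoopUtils.getInstance() accessor and an ExecutionStatus return type):
    //
    //     if (HadoopUtils.getInstance().isYarnEnabled()) {
    //         ExecutionStatus state = HadoopUtils.getInstance()
    //                 .getApplicationStatus("application_1634958933716_0113");
    //     }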
    /**
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
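Reading the two logs together, the failure chain appears to be: the status request goes to hadoop47:80 instead of 8088, the connection is refused, `HttpUtils.get` returns `null`, and `HadoopUtils.getApplicationStatus` then dereferences the missing response, which is the `NullPointerException` reported at `HadoopUtils.java:423` in 1.3.9. The snippet below is only a self-contained illustration of that mechanism using the plain JDK; it is not DolphinScheduler code, and the class and method names are invented for the demo.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Demo of the suspected failure mode: an HTTP helper that swallows the connection
// error and returns null, followed by a caller that uses the response without a
// null check. Running it against an unreachable host/port ends in a
// NullPointerException, mirroring the stack trace above.
public class YarnStatusNpeDemo {

    static String get(String url) {
        try (InputStream in = new URL(url).openStream()) {
            return new String(in.readAllBytes());
        } catch (IOException e) {
            // "Connection refused" (or an unknown host) lands here; the caller only sees null
            return null;
        }
    }

    public static void main(String[] args) {
        String response = get("http://hadoop47:80/ws/v1/cluster/apps/application_1634958933716_0113");
        // No null check -> NullPointerException, just like HadoopUtils.java:423
        System.out.println(response.contains("SUCCEEDED") ? "SUCCESS" : "NOT FINISHED");
    }
}
```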
### What you expected to happen
The status of Yarn application application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
YARN HA: false
conf/common.properties:
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
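Note that the worker debug log prints the raw template `yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s` and then connects to `hadoop47:80`. That strongly suggests the empty `resource.manager.httpaddress.port` is what breaks the URL: formatting the template with an empty port produces `http://hadoop47:/...`, which an HTTP client resolves to port 80. This is an assumption about the mechanism, illustrated below with plain `String.format` rather than the real DolphinScheduler code.

```java
// Shows how an empty port value turns the configured template into a port-80 URL.
public class YarnUrlFormatDemo {
    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s"; // yarn.application.status.address
        String port = "";                                             // resource.manager.httpaddress.port left empty
        String appId = "application_1634958933716_0113";

        String url = String.format(template, port, appId);
        // Prints http://hadoop47:/ws/v1/cluster/apps/application_1634958933716_0113
        // (no explicit port, so the client falls back to 80, matching the worker log)
        System.out.println(url);
    }
}
```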
### Anything else
It does not fail every time, but the probability of hitting this error is high.
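Until the root cause is fixed, one workaround worth trying (an assumption, not a verified fix) is to set the ResourceManager HTTP port explicitly instead of leaving it empty, since the job-tracking URL in the log confirms the web UI really is on 8088:

# resourcemanager port, set explicitly instead of relying on the implicit default
resource.manager.httpaddress.port=8088
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s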
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | * get the state of an application
*
* @param applicationId application id
* @return the return may be null or there may be other parse exceptions
*/
public ExecutionStatus getApplicationStatus(String applicationId) throws Exception {
if (StringUtils.isEmpty(applicationId)) {
return null;
}
String result = Constants.FAILED;
String applicationUrl = getApplicationUrl(applicationId);
if (logger.isDebugEnabled()) {
logger.debug("generate yarn application url, applicationUrl={}", applicationUrl);
}
String responseContent = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(applicationUrl) : HttpUtils.get(applicationUrl);
if (responseContent != null) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
if (!jsonObject.has("app")) {
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("app").path("finalStatus").asText();
} else {
String jobHistoryUrl = getJobHistoryUrl(applicationId);
if (logger.isDebugEnabled()) {
logger.debug("generate yarn job history application url, jobHistoryUrl={}", jobHistoryUrl);
}
responseContent = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(jobHistoryUrl) : HttpUtils.get(jobHistoryUrl);
if (null != responseContent) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed |
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | if (!jsonObject.has("job")) {
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("job").path("state").asText();
} else {
return ExecutionStatus.FAILURE;
}
}
switch (result) {
case Constants.ACCEPTED:
return ExecutionStatus.SUBMITTED_SUCCESS;
case Constants.SUCCEEDED:
return ExecutionStatus.SUCCESS;
case Constants.NEW:
case Constants.NEW_SAVING:
case Constants.SUBMITTED:
case Constants.FAILED:
return ExecutionStatus.FAILURE;
case Constants.KILLED:
return ExecutionStatus.KILL;
case Constants.RUNNING:
default:
return ExecutionStatus.RUNNING_EXECUTION;
}
}
/**
* get data hdfs path
*
* @return data hdfs path
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed |
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | public static String getHdfsDataBasePath() {
if ("/".equals(resourceUploadPath)) {
return "";
} else {
return resourceUploadPath;
}
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @param resourceType resource type
* @return hdfs resource dir
*/
public static String getHdfsDir(ResourceType resourceType, String tenantCode) {
String hdfsDir = "";
if (resourceType.equals(ResourceType.FILE)) {
hdfsDir = getHdfsResDir(tenantCode);
} else if (resourceType.equals(ResourceType.UDF)) {
hdfsDir = getHdfsUdfDir(tenantCode);
}
return hdfsDir;
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @return hdfs resource dir
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
when I run a shell for testing mapreduce in ds (fee image below),ds web log shows : yarn status get failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
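Putting the debug log and this template together: with `resource.manager.httpaddress.port` left empty, the request ends up going to `hadoop47:80`, the connection is refused, and the missing response then surfaces as the NullPointerException inside `HadoopUtils.getApplicationStatus`. The sketch below is only an illustration of the two guards that would avoid that failure mode — filling the port placeholder with the documented default 8088 and null-checking the HTTP response; the class and method names here are hypothetical, not DolphinScheduler's actual implementation.
```java
import java.util.Optional;

// Illustrative sketch only (hypothetical helper, not project code):
// builds the YARN status URL from the common.properties template and
// falls back to the documented default port 8088 when the
// resource.manager.httpaddress.port property is empty.
public final class YarnStatusUrl {

    private static final String DEFAULT_RM_HTTP_PORT = "8088";

    public static String build(String addressTemplate, String configuredPort, String applicationId) {
        // addressTemplate, e.g. "http://hadoop47:%s/ws/v1/cluster/apps/%s"
        String port = (configuredPort == null || configuredPort.trim().isEmpty())
                ? DEFAULT_RM_HTTP_PORT
                : configuredPort.trim();
        return String.format(addressTemplate, port, applicationId);
    }

    // Wraps a possibly-null HTTP response body so the caller can detect a
    // failed query instead of dereferencing null.
    public static Optional<String> bodyOrEmpty(String responseBody) {
        return Optional.ofNullable(responseBody);
    }
}
```
With the values from this issue, `build(template, "", "application_1634958933716_0113")` would resolve to `http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113` rather than an address that silently falls back to port 80.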
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | public static String getHdfsResDir(String tenantCode) {
return String.format("%s/resources", getHdfsTenantDir(tenantCode));
}
/**
* hdfs user dir
*
* @param tenantCode tenant code
* @param userId user id
* @return hdfs resource dir
*/
public static String getHdfsUserDir(String tenantCode, int userId) {
return String.format("%s/home/%d", getHdfsTenantDir(tenantCode), userId);
}
/**
* hdfs udf dir
*
* @param tenantCode tenant code
* @return get udf dir on hdfs
*/
public static String getHdfsUdfDir(String tenantCode) {
return String.format("%s/udfs", getHdfsTenantDir(tenantCode));
}
/**
* get hdfs file name
*
* @param resourceType resource type
* @param tenantCode tenant code
* @param fileName file name
* @return hdfs file name
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
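As the stack trace above shows, the status check dies with an NPE as soon as the HTTP call returns nothing. Below is a self-contained sketch (plain JDK, illustration only — not the actual DolphinScheduler fix) of treating an unreachable ResourceManager as a recoverable "status unknown" result instead of a crash.
```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Optional;

// Illustration only: query the YARN REST endpoint and return Optional.empty()
// when the ResourceManager cannot be reached (e.g. "Connection refused" as in
// the log above), so the caller can retry or mark the task failed gracefully.
public final class YarnStatusQuery {

    public static Optional<String> fetchStatusJson(String applicationUrl) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(applicationUrl).openConnection();
            conn.setConnectTimeout(5_000);
            conn.setReadTimeout(5_000);
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                return Optional.empty();
            }
            try (InputStream in = conn.getInputStream()) {
                // InputStream.readAllBytes requires Java 9+
                return Optional.of(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        } catch (IOException e) {
            return Optional.empty();
        }
    }
}
```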
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | public static String getHdfsFileName(ResourceType resourceType, String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsDir(resourceType, tenantCode), fileName);
}
/**
* get absolute path and name for resource file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for file on hdfs
*/
public static String getHdfsResourceFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsResDir(tenantCode), fileName);
}
/**
* get absolute path and name for udf file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for udf file on hdfs
*/
public static String getHdfsUdfFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
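A quick way to confirm what the template actually resolves to — shown here as a tiny standalone check, purely illustrative — is to format it with the default port 8088 and the failing application id:
```java
// Standalone sanity check of the configured template (illustration only).
public class StatusAddressCheck {
    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        String url = String.format(template, "8088", "application_1634958933716_0113");
        // Prints: http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113
        System.out.println(url);
    }
}
```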
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | return String.format("%s/%s", getHdfsUdfDir(tenantCode), fileName);
}
/**
* @param tenantCode tenant code
* @return file directory of tenants on hdfs
*/
public static String getHdfsTenantDir(String tenantCode) {
return String.format("%s/%s", getHdfsDataBasePath(), tenantCode);
}
/**
* getAppAddress
*
* @param appAddress app address
* @param rmHa resource manager ha
* @return app address
*/
public static String getAppAddress(String appAddress, String rmHa) {
String activeRM = YarnHAAdminUtils.getAcitveRMName(rmHa);
if (StringUtils.isEmpty(activeRM)) {
return null;
}
String[] split1 = appAddress.split(Constants.DOUBLE_SLASH);
if (split1.length != 2) {
return null;
}
String start = split1[0] + Constants.DOUBLE_SLASH;
String[] split2 = split1[1].split(Constants.COLON);
if (split2.length != 2) {
return null; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
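For illustration, a minimal self-contained sketch (not DolphinScheduler source; the template string is copied from the property above) of one way an empty port value can produce the `Connect to hadoop47:80` symptom in the worker log:

```java
// Hypothetical illustration only: formatting the configured status address with an
// empty port yields a URL with no usable port, and an HTTP client then falls back
// to the default port 80 for http -- matching "Connect to hadoop47:80" above.
public class YarnStatusUrlSketch {
    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s"; // yarn.application.status.address
        String port = "";                                             // resource.manager.httpaddress.port left empty
        String appId = "application_1634958933716_0113";
        String url = String.format(template, port, appId);
        System.out.println(url); // http://hadoop47:/ws/v1/cluster/apps/application_1634958933716_0113
    }
}
```

If that is what happens here, explicitly setting `resource.manager.httpaddress.port=8088` should keep the status request on the ResourceManager HTTP port.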
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | }
String end = Constants.COLON + split2[1];
return start + activeRM + end;
}
@Override
public void close() throws IOException {
if (fs != null) {
try {
fs.close();
} catch (IOException e) {
logger.error("Close HadoopUtils instance failed", e);
throw new IOException("Close HadoopUtils instance failed", e);
}
}
}
/**
* yarn ha admin utils
*/
private static final class YarnHAAdminUtils extends RMAdminCLI {
/**
* get active resourcemanager
*/
public static String getAcitveRMName(String rmIds) {
String[] rmIdArr = rmIds.split(Constants.COMMA);
int activeResourceManagerPort = PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088);
String yarnUrl = "http://%s:" + activeResourceManagerPort + "/ws/v1/cluster/info";
try {
/**
* send http get request to rm
*/ |
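A small usage sketch (an assumption added for readability, not repository code) of how the `yarnUrl` template built in the chunk above is expanded for each ResourceManager id before the HTTP probe that continues in the next chunk:

```java
// Hypothetical sketch: expanding the cluster-info template for each RM id.
public class ActiveRmUrlSketch {
    public static void main(String[] args) {
        int activeResourceManagerPort = 8088; // default used when the property is missing
        String yarnUrl = "http://%s:" + activeResourceManagerPort + "/ws/v1/cluster/info";
        for (String rmId : new String[]{"rm1", "rm2"}) {
            // getRMState(...) is then called with each expanded URL
            System.out.println(String.format(yarnUrl, rmId)); // e.g. http://rm1:8088/ws/v1/cluster/info
        }
    }
}
```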
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | for (String rmId : rmIdArr) {
String state = getRMState(String.format(yarnUrl, rmId));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmId;
}
}
} catch (Exception e) {
logger.error("yarn ha application url generation failed, message:{}", e.getMessage());
}
return null;
}
/**
* get ResourceManager state
*/
public static String getRMState(String url) {
String retStr = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(url) : HttpUtils.get(url);
if (StringUtils.isEmpty(retStr)) {
return null;
}
ObjectNode jsonObject = JSONUtils.parseObject(retStr);
if (!jsonObject.has("clusterInfo")) {
return null;
}
return jsonObject.get("clusterInfo").path("haState").asText();
}
}
} |
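Note that `getRMState` above already returns null when the HTTP response body is empty. For contrast, a hypothetical sketch of the analogous guard on the application-status path that would avoid the NullPointerException reported in this issue; this is an illustration only, not the actual change from the linked PR (#6661), and the helper names (`fetch`, `applicationState`) are invented for the sketch:

```java
// Hypothetical illustration: treat an absent HTTP body as a handled failure
// instead of parsing null, mirroring the empty-response check in getRMState.
public class EmptyResponseGuardSketch {

    // stands in for HttpUtils.get(url) when the ResourceManager is unreachable
    static String fetch(String url) {
        return null;
    }

    static String applicationState(String url) {
        String responseContent = fetch(url);
        if (responseContent == null || responseContent.isEmpty()) {
            return "UNKNOWN"; // handled failure path, no NullPointerException
        }
        // real code would parse the YARN response ({"app":{"state": ...}}) here
        return responseContent;
    }

    public static void main(String[] args) {
        System.out.println(applicationState(
                "http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113"));
    }
}
```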
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 should always be retrievable.

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: false
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static org.apache.dolphinscheduler.common.Constants.COMMON_PROPERTIES_PATH;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.spi.enums.ResUploadType;
import org.apache.directory.api.util.Strings;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PropertyUtils { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### Operation
When I run a shell task to test MapReduce in DS (see the image below), the DS web log shows that the yarn status query failed.
shell content :
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
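Reading the two traces above together: the connection to `hadoop47:80` is refused, so `HttpUtils.get(...)` returns `null`, and `HadoopUtils.getApplicationStatus` then dereferences that `null` at line 423. A minimal null-guard at that call site might look like the sketch below (illustrative only; the names approximate the 1.3.9 sources and this is not the actual change merged for this issue):

```java
// Hypothetical guard inside HadoopUtils.getApplicationStatus (1.3.9-style names assumed).
String responseContent = HttpUtils.get(applicationUrl);
if (responseContent == null) {
    // HttpUtils.get returns null when the request fails (e.g. the "Connection refused" above),
    // so fail fast instead of parsing null and hitting the NullPointerException at line 423.
    logger.error("yarn application {} status request failed, url: {}", applicationId, applicationUrl);
    return ExecutionStatus.FAILURE;
}
// ... parse responseContent and map the reported finalStatus to an ExecutionStatus as before ...
```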
### What you expected to happen
The status of yarn application_1634958933716_0113 should always be retrievable;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
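Note that `resource.manager.httpaddress.port=` is present but left empty, which is not the same as omitting the key: the loader hands back an empty string instead of falling back to the documented 8088 default, so the `%s` port placeholder in `yarn.application.status.address` never receives a usable value and the HTTP client ends up on port 80 (see the debug log above). A hedged sketch of how a caller could normalize this, using only the keys shown here (illustrative, not the actual fix):

```java
// Illustrative only: normalize a blank ResourceManager HTTP port before formatting the URL.
// The keys and the 8088 default come from the common.properties snippet above;
// "applicationId" is assumed to already be in scope.
String port = PropertyUtils.getString("resource.manager.httpaddress.port", "8088");
if (port == null || port.trim().isEmpty()) {
    port = "8088"; // blank entry in common.properties, fall back to the documented default
}
String statusUrl = String.format(
        PropertyUtils.getString("yarn.application.status.address"), // http://hadoop47:%s/ws/v1/cluster/apps/%s
        port, applicationId);
```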
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java |

    private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
    private static final Properties properties = new Properties();

    private PropertyUtils() {
        throw new UnsupportedOperationException("Construct PropertyUtils");
    }

    static {
        loadPropertyFile(COMMON_PROPERTIES_PATH);
    }

    public static synchronized void loadPropertyFile(String... propertyFiles) {
        for (String fileName : propertyFiles) {
            try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName);) {
                properties.load(fis);
            } catch (IOException e) {
                logger.error(e.getMessage(), e);
                System.exit(1);
            }
        }
        System.getProperties().forEach((k, v) -> {
            final String key = String.valueOf(k);
            logger.info("Overriding property from system property: {}", key);
            PropertyUtils.setValue(key, String.valueOf(v));
        });
    }
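    // Editorial note: because the forEach above runs after the property file has been loaded,
    // JVM system properties override values read from common.properties. For example, starting
    // the server with -Dresource.manager.httpaddress.port=8088 would take precedence over a
    // blank resource.manager.httpaddress.port entry in the file. The -D flag is standard JVM
    // behaviour; the property key is simply the one from the issue above, used as an example.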
    /**
     * @return judge whether resource upload startup
     */
    public static boolean getResUploadStartupState() {
        String resUploadStartupType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
        ResUploadType resUploadType = ResUploadType.valueOf(resUploadStartupType);
        return resUploadType == ResUploadType.HDFS || resUploadType == ResUploadType.S3;
    }
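    // Editorial note: resource upload counts as enabled only when Constants.RESOURCE_STORAGE_TYPE
    // (typically resource.storage.type in common.properties) resolves to HDFS or S3; any other
    // value, such as NONE, makes this method return false.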
    /**
     * get property value
     *
     * @param key property name
     * @return property value
     */
    public static String getString(String key) {
        return properties.getProperty(key.trim());
    }
    /**
     * get property value with upper case
     *
     * @param key property name
     * @return property value with upper case
     */
    public static String getUpperCaseString(String key) {
        return properties.getProperty(key.trim()).toUpperCase();
    }

    /**
     * get property value
     *
     * @param key property name
     * @param defaultVal default value
     * @return property value
     */
    public static String getString(String key, String defaultVal) {
        String val = properties.getProperty(key.trim());
        return val == null ? defaultVal : val;
    }

    /**
     * get property value
     *
     * @param key property name
     * @return get property int value , if key == null, then return -1
     */
    public static int getInt(String key) {
        return getInt(key, -1);
    }
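    // Editorial usage sketch (not part of the original file): a caller such as HadoopUtils could
    // combine these getters with an explicit default so that a missing ResourceManager HTTP port
    // still yields a usable value, e.g.
    //   String port = PropertyUtils.getString("resource.manager.httpaddress.port", "8088");
    // Note that getString(key, defaultVal) only falls back when the key is absent (null); a blank
    // value like the empty entry shown in the issue text above is returned as-is and still needs
    // an explicit check before it is substituted into the %s placeholder of the status URL.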
    /**
* @param key key
* @param defaultValue default value
* @return property value
*/
public static int getInt(String key, int defaultValue) {
String value = getString(key);
if (value == null) {
return defaultValue;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException e) {
logger.info(e.getMessage(), e);
}
return defaultValue;
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static boolean getBoolean(String key) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return false;
}
/**
* get property value
*
* @param key property name
* @param defaultValue default value
* @return property value
*/
public static Boolean getBoolean(String key, boolean defaultValue) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return defaultValue;
}
/**
* get property long value
*
* @param key key
* @param defaultVal default value
* @return property value
*/
public static long getLong(String key, long defaultVal) {
String val = getString(key);
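// note: unlike getInt above, a non-numeric value is not caught here, so Long.parseLong can throw NumberFormatException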
return val == null ? defaultVal : Long.parseLong(val);
}
/**
* get all properties with specified prefix, like: fs.
*
* @param prefix prefix to search
* @return all properties with specified prefix
*/
public static Map<String, String> getPrefixedProperties(String prefix) {
Map<String, String> matchedProperties = new HashMap<>();
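// note: unlike getPropertiesByPrefix below, the returned map keeps the full property name as the key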
for (String propName : properties.stringPropertyNames()) {
if (propName.startsWith(prefix)) {
matchedProperties.put(propName, properties.getProperty(propName));
}
}
return matchedProperties;
}
public static void setValue(String key, String value) {
properties.setProperty(key, value);
}
public static Map<String, String> getPropertiesByPrefix(String prefix) {
if (Strings.isEmpty(prefix)) {
return null;
}
Set<Object> keys = properties.keySet();
if (keys.isEmpty()) {
return null;
}
Map<String, String> propertiesMap = new HashMap<>();
keys.forEach(k -> {
if (k.toString().contains(prefix)) {
propertiesMap.put(k.toString().replaceFirst(prefix + ".", ""), properties.getProperty((String) k));
}
});
return propertiesMap;
}
} |
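/*
 * Illustrative usage of the two prefix lookups above (note added for readability; not part of
 * the original file, and the property names are only examples from conf/common.properties):
 *
 *   // keeps the full key, e.g. "yarn.resourcemanager.ha.rm.ids"
 *   Map<String, String> withPrefix = PropertyUtils.getPrefixedProperties("yarn.");
 *
 *   // strips the leading "yarn." from each key, e.g. "resourcemanager.ha.rm.ids"
 *   Map<String, String> stripped = PropertyUtils.getPropertiesByPrefix("yarn");
 */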
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the YARN status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS centos7
DS release:1.3.9
Hadoop version :2.9.2
Yarn Ha: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.utils;
import static org.apache.dolphinscheduler.spi.utils.Constants.COMMON_PROPERTIES_PATH;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
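/**
 * Loads common.properties from the classpath and exposes typed accessors for its values.
 * (Summary comment added for readability; it is not present in the original source.)
 */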
public class PropertyUtils { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the YARN status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS centos7
DS release:1.3.9
Hadoop version :2.9.2
Yarn Ha: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
### Anything else
It sometimes fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
private static final Properties properties = new Properties();
private PropertyUtils() {
throw new UnsupportedOperationException("Construct PropertyUtils");
}
static {
loadPropertyFile(COMMON_PROPERTIES_PATH);
}
public static synchronized void loadPropertyFile(String... propertyFiles) {
for (String fileName : propertyFiles) {
try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName)) {
properties.load(fis);
} catch (IOException e) {
logger.error(e.getMessage(), e);
System.exit(1);
}
}
System.getProperties().forEach((k, v) -> {
final String key = String.valueOf(k);
logger.info("Overriding property from system property: {}", key); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows that the YARN status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`

### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of Yarn application_1634958933716_0113 can always be retrieved;

### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
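The error above ("Connect to hadoop47:80 ... Connection refused") is consistent with the port placeholder in this template being filled with an empty string: leaving `resource.manager.httpaddress.port=` blank yields a URL with no explicit port, which an HTTP client resolves to port 80. A minimal, self-contained illustration of that formatting behaviour is below — the template and property name are taken from the configuration above, but the class name is hypothetical and this is not the actual DolphinScheduler code path:

```java
public class StatusUrlDemo {
    public static void main(String[] args) {
        // template from common.properties (yarn.application.status.address)
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        String appId = "application_1634958933716_0113";

        // resource.manager.httpaddress.port left empty in common.properties
        String blankPort = "";
        // -> "http://hadoop47:/ws/v1/cluster/apps/..." ; no explicit port,
        //    so the HTTP client falls back to 80 ("Connect to hadoop47:80")
        System.out.println(String.format(template, blankPort, appId));

        // with the default 8088 the URL points at the ResourceManager REST API
        System.out.println(String.format(template, "8088", appId));
    }
}
```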
### Anything else
Sometimes it fails; there is a high probability of hitting this error.
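For context, the stack trace above shows the HTTP helper failing with "Connection refused" immediately before the NullPointerException at HadoopUtils.getApplicationStatus, which suggests the null response body is parsed without a guard. The sketch below is a hypothetical, self-contained illustration of treating an unreachable ResourceManager as "state unknown" instead of letting it surface as an NPE — it is not the actual fix from the linked PR, and the class name and fallback string are placeholders:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class YarnStatusProbe {
    /** Returns the response body, or null when the ResourceManager is unreachable. */
    static String httpGet(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            if (conn.getResponseCode() != 200) {
                return null;
            }
            try (InputStream in = conn.getInputStream()) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        } catch (IOException e) {
            return null; // e.g. "Connection refused" when the port resolves to 80
        }
    }

    public static void main(String[] args) {
        String body = httpGet("http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113");
        if (body == null) {
            System.out.println("yarn state UNKNOWN - do not parse a null response"); // guard instead of NPE
        } else {
            System.out.println(body); // safe to hand to a JSON parser here
        }
    }
}
```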
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | PropertyUtils.setValue(key, String.valueOf(v));
});
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static String getString(String key) {
return properties.getProperty(key.trim());
}
/**
* get property value with upper case
*
* @param key property name
* @return property value with upper case
*/
public static String getUpperCaseString(String key) {
return properties.getProperty(key.trim()).toUpperCase();
}
/**
* get property value
*
* @param key property name
* @param defaultVal default value
* @return property value
*/
public static String getString(String key, String defaultVal) {
String val = properties.getProperty(key.trim()); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | return val == null ? defaultVal : val;
}
/**
* get property value
*
* @param key property name
* @return get property int value , if key == null, then return -1
*/
public static int getInt(String key) {
return getInt(key, -1);
}
/**
* @param key key
* @param defaultValue default value
* @return property value
*/
public static int getInt(String key, int defaultValue) {
String value = getString(key);
if (value == null) {
return defaultValue;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException e) {
logger.info(e.getMessage(), e);
}
return defaultValue;
}
/**
* get property value |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | *
* @param key property name
* @return property value
*/
public static boolean getBoolean(String key) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return false;
}
/**
* get property value
*
* @param key property name
* @param defaultValue default value
* @return property value
*/
public static Boolean getBoolean(String key, boolean defaultValue) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return defaultValue;
}
public static void setValue(String key, String value) {
properties.setProperty(key, value);
}
} |
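Given the `getInt` fallback shown in the class above (a missing key returns the default, and a non-numeric value is caught and replaced by the default), here is a short standalone check of that behaviour for the kind of blank value reported in this issue. It mirrors the logic rather than calling `PropertyUtils` directly, so it runs on its own; the class and method names are illustrative:

```java
public class PropertyFallbackDemo {
    // same shape as PropertyUtils.getInt(key, defaultValue) above, inlined for a standalone run
    static int toInt(String value, int defaultValue) {
        if (value == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        System.out.println(toInt(null, 8088));   // property absent -> 8088
        System.out.println(toInt("", 8088));     // blank value     -> 8088 (parseInt("") throws)
        System.out.println(toInt("8032", 8088)); // explicit value  -> 8032
    }
}
```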
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans only one command from the DB and converts it into a process instance at a time; this work loops on a single thread, which limits overall speed.
So it could be changed to fetch more commands on each scan and handle them in parallel, as sketched below.
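A minimal sketch of the proposed direction — fetch a batch of commands in one round trip and hand each one to a worker pool. The `fetchCommands`/`handleCommand` names, the class name, the batch size, and the pool size are hypothetical placeholders, not the actual DolphinScheduler API:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BatchCommandScanSketch {
    private final ExecutorService pool = Executors.newFixedThreadPool(8); // size is illustrative

    // hypothetical: read up to 'limit' pending commands in one DB round trip
    List<String> fetchCommands(int limit) {
        return List.of(); // placeholder
    }

    // hypothetical: convert one command into a process instance and start it
    void handleCommand(String command) {
    }

    void scanOnce() {
        List<String> commands = fetchCommands(10);       // batch instead of one-by-one
        for (String command : commands) {
            pool.execute(() -> handleCommand(command));  // handled in parallel
        }
    }
}
```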
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.config;
import org.apache.dolphinscheduler.server.master.dispatch.host.assign.HostSelector;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.stereotype.Component;
@Component |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | @EnableConfigurationProperties
@ConfigurationProperties("master")
public class MasterConfig {
private int listenPort;
private int execThreads;
private int execTaskNum;
private int dispatchTaskNumber;
private HostSelector hostSelector;
private int heartbeatInterval;
private int taskCommitRetryTimes;
private int taskCommitInterval;
private int stateWheelInterval;
private double maxCpuLoadAvg;
private double reservedMemory;
private boolean cacheProcessDefinition;
public int getListenPort() {
return listenPort;
}
public void setListenPort(int listenPort) {
this.listenPort = listenPort;
}
public int getExecThreads() {
return execThreads;
}
public void setExecThreads(int execThreads) {
this.execThreads = execThreads;
}
public int getExecTaskNum() {
return execTaskNum; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | }
public void setExecTaskNum(int execTaskNum) {
this.execTaskNum = execTaskNum;
}
public int getDispatchTaskNumber() {
return dispatchTaskNumber;
}
public void setDispatchTaskNumber(int dispatchTaskNumber) {
this.dispatchTaskNumber = dispatchTaskNumber;
}
public HostSelector getHostSelector() {
return hostSelector;
}
public void setHostSelector(HostSelector hostSelector) {
this.hostSelector = hostSelector;
}
public int getHeartbeatInterval() {
return heartbeatInterval;
}
public void setHeartbeatInterval(int heartbeatInterval) {
this.heartbeatInterval = heartbeatInterval;
}
public int getTaskCommitRetryTimes() {
return taskCommitRetryTimes;
}
public void setTaskCommitRetryTimes(int taskCommitRetryTimes) {
this.taskCommitRetryTimes = taskCommitRetryTimes;
}
public int getTaskCommitInterval() {
return taskCommitInterval; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | }
public void setTaskCommitInterval(int taskCommitInterval) {
this.taskCommitInterval = taskCommitInterval;
}
public int getStateWheelInterval() {
return stateWheelInterval;
}
public void setStateWheelInterval(int stateWheelInterval) {
this.stateWheelInterval = stateWheelInterval;
}
public double getMaxCpuLoadAvg() {
return maxCpuLoadAvg > 0 ? maxCpuLoadAvg : Runtime.getRuntime().availableProcessors() * 2;
}
public void setMaxCpuLoadAvg(double maxCpuLoadAvg) {
this.maxCpuLoadAvg = maxCpuLoadAvg;
}
public double getReservedMemory() {
return reservedMemory;
}
public void setReservedMemory(double reservedMemory) {
this.reservedMemory = reservedMemory;
}
public boolean isCacheProcessDefinition() {
return cacheProcessDefinition;
}
public void setCacheProcessDefinition(boolean cacheProcessDefinition) {
this.cacheProcessDefinition = cacheProcessDefinition;
}
} |
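One small behavioural note on the class above: `getMaxCpuLoadAvg()` treats a non-positive configured value as unset and falls back to twice the number of available processors. A short illustrative check, assuming `MasterConfig` as shown is on the classpath (the demo class name and values are placeholders):

```java
public class MasterConfigDefaultsDemo {
    public static void main(String[] args) {
        MasterConfig config = new MasterConfig();
        config.setMaxCpuLoadAvg(-1);      // not configured / non-positive
        // falls back to Runtime.getRuntime().availableProcessors() * 2 per the getter above
        System.out.println(config.getMaxCpuLoadAvg());
        config.setMaxCpuLoadAvg(16.0);    // an explicit positive value is returned as-is
        System.out.println(config.getMaxCpuLoadAvg());
    }
}
```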
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.thread.Stopper; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.NettyRemotingClient;
import org.apache.dolphinscheduler.remote.config.NettyClientConfig;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient;
import org.apache.dolphinscheduler.server.master.registry.ServerNodeManager;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
/**
* master scheduler thread
*/
@Service
public class MasterSchedulerService extends Thread { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | /**
* logger of MasterSchedulerService
*/
private static final Logger logger = LoggerFactory.getLogger(MasterSchedulerService.class);
/**
* dolphinscheduler database interface
*/
@Autowired
private ProcessService processService;
/**
* zookeeper master client
*/
@Autowired
private MasterRegistryClient masterRegistryClient;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* alert manager
*/
@Autowired
private ProcessAlertManager processAlertManager;
/**
* netty remoting client
*/
private NettyRemotingClient nettyRemotingClient;
@Autowired
NettyExecutorManager nettyExecutorManager; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | /**
* master exec service
*/
private ThreadPoolExecutor masterExecService;
@Autowired
private ProcessInstanceExecCacheManager processInstanceExecCacheManager;
/**
* process timeout check list
*/
ConcurrentHashMap<Integer, ProcessInstance> processTimeoutCheckList = new ConcurrentHashMap<>();
/**
* task time out checkout list
*/
ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList = new ConcurrentHashMap<>();
/**
* key:code-version
* value: processDefinition
*/
HashMap<String, ProcessDefinition> processDefinitionCacheMaps = new HashMap<>();
private StateWheelExecuteThread stateWheelExecuteThread;
/**
* constructor of MasterSchedulerService
*/
public void init() {
this.masterExecService = (ThreadPoolExecutor) ThreadUtils.newDaemonFixedThreadExecutor("Master-Exec-Thread", masterConfig.getExecThreads());
NettyClientConfig clientConfig = new NettyClientConfig();
this.nettyRemotingClient = new NettyRemotingClient(clientConfig);
stateWheelExecuteThread = new StateWheelExecuteThread(processTimeoutCheckList,
taskTimeoutCheckList,
this.processInstanceExecCacheManager, |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | masterConfig.getStateWheelInterval() * Constants.SLEEP_TIME_MILLIS);
}
@Override
public synchronized void start() {
super.setName("MasterSchedulerService");
super.start();
this.stateWheelExecuteThread.start();
}
public void close() {
masterExecService.shutdown();
boolean terminated = false;
try {
terminated = masterExecService.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException ignore) {
Thread.currentThread().interrupt();
}
if (!terminated) {
logger.warn("masterExecService shutdown without terminated, increase await time");
}
nettyRemotingClient.close();
logger.info("master schedule service stopped...");
}
/**
* run of MasterSchedulerService
*/
@Override
public void run() {
logger.info("master scheduler started");
while (Stopper.isRunning()) {
try { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | boolean runCheckFlag = OSUtils.checkResource(masterConfig.getMaxCpuLoadAvg(), masterConfig.getReservedMemory());
if (!runCheckFlag) {
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
continue;
}
scheduleProcess();
} catch (Exception e) {
logger.error("master scheduler thread error", e);
}
}
}
/**
* 1. get command by slot
* 2. do not handle command if slot is empty
*
* @throws Exception
*/
private void scheduleProcess() throws Exception {
Command command = findOneCommand();
if (command != null) {
logger.info("find one command: id: {}, type: {}", command.getId(), command.getCommandType());
try {
ProcessInstance processInstance = processService.handleCommand(logger,
getLocalAddress(),
command,
processDefinitionCacheMaps);
if (!masterConfig.isCacheProcessDefinition()
&& processDefinitionCacheMaps.size() > 0) {
processDefinitionCacheMaps.clear(); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | }
if (processInstance != null) {
WorkflowExecuteThread workflowExecuteThread = new WorkflowExecuteThread(
processInstance
, processService
, nettyExecutorManager
, processAlertManager
, masterConfig
, taskTimeoutCheckList);
this.processInstanceExecCacheManager.cache(processInstance.getId(), workflowExecuteThread);
if (processInstance.getTimeout() > 0) {
this.processTimeoutCheckList.put(processInstance.getId(), processInstance);
}
logger.info("handle command end, command {} process {} start...",
command.getId(), processInstance.getId());
masterExecService.execute(workflowExecuteThread);
}
} catch (Exception e) {
logger.error("scan command error ", e);
processService.moveToErrorCommand(command, e.toString());
}
} else {
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
}
}
private Command findOneCommand() {
int pageNumber = 0;
Command result = null;
while (Stopper.isRunning()) { |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | if (ServerNodeManager.MASTER_SIZE == 0) {
return null;
}
List<Command> commandList = processService.findCommandPage(ServerNodeManager.MASTER_SIZE, pageNumber);
if (commandList.size() == 0) {
return null;
}
for (Command command : commandList) {
int slot = ServerNodeManager.getSlot();
if (ServerNodeManager.MASTER_SIZE != 0
&& command.getId() % ServerNodeManager.MASTER_SIZE == slot) {
result = command;
break;
}
}
if (result != null) {
logger.info("find command {}, slot:{} :",
result.getId(),
ServerNodeManager.getSlot());
break;
}
pageNumber += 1;
}
return result;
}
private String getLocalAddress() {
return NetUtils.getAddr(masterConfig.getListenPort());
}
} |
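The MasterSchedulerService chunk above is the code that issue #6849 targets: `findOneCommand()` pages through the command table until it finds a single command whose `id % MASTER_SIZE` equals this master's slot, and `scheduleProcess()` converts that one command per scan-loop iteration. As a rough illustration of the batched, parallel handling the issue proposes, here is a minimal standalone sketch; `BatchCommandDispatcher`, the `Command` stand-in, `fetchCommandPage`, and `handle` are hypothetical names for this example only, not the actual DolphinScheduler classes or the implementation merged in PR #6850.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only: fetch a page of commands, keep the ones that belong
// to this master's slot, and hand each one to a worker thread instead of
// converting a single command per scan-loop iteration.
public class BatchCommandDispatcher {

    // Stand-in for the persisted command row (the real project uses
    // org.apache.dolphinscheduler.dao.entity.Command).
    static class Command {
        final int id;
        Command(int id) {
            this.id = id;
        }
    }

    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Stand-in for a paged query such as processService.findCommandPage(pageSize, pageNumber).
    private List<Command> fetchCommandPage(int pageSize, int pageNumber) {
        List<Command> page = new ArrayList<>();
        for (int i = 0; i < pageSize; i++) {
            page.add(new Command(pageNumber * pageSize + i));
        }
        return page;
    }

    // Same slot rule as findOneCommand() (id % masterCount == slot), but every
    // matching command in the page is submitted to the worker pool.
    public void dispatchOnce(int masterCount, int slot, int pageSize, int pageNumber) {
        if (masterCount <= 0) {
            return;
        }
        for (Command command : fetchCommandPage(pageSize, pageNumber)) {
            if (command.id % masterCount == slot) {
                workers.submit(() -> handle(command));
            }
        }
    }

    private void handle(Command command) {
        // In DolphinScheduler this is roughly where the command would be turned
        // into a process instance and executed.
        System.out.println("handling command " + command.id);
    }

    public void shutdown() {
        workers.shutdown();
    }

    public static void main(String[] args) {
        BatchCommandDispatcher dispatcher = new BatchCommandDispatcher();
        dispatcher.dispatchOnce(2, 0, 10, 0);
        dispatcher.shutdown();
    }
}
```

The slot filter is kept unchanged so several masters can still share the command table; what changes in this sketch is that the per-command work moves onto a worker pool rather than blocking the scan loop.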
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.service.process;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID;
import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS;
import static java.util.stream.Collectors.toSet;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.AuthorizationType;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.DateInterval;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.process.ResourceInfo;
import org.apache.dolphinscheduler.common.task.AbstractParameters;
import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter;
import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters;
import org.apache.dolphinscheduler.common.utils.DateUtils; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils.SnowFlakeException;
import org.apache.dolphinscheduler.common.utils.TaskParametersUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.DagData;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ErrorCommand;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.UdfFunc;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.CommandMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper;
import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.TenantMapper;
import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper;
import org.apache.dolphinscheduler.dao.mapper.UserMapper;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand;
import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.service.log.LogClientService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.HashSet; |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import com.facebook.presto.jdbc.internal.guava.collect.Lists;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.node.ObjectNode;
/**
* process relative dao that some mappers in this.
*/
@Component
public class ProcessService {
private final Logger logger = LoggerFactory.getLogger(getClass());
private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal()};
@Autowired
private UserMapper userMapper;
@Autowired
private ProcessDefinitionMapper processDefineMapper;
@Autowired |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | private ProcessDefinitionLogMapper processDefineLogMapper;
@Autowired
private ProcessInstanceMapper processInstanceMapper;
@Autowired
private DataSourceMapper dataSourceMapper;
@Autowired
private ProcessInstanceMapMapper processInstanceMapMapper;
@Autowired
private TaskInstanceMapper taskInstanceMapper;
@Autowired
private CommandMapper commandMapper;
@Autowired
private ScheduleMapper scheduleMapper;
@Autowired
private UdfFuncMapper udfFuncMapper;
@Autowired
private ResourceMapper resourceMapper;
@Autowired
private ResourceUserMapper resourceUserMapper;
@Autowired
private ErrorCommandMapper errorCommandMapper;
@Autowired
private TenantMapper tenantMapper;
@Autowired
private ProjectMapper projectMapper;
@Autowired
private TaskDefinitionMapper taskDefinitionMapper;
@Autowired
private TaskDefinitionLogMapper taskDefinitionLogMapper;
@Autowired |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | private ProcessTaskRelationMapper processTaskRelationMapper;
@Autowired
private ProcessTaskRelationLogMapper processTaskRelationLogMapper;
@Autowired
StateEventCallbackService stateEventCallbackService;
@Autowired
private EnvironmentMapper environmentMapper;
/**
* handle Command (construct ProcessInstance from Command) , wrapped in transaction
*
* @param logger logger
* @param host host
* @param command found command
* @param processDefinitionCacheMaps
* @return process instance
*/
public ProcessInstance handleCommand(Logger logger, String host, Command command, HashMap<String, ProcessDefinition> processDefinitionCacheMaps) {
ProcessInstance processInstance = constructProcessInstance(command, host, processDefinitionCacheMaps);
if (processInstance == null) {
logger.error("scan command, command parameter is error: {}", command);
moveToErrorCommand(command, "process instance is null");
return null;
}
processInstance.setCommandType(command.getCommandType());
processInstance.addHistoryCmd(command.getCommandType());
ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
if (processDefinition.getExecutionType().typeIsSerial()) {
saveSerialProcess(processInstance,processDefinition); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) {
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return null;
}
} else {
saveProcessInstance(processInstance);
}
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return processInstance;
}
private void saveSerialProcess(ProcessInstance processInstance,ProcessDefinition processDefinition) {
processInstance.setState(ExecutionStatus.SERIAL_WAIT);
saveProcessInstance(processInstance);
if (processDefinition.getExecutionType().typeIsSerialWait()) {
while (true) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE,processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
saveProcessInstance(processInstance);
return;
}
ProcessInstance runningProcess = runningProcessInstances.get(0);
if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) {
return;
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | }
} else if (processDefinition.getExecutionType().typeIsSerialDiscard()) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE,processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
processInstance.setState(ExecutionStatus.STOP);
saveProcessInstance(processInstance);
}
} else if (processDefinition.getExecutionType().typeIsSerialPriority()) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE,processInstance.getId());
if (CollectionUtils.isNotEmpty(runningProcessInstances)) {
for (ProcessInstance info : runningProcessInstances) {
info.setCommandType(CommandType.STOP);
info.addHistoryCmd(CommandType.STOP);
info.setState(ExecutionStatus.READY_STOP);
int update = updateProcessInstance(info);
if (update > 0) {
String host = info.getHost();
String address = host.split(":")[0];
int port = Integer.parseInt(host.split(":")[1]);
StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
info.getId(), 0, info.getState(), info.getId(), 0
);
try {
stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
} catch (Exception e) {
logger.error("sendResultError");
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | }
}
}
}
}
/**
* save error command, and delete original command
*
* @param command command
* @param message message
*/
public void moveToErrorCommand(Command command, String message) {
ErrorCommand errorCommand = new ErrorCommand(command, message);
this.errorCommandMapper.insert(errorCommand);
this.commandMapper.deleteById(command.getId());
}
/**
* set process waiting thread
*
* @param command command
* @param processInstance processInstance
* @return process instance
*/
private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) {
processInstance.setState(ExecutionStatus.WAITING_THREAD);
if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) {
processInstance.addHistoryCmd(command.getCommandType());
}
saveProcessInstance(processInstance);
this.setSubProcessParam(processInstance); |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | createRecoveryWaitingThreadCommand(command, processInstance);
return null;
}
/**
* insert one command
*
* @param command command
* @return create result
*/
public int createCommand(Command command) {
int result = 0;
if (command != null) {
result = commandMapper.insert(command);
}
return result;
}
/**
* get command page
*
* @param pageSize
* @param pageNumber
* @return
*/
public List<Command> findCommandPage(int pageSize, int pageNumber) {
return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize);
}
/**
* check the input command exists in queue list
*
* @param command command |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | * @return create command result
*/
public boolean verifyIsNeedCreateCommand(Command command) {
boolean isNeedCreate = true;
EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class);
cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1);
cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1);
cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1);
CommandType commandType = command.getCommandType();
if (cmdTypeMap.containsKey(commandType)) {
ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam());
int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt();
List<Command> commands = commandMapper.selectList(null);
for (Command tmpCommand : commands) {
if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) {
ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam());
if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) {
isNeedCreate = false;
break;
}
}
}
}
return isNeedCreate;
}
/**
* find process instance detail by id
*
* @param processId processId |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | * @return process instance
*/
public ProcessInstance findProcessInstanceDetailById(int processId) {
return processInstanceMapper.queryDetailById(processId);
}
/**
* get task node list by definitionId
*/
public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode);
if (processDefinition == null) {
logger.error("process define not exists");
return new ArrayList<>();
}
List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion());
Set<TaskDefinition> taskDefinitionSet = new HashSet<>();
for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) {
if (processTaskRelation.getPostTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()));
}
}
List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet);
return new ArrayList<>(taskDefinitionLogs);
}
/**
* find process instance by id
*
* @param processId processId
* @return process instance
*/ |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | public ProcessInstance findProcessInstanceById(int processId) {
return processInstanceMapper.selectById(processId);
}
/**
* find process define by id.
*
* @param processDefinitionId processDefinitionId
* @return process definition
*/
public ProcessDefinition findProcessDefineById(int processDefinitionId) {
return processDefineMapper.selectById(processDefinitionId);
}
/**
* find process define by code and version.
*
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode);
if (processDefinition == null || processDefinition.getVersion() != version) {
processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version);
if (processDefinition != null) {
processDefinition.setId(0);
}
}
return processDefinition;
}
/**
* find process define by code. |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | *
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) {
return processDefineMapper.queryByCode(processDefinitionCode);
}
/**
* delete work process instance by id
*
* @param processInstanceId processInstanceId
* @return delete process instance result
*/
public int deleteWorkProcessInstanceById(int processInstanceId) {
return processInstanceMapper.deleteById(processInstanceId);
}
/**
* delete all sub process by parent instance id
*
* @param processInstanceId processInstanceId
* @return delete all sub process instance result
*/
public int deleteAllSubWorkProcessByParentId(int processInstanceId) {
List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId);
for (Integer subId : subProcessIdList) {
deleteAllSubWorkProcessByParentId(subId);
deleteWorkProcessMapByParentId(subId);
removeTaskLogFile(subId);
deleteWorkProcessInstanceById(subId);
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master scans one command from the DB and converts it to a process instance each time; this loop runs on a single thread, which limits overall speed.
So I think it can be changed to fetch more commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | return 1;
}
/**
* remove task log file
*
* @param processInstanceId processInstanceId
*/
public void removeTaskLogFile(Integer processInstanceId) {
List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId);
if (CollectionUtils.isEmpty(taskInstanceList)) {
return;
}
try (LogClientService logClient = new LogClientService()) {
for (TaskInstance taskInstance : taskInstanceList) {
String taskLogPath = taskInstance.getLogPath();
if (StringUtils.isEmpty(taskInstance.getHost())) {
continue;
}
int port = Constants.RPC_PORT;
String ip = "";
try {
ip = Host.of(taskInstance.getHost()).getIp();
} catch (Exception e) {
ip = taskInstance.getHost();
}
logClient.removeTaskLog(ip, port, taskLogPath);
}
} |