sr | prompt_software | prompt_product | Customer_Symptom__c | Description | Resolution_Summary__c | Problem_Description__c | Product_Name__c | SW_Version__c | Case_Close_SW_Version__c | CXSAT_Score__c | Priority | Health_Score__c | Underlying_Cause__c | Product_Family__c | Technology_Text__c | Sub_Technology_Text__c | SW_Product__c | Problem_Code_Description__c | Sub_Technology_Description__c | Technology_Description__c | HW_Product__c | Automation_Level__c |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
695109716 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 16.12.04/ NA - COMPONENT ONLY/ THIRD_PARTY_PRODUCT_SW/ NA - RMA/ 16.12.04/ 3.2.3o/ 15.0.2/ 16.06.07/ 16.12.03s/ 1/ 16.06.08/ 16.12.1s/ 17.03.05/ 16.12.4/ 16.9.5/ 16.12.01s/ PRODUCT_NOT_FOUND/ 16.3.5/ 16.12.3s/ 16.09.02/ 16.12.3a/ 16.12.1s/ 17.6.4 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
17.3.23 - - - -
+ as per discussion with rep, it is agreed upon for soft closure
10.3.23 - - - -
+ 2nd follow up
7.3.23 - - - -
+ 1st follow up
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link (a parsing sketch follows this note).
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
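The parsing sketch mentioned above (not part of the TAC reply): polling the `#Changes to LinkOK` column of `show switch stack-ports summary` and flagging links whose counter moved between two runs. The regex assumes the column layout shown in the output above; function names are illustrative.

```python
import re

def parse_stack_ports(output: str) -> dict:
    """Map 'Sw#/Port#' to its '#Changes to LinkOK' counter from the summary output."""
    changes = {}
    for line in output.splitlines():
        # port, status, neighbor, cable length, Link OK, Link Active, Sync OK, #changes
        m = re.match(r"\s*(\d+/\d+)\s+\S+\s+\d+\s+\S+\s+\S+\s+\S+\s+\S+\s+(\d+)", line)
        if m:
            changes[m.group(1)] = int(m.group(2))
    return changes

def links_with_new_changes(before: dict, after: dict) -> list:
    """Ports whose change counter incremented between polls, i.e. a link event occurred."""
    return [port for port, count in after.items() if count > before.get(port, count)]

earlier = parse_stack_ports("1/1  OK  2  100cm  Yes  Yes  Yes  2  No")
later   = parse_stack_ports("1/1  OK  2  100cm  Yes  Yes  Yes  3  No")
print(links_with_new_changes(earlier, later))  # ['1/1']
```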
10.3.23 - - - -
+ 2nd follow up
7.3.23 - - - -
+ 1st follow up
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
Hope you are doing good.
After TAC analysis, below is the summary report:
//
As I understand it, you are observing stack issues.
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable.
We should monitor them.
#show switch stack-ports summary - will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216 <http://cdets.cisco.com/apps/dumpcr?&content=summary&format=html&identifier=CSCvw90216>
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
Regarding port 3/0/11, we do observe it flapping:
Feb 22 2023 17:34:55.656 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:34:56.657 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:35:40.786 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to down
Feb 22 2023 17:35:41.787 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to down
Please investigate the cable and RJ45 connector on both sides.
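As an aside (not part of the TAC reply): the up/down interval can be read straight out of those syslog lines. A minimal sketch, assuming the timestamp format shown above; the variable names are illustrative.

```python
import re
from datetime import datetime

LOGS = """\
Feb 22 2023 17:34:55.656 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:35:41.787 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to down
"""

events = []
for line in LOGS.splitlines():
    m = re.match(r"(\w+ \d+ \d+ [\d:.]+) \w+: %LINK-3-UPDOWN: "
                 r"Interface (\S+), changed state to (\w+)", line)
    if m:
        ts = datetime.strptime(m.group(1), "%b %d %Y %H:%M:%S.%f")
        events.append((ts, m.group(2), m.group(3)))

# With an up followed by a down, the gap is how long the link stayed up.
(t_up, intf, _), (t_down, _, _) = events
print(f"{intf} stayed up for {(t_down - t_up).total_seconds():.1f} s")  # ~46.1 s
```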
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
//
Please let us know if you need Webex to schedule with TAC now.
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue
Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue
timestamp : 2023-03-20T12:21:45.000+0000 || updatedby : gikeshav || type : RESOLUTION SUMMARY || visibility : External || details : closed after 3 strikes | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: C9300-48U-A/ C9300-48U/ THIRD_PARTY_PRODUCT_HW/ UCS-FI-6454/ C9300-NM-8X/ PWR-C1-1100WAC Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
17.3.23 - - - -
+ as per discussion with rep, it is agreed upon for soft closure
10.3.23 - - - -
+ 2nd follow up
7.3.23 - - - -
+ 1st follow up
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
10.3.23 - - - -
+ 2nd follow up
7.3.23 - - - -
+ 1st follow up
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
Hope you are doing good.
After TAC analysis, below is the summary report:
//
As I understand it, you are observing stack issues.
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable.
We should monitor them.
#show switch stack-ports summary - will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216 <http://cdets.cisco.com/apps/dumpcr?&content=summary&format=html&identifier=CSCvw90216>
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
Regarding port 3/0/11, we do observe it flapping:
Feb 22 2023 17:34:55.656 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:34:56.657 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:35:40.786 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to down
Feb 22 2023 17:35:41.787 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to down
Please investigate the cable and RJ45 connector on both sides.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
//
Please let us know if you need Webex to schedule with TAC now.
This is regarding Case Number 695109716, which you opened with Cisco Systems.
My name is Georgi Borisov and I'm the Customer Support Engineer who owns your case.
I am sending this e-mail as the initial point of contact to let you know that I have accepted your case, and also to give you information on how to contact me.
As I understand it, you are observing stack issues.
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable.
We should monitor them.
#show switch stack-ports summary - will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216 <http://cdets.cisco.com/apps/dumpcr?&content=summary&format=html&identifier=CSCvw90216>
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
Regarding port 3/0/11, we do observe it flapping:
Feb 22 2023 17:34:55.656 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:34:56.657 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to up
Feb 22 2023 17:35:40.786 SGT: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet3/0/11, changed state to down
Feb 22 2023 17:35:41.787 SGT: %LINK-3-UPDOWN: Interface GigabitEthernet3/0/11, changed state to down
Please investigate the cable and RJ45 connector on both sides.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
I apologize for not reaching out sooner; I was in a meeting that took longer than expected.
My shift is Mon-Fri, 11am-7:00pm CST. If you need assistance in your time zone, please contact our CIN agents to requeue the case to an available engineer in your time zone.
Please do not hesitate to contact me for any questions or concerns.
Based on the provided documentation,
------------------ show switch stack-ports summary ------------------
Sw#/Port# Port Status Neighbor Cable Length Link OK Link Active Sync OK #Changes to LinkOK In Loopback
-------------------------------------------------------------------------------------------------------------------
1/1 OK 2 100cm Yes Yes Yes 2 No
1/2 OK 6 300cm Yes Yes Yes 1 No
2/1 OK 3 100cm Yes Yes Yes 1 No
2/2 OK 1 100cm Yes Yes Yes 1 No
3/1 OK 4 100cm Yes Yes Yes 1 No
3/2 OK 2 100cm Yes Yes Yes 2 No
4/1 OK 5 100cm Yes Yes Yes 1 No
4/2 OK 3 100cm Yes Yes Yes 3 No
5/1 OK 6 100cm Yes Yes Yes 1 No
5/2 OK 4 100cm Yes Yes Yes 2 No
6/1 OK 1 300cm Yes Yes Yes 2 No
6/2 OK 5 100cm Yes Yes Yes 1 No
Stack ports of all 6 switches in the stack seem stable. We should monitor them.
#show switch stack-ports summary – will display if there were any changes on that specific link.
The following defect was found: on Cat9200 (stack), the trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) cannot be created normally, CSCvw90216
The switch is also affected by this defect https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwa14578 - related to the memory leak and the reload of switch 2.
Unfortunately, all the logs are from Feb 22; however, I noted that you have a logging server. If possible, please provide me the logs for that date and the specific hour for further investigation.
As this stack is affected by 2 defects, my recommendation would be to upgrade to the golden fixed release https://www.cisco.com/c/en/us/support/docs/switches/catalyst-9300-series-switches/214814-recommended-releases-for-catalyst-9200-9.html
I did the analysis of the provided show tech; it seems that this is related to a bug.
Can you check if related?
On Cat9200(stack),trap ciscoStackWiseMIB.0.0.6 (cswStackMemberRemoved) unable created normally
CSCvw90216
Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue
Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue
timestamp : 2023-03-20T12:21:45.000+0000 || updatedby : gikeshav || type : RESOLUTION SUMMARY || visibility : External || details : closed after 3 strikes | Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue | Stack port disconnects and reconnects again on its own. Need assistance to find out cause of the issue | timestamp : 2023-03-20T12:21:45.000+0000 || updatedby : gikeshav || type : RESOLUTION SUMMARY || visibility : External || details : closed after 3 strikes | nan | C9300-48U | cat9k_iosxe.16.12.04.SPA.bin | nan | nan | 3 | nan | Software Bug | C9300 | LAN Switching | Cat9300 | nan | nan | nan | nan | nan | nan |
693988553 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 16.12.04/ NA - COMPONENT ONLY/ THIRD_PARTY_PRODUCT_SW/ NA - RMA/ 16.12.04/ 3.2.3o/ 15.0.2/ 16.06.07/ 16.12.03s/ 1/ 16.06.08/ 16.12.1s/ 17.03.05/ 16.12.4/ 16.9.5/ 16.12.01s/ PRODUCT_NOT_FOUND/ 16.3.5/ 16.12.3s/ 16.09.02/ 16.12.3a/ 16.12.1s/ 17.6.4 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
Closed
Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Configuration Assistance
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Configurations are different between running-config and startup-config after reboot the switch.
Configurations are different between running-config and startup-config after reboot the switch.
Advised to verify show run all output for default configuration. | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: C9300-48U-A/ C9300-48U/ THIRD_PARTY_PRODUCT_HW/ UCS-FI-6454/ C9300-NM-8X/ PWR-C1-1100WAC Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Configuration Assistance
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Configurations are different between running-config and startup-config after reboot the switch.
Configurations are different between running-config and startup-config after reboot the switch.
Advised to verify show run all output for default configuration. | Technology: LAN Switching
Subtechnology: Cat9300
Problem Code: Configuration Assistance
Product: NA
Product Family: C9300
Software Version: N/A
Router/Node Name: N/A
Problem Details: Configurations are different between running-config and startup-config after reboot the switch. | Configurations are different between running-config and startup-config after reboot the switch. | Advised to verify show run all output for default configuration. | Configurations are different between running-config and startup-config after reboot the c9300 switch. | C9300-48U-A | nan | nan | 82.0 | 3 | 100.0 | Configuration Assistance (process not intuitive, too complex, inconsistent...) | C9300 | LAN Switching | Cat9300 | 01tA0000000huG8IAI | Configuration Assistance | Cat9300 | LAN Switching | 01t15000005W1eHAAS | nan |
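An aside on the case above (running-config differing from startup-config after a reload, resolved by checking `show run all` for defaulted commands): a minimal sketch of diffing the two configs with the standard library. The config snippets are made up for illustration; real text would come from `show startup-config` and `show running-config` on the device.

```python
import difflib

# Hypothetical outputs; "show run all" would also include default configuration,
# which often explains lines that "disappear" from startup-config.
startup = "hostname sw1\nvlan 10\n"
running = "hostname sw1\nvlan 10\n name users\n"

diff = difflib.unified_diff(startup.splitlines(), running.splitlines(),
                            fromfile="startup-config", tofile="running-config",
                            lineterm="")
print("\n".join(diff))
```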
693929980 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: THIRD_PARTY_PRODUCT_SW/ 16.03.01/ 16.07.02/ 122.33 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
Software Version: -- => 17.3.4a
Technology: Router and IOS-XE Architecture
Subtechnology: IOS-XE Memory Leaks
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: OSPF and BGP reset on SDA Borders
OSPF and BGP reset on SDA Borders
timestamp : 2023-01-09T13:29:50.000+0000 || updatedby : akbashir || type : RESOLUTION SUMMARY || visibility : External || details : No Response from
Closing as per 3 strike | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: THIRD_PARTY_PRODUCT_HW/ VS-C6509E-S720-10G/ CSR-ASR1K2-CHT1-1/ ASR1009-X Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Technology: Router and IOS-XE Architecture
Subtechnology: IOS-XE Memory Leaks
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: OSPF and BGP reset on SDA Borders
OSPF and BGP reset on SDA Borders
timestamp : 2023-01-09T13:29:50.000+0000 || updatedby : akbashir || type : RESOLUTION SUMMARY || visibility : External || details : No Response from
Closing as per 3 strike | Technology: Router and IOS-XE Architecture
Subtechnology: IOS-XE Memory Leaks
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: OSPF and BGP reset on SDA Borders | OSPF and BGP reset on SDA Borders | timestamp : 2023-01-09T13:29:50.000+0000 || updatedby : akbashir || type : RESOLUTION SUMMARY || visibility : External || details : No Response from
Closing as per 3 strike | nan | ASR1009-X | asr1001x-universalk9.16.07.02.SPA.bin | nan | nan | 3 | nan | Configuration Assistance (process not intuitive, too complex, inconsistent...) | ASR1000 | Routing Protocols (Includes NAT and HSRP) | BGP | nan | nan | nan | nan | nan | nan |
694642521 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 1/ 8.10.151.0/ 8.10.171.0/ 802.11AC/ 8.5.171.0/ 8.10.183.0/ 8.10.130.0/ 8.10.171.0/ THIRD_PARTY_PRODUCT_SW/ 8.5.135.0/ 8.3.143.0/ NA-CIN_CLOSE_SW/ 8.5.140.0/ 4.0.1a/ WLC CLIENT INTEROP/ 12.0.13/ 8.5.151.0/ 8.4/ 120.13/ 8.5.160.0/ 8.10.151.0/ 8.3.143.0/ 8.3.131.0/ 8.3.150.0/ 8.5.135.0/ 8.5.161.0/ 8.10.185.0/ 8.2.166.0/ 8.5.140.0/ 8.5.151.0/ 8.3.150.0/ 16.12.3s/ WLC RRM/RF/CLEANAIR/ 8.5.151.0/ 8.10.130.0/ NA - COMPONENT ONLY/ 8.10.162.0/ 8.3.140.0/ 8.10.150.0/ 8.10.183.0/ 8.10.161.0/ 8.10.185.0/ 8.5.151.0/ 16.12.4/ 17.3.5b/ 8.10.183.0/ 8.10.171.0/ 8.5.171.0 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
8.10.185.3 special image shared with customer with bug fixes for AP crash issue
Thank you for your email.
We received an update from our escalation team, and we have opened two new bugs internally to further investigate; below is the status:
1
AP BIZ2-04-AP24
Kernel Panic/ PC is at __udelay+0x30/0x48
1. 0xffffffc000338900 __udelay + 0x30
2. 0xffffffbffc44d52c osl_delay + 0x2c
New bug - CSCwe91301 for further investigation
2
AP TB-02-AP04
Kernel Panic/ PC is at get_partial_node.isra.24+0x140/0x288
1. 0xffffffc000088d30 dump_backtrace + 0x150
2. 0xffffffc000088e94 show_stack + 0x20
3. 0xffffffc000668b30 dump_stack + 0xb0
4. 0xffffffc000156058 print_bad_pte + 0x1d0
5. 0xffffffc0001579cc unmap_single_vma + 0x5d0
6. 0xffffffc0001582d8 unmap_vmas + 0x68
7. 0xffffffc00015f9b4 exit_mmap + 0xf8
8. 0xffffffc00009c354 mmput + 0x118
9. 0xffffffc0000a0afc do_exit + 0x8e0
10. 0xffffffc0000a120c do_group_exit + 0xb0
11. 0xffffffc0000a1290 __wake_up_parent + 0x28
New bug - CSCwe91264 for further investigation
3
AP ERC-02-AP49
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
AP ERC-02-AP51
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
AP ERC-B1-AP03
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
The below 3 AP crashes belong to AP model 1810W, which is EOL.
I will further check with the team internally to see how we can proceed on this:
4
AP i4-05-AP55
Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4
1. 0xc039a954 skb_recycler_alloc + 0x84
2. 0xc037bbe0 dev_alloc_skb_poolid + 0x24
3. 0xbfbe9678 __adf_nbuf_alloc + 0x28
4. 0xbfccff54 wbuf_alloc + 0x5c
5. 0xbfd08e2c wmi_buf_alloc + 0x20
6. 0xbfcf8fb4 wmi_unified_set_qboost_param + 0x30
7. 0xbfcf904c qboost_config + 0x18
8. 0xbfcf92d8 ol_ath_net80211_newassoc + 0x288
9. 0xbfc6a65c ieee80211_mlme_recv_assoc_request + 0x418
10. 0xbfc1ed5c ieee80211_ucfg_splitmac_add_client + 0xc3c
11. 0xbfcb8d04 acfg_add_client + 0x280
12. 0xbfcb92dc acfg_handle_vap_ioctl + 0x10c
13. 0xbfca73b4 ieee80211_ioctl + 0x2720
14. 0xc038962c dev_ifsioc + 0x300
15. 0xc0389db8 dev_ioctl + 0x764
16. 0xc0372e80 sock_ioctl + 0x260
17. 0xc0104730 do_vfs_ioctl + 0x5a0
18. 0xc01047b8 sys_ioctl + 0x3c
19. 0xc000e680 ret_fast_syscall + 0x0
AP i4-05-AP55
Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4
1. 0xc039a954 skb_recycler_alloc + 0x84
2. 0xc037bbe0 dev_alloc_skb_poolid + 0x24
3. 0xbfbe9678 __adf_nbuf_alloc + 0x28
4. 0xbfccff54 wbuf_alloc + 0x5c
5. 0xbfd08e2c wmi_buf_alloc + 0x20
6. 0xbfcf8fb4 wmi_unified_set_qboost_param + 0x30
7. 0xbfcf904c qboost_config + 0x18
8. 0xbfcf92d8 ol_ath_net80211_newassoc + 0x288
9. 0xbfc6a65c ieee80211_mlme_recv_assoc_request + 0x418
10. 0xbfc1ed5c ieee80211_ucfg_splitmac_add_client + 0xc3c
11. 0xbfcb8d04 acfg_add_client + 0x280
12. 0xbfcb92dc acfg_handle_vap_ioctl + 0x10c
13. 0xbfca73b4 ieee80211_ioctl + 0x2720
14. 0xc038962c dev_ifsioc + 0x300
15. 0xc0389db8 dev_ioctl + 0x764
16. 0xc0372e80 sock_ioctl + 0x260
17. 0xc0104730 do_vfs_ioctl + 0x5a0
18. 0xc01047b8 sys_ioctl + 0x3c
19. 0xc000e680 ret_fast_syscall + 0x0
AP CELC-03-AP11
Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4
1. 0xc039a954 skb_recycler_alloc + 0x84
2. 0xc037bbe0 dev_alloc_skb_poolid + 0x24
3. 0xbfbe9678 __adf_nbuf_alloc + 0x28
4. 0xbfd34cac htt_h2t_dbg_stats_get + 0x6c
5. 0xbfd2c218 ol_txrx_fw_stats_get + 0x110
6. 0xbfcd94c0 ol_ath_fw_stats_timeout + 0xb0
7. 0xc006b664 run_timer_softirq + 0x168
8. 0xc0065ba0 __do_softirq + 0xf4
9. 0xc0066070 irq_exit + 0x54
10. 0xc000ef5c handle_IRQ + 0x88
11. 0xc00085ec gic_handle_irq + 0x94
12. 0xc000e340 __irq_svc + 0x40
The below two AP crashes are still being investigated, and we will have an update shortly.
5
AP IT-01-AP13
Kernel Panic/ PC is at misc_open+0x48/0x198
0xffffffc0003d01c8 misc_open + 0x48
0xffffffc0003d01b0 misc_open + 0x30
AP E1-05-AP08
Kernel Panic/ PC is at misc_open+0x48/0x198
1. 0xffffffc0003d01c8 misc_open + 0x48
2. 0xffffffc0003d01b0 misc_open + 0x30
Could you please help me collect all .txt files from /storages/cores/ from the 3 APs (AP ERC-02-AP49, AP ERC-02-AP51 & AP ERC-B1-AP03) so I can share them with our team for further analysis.
Please let me know your availability so I can collect the required logs.
Please feel free to let me know if you have any further queries.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue.
Recommended to upgrade the SDA WLC to 8.10.181 as well.
Jan 27th: from the crash file of the 9120:
PC is at rb_next+0xc/0x6c
LR is at pick_next_task_fair+0x48/0x160
CSCwa34136 Cisco 3802 FQI/NMI reset at rb_next+0xc
the fix is to upgrade to 8.10.171
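As an aside, the decode-and-match workflow described in these notes (take the "PC is at ..." line from the crash file and look it up against known defects) can be sketched as below. The signature table only restates the defects and fixed-in releases quoted in this thread; the function names are illustrative.

```python
# Crash signatures and (defect, fixed-in release) pairs quoted in this case.
KNOWN_SIGNATURES = {
    "rb_next": ("CSCwa34136", "8.10.171"),
    "skb_recycler_alloc": ("CSCvz08781", "8.10.161"),
}

def match_crash(pc_line: str):
    """Match the function name in a 'PC is at func+off/len' line against known defects."""
    for func, hit in KNOWN_SIGNATURES.items():
        if func in pc_line:
            return hit
    return None

print(match_crash("PC is at rb_next+0xc/0x6c"))  # ('CSCwa34136', '8.10.171')
```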
During the upgrade customer was facing FN - 72524.
Customer will perform the upgrade from 6 PM SGT to next day 7 AM SGT. They plan to do reboots at 12 midnight of 24th feb and 25th feb.
Commsbridge - CRQ000002047334 - WLC (.13) Upgrade to 8.10.183
Hosted by Samuel Foo
https://techdynamicpteltd.my.webex.com/techdynamicpteltd.my/j.php?MTID=mb7f4582654ff010cfd61f959ef1cf2ed
Saturday, February 18, 2023 11:00 PM | 8 hours | (UTC+08:00) Kuala Lumpur, Singapore
Meeting number: 2550 952 0840
Password: iuMJ34W3bJR (48653493 from phones and video systems)
Join by video system
Dial [email protected]
You can also dial 173.243.2.68 and enter your meeting number.
Join by phone
+1-650-479-3208 United States Toll
Access code: 255 095 20840
Only CSCvz08781 is fixed on 8.5.171, though Wism2 is not mentioned under the list of products on this bug.
Please feel free to let me know if you have any further queries.
Thanks, Sharath, for the update.
Can you confirm if the below bugs are fixed in 8.5.171.0:
MPSH issue bug https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvz08781
Logs :-
Nov 22 11:08:12 kernel: [*11/22/2022 11:08:12.9134] Re-Tx Count=2, Max Re-Tx Value=5, SendSeqNum=5, NumofPendingMsgs=67
Nov 22 11:08:12 kernel: [*11/22/2022 11:08:12.9135]
Nov 22 11:08:15 kernel: [*11/22/2022 11:08:15.8115] Re-Tx Count=3, Max Re-Tx Value=5, SendSeqNum=8, NumofPendingMsgs=70
Nov 22 11:08:15 kernel: [*11/22/2022 11:08:15.8115]
Nov 22 11:08:18 kernel: [*11/22/2022 11:08:18.7069] Re-Tx Count=4, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73
Nov 22 11:08:18 kernel: [*11/22/2022 11:08:18.7069]
Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039] Re-Tx Count=5, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73
Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039]
Nov 22 11:08:24 kernel: [*11/22/2022 11:08:24.4998] Max retransmission count exceeded, going back to DISCOVER mode.
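The log pattern above is the AP's CAPWAP retransmission counter reaching its maximum, after which the AP falls back to DISCOVER mode. A minimal sketch of spotting that condition in a syslog stream; the regex and threshold logic are illustrative, not a Cisco tool.

```python
import re

line = ("Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039] "
        "Re-Tx Count=5, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73")

m = re.search(r"Re-Tx Count=(\d+), Max Re-Tx Value=(\d+)", line)
if m and int(m.group(1)) >= int(m.group(2)):
    print("CAPWAP retransmissions exhausted; expect fallback to DISCOVER mode")
```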
CLIB issue bug https://bst.cisco.com/bugsearch/bug/CSCvy37953
Logs :-
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0011] dtls_cssl_msg_cb: Received >>> DTLS 1.2 Handshake [Length 00ac] ClientHello
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0011] dtls_connectionDB_add_connection: Added Connection 0x54aaea00 Server [172.18.240.26]:5246 Client [10.253.159.139]:5256
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0011]
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0011] create_dtls_connection: Creating DTLS Ctrl Connection 0x54aaea00
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0012] DTLS connection created sucessfully local_ip: 10.253.159.139 local_port: 5256 peer_ip: 172.18.240.26 peer_port: 5246
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0022] local_in_addr_comp: Client and server addresses/port/version of 2 nodes are [10.253.159.139]:5256(0)--[172.18.240.26]:5246(0) [10.253.159.139]:5256--[172.18.240.26]:5246
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0022] dtls_connection_find_using_link_info: Searching connection [10.253.159.139]:5256--[172.18.240.26]:5246, result 0x54aaea00
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0023] dtls_cssl_msg_cb: Sent <<< DTLS Header [Length 000d]
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0023] dtls_cssl_msg_cb: Sent <<< DTLS 1.2 Handshake [Length 002f] HelloVerifyRequest
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0024] dtls_cssl_msg_cb: Received >>> DTLS Header [Length 000d]
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0024] dtls_cssl_msg_cb: Received >>> DTLS 1.2 Handshake [Length 00cc] ClientHello
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0025] dtls_process_packet: SSL get error rc 5
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0031] local_in_addr_comp: Client and server addresses/port/version of 2 nodes are [10.253.159.139]:5256(0)--[172.18.240.26]:5246(0) [10.253.159.139]:5256--[172.18.240.26]:5246
Feb 2 21:07:49 kernel: [*02/02/2023 21:07:49.0032] dtls_connection_find_using_link_info: Searching connection [10.253.159.139]:5256--[172.18.240.26]:5246, result 0x54aaea00
I had a check internally and found that Wism2 is EOL and past end of support; hence 8.10.183 is not available for it.
The last version released for Wism2 was 8.5.171.
Please feel free to let me know if you have any further queries.
Can we get 8.10.183.0 for Wism2?
It's urgent.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue.
Recommended to upgrade the SDA WLC to 8.10.181 as well.
Jan 27th: from the crash file of the 9120:
PC is at rb_next+0xc/0x6c
LR is at pick_next_task_fair+0x48/0x160
CSCwa34136 Cisco 3802 FQI/NMI reset at rb_next+0xc
the fix is to upgrade to 8.10.171
During the upgrade customer was facing FN - 72524.
While uploading the image to the WLC, we are observing that the APs are taking a very long time to predownload (a summary sketch follows the output below).
We are upgrading from 8.10.162.0 to 8.10.183.0.
(TLDC-WLC1) >show ap image all
Total number of APs.............................. 1340
Number of APs
Initiated....................................... 2
Downloading..................................... 0
Predownloading.................................. 1000
Completed predownloading........................ 0
Not Supported................................... 0
Failed to Predownload........................... 0
Predownload Predownload Flexconnect
AP Name Primary Image Backup Image Status Version Next Retry Time Retry Count Predownload
------------------ -------------- -------------- --------------- -------------- ---------------- ------------ --------------
RC1_DH-02-AP07 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
RC1_DH-02-AP08 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
RC1-02-AP01 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
SDE1-04-AP03 8.10.162.0 8.3.150.6 Waiting 8.10.183.0 01:12:02 2
RC1-02-AP10 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
RC1-02-AP09 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
RC1-02-AP06 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
SDE3-03-AP16 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
RC2-GF-AP14 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
SDE3-02-AP06 8.10.162.0 8.10.130.0 Predownloading 8.10.183.0 NA 0
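The summary sketch mentioned above: tallying predownload states from `show ap image all` rows to watch upgrade progress. The row list is a hand-copied sample from the output above; a real script would parse the full CLI output.

```python
from collections import Counter

# A few rows from the "show ap image all" output above, reduced to
# (AP name, predownload status).
ROWS = [
    ("RC1_DH-02-AP07", "Predownloading"),
    ("RC1_DH-02-AP08", "Predownloading"),
    ("SDE1-04-AP03", "Waiting"),
]

print(Counter(status for _, status in ROWS))
# Counter({'Predownloading': 2, 'Waiting': 1})
```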
Please join the Zoom meeting.
Join Zoom Meeting
https://nus-sg.zoom.us/j/84695360268?pwd=Sng1Tk16R1dNZXVoaGhtSTFjUVZuQT09
Meeting ID: 846 9536 0268
Passcode: 239230
To join using H.323: https://wiki.nus.edu.sg/display/cit/Making+H.323+or+SIP+Calls
My name is Juan Felipe, and I have ownership of Service Request 694642521.
I am sending this e-mail as an initial point of contact.
Problem Description:
Initial issue:
When we had a larger number of clients connected to the APs (around 70-80 clients), a few of them stopped serving clients, and after a reboot clients connected fine.
The crash was generated after rebooting the APs to restore the network.
You had a fabric setup with over 4000 APs.
Jan 30th: pinged DE internally to find out if the bug is fixed in 8.10.183.
Here is a brief summary of our call:
1. There is a query from Biswajit: can we upgrade the WLC 8.10.181.4 to 8.10.183.0, since 8.10.181.0 is deferred?
I believe the answer is Yes but wanted to double confirm from Cisco TAC.
I just got a confirmation from the DE team that 8.10.183.0 will be supported with DNAC 2.2.2.9.
Compatibility Matrix should be updated by this week.
Please feel free to let us know if you have any further queries.
You are right.
I have requested confirmation of compatibility between DNAC 2.2.2.9 and WLC 8.10.183.
I will keep you updated as soon as I hear from the DE team.
Please note the correction: we need compatibility confirmation on DNAC 2.2.2.9 with 8.10.183.0 from Cisco.
I just had a call with TAC engineer.
TAC confirmed the 2802i CCLIB certificate issue shows the buggy behaviour tracked under https://bst.cisco.com/bugsearch/bug/CSCvy37953.
This issue is fixed in 8.10.171.0, and the fix is expected to be in 8.10.183.0 as well.
TAC will share the findings shortly with logs.
Yes, testing was done on 26th Jan for POVWLC1 and 4.
Below is the snippet.
Conclusion :- No compatibility issues found between 2.2.2.9 and 8.10.183.0.
+Baby John
Hi Satyam, you have tested this OS 8.10.183 in the POV environment with DNAC 2.2.2.9; did you find any issues? Please share.
I apologize for the confusion.
Yes, you are right, 8.10.183 is not in the list of supported releases for DNAC 2.2.2.9.
Since 8.10.181 has been deferred and is not available for download, please allow me a day so I can check with the DNAC DE team whether they can include 8.10.183 in the compatibility matrix, and also get a confirmation that they have done some testing so you can proceed with the upgrade.
Please feel free to let us know if you have any further queries.
I also just noticed the same.
The screenshot shared below is from 2.3.3.6 (by TAC), but I concur with checking the same for 2.2.2.9 compatibility with 8.10.183.0.
I have reached out to the Cisco TM on the same and expect to get a response from TAC in 30 mins.
Hi Satyam / Sharath
Do note that the current production DNAC is on ver 2.2.2.9.
However, 8.10.183 is not officially in the compatibility list with DNAC 2.2.2.9.
Cisco team, please advise if we can still proceed with the WLC firmware upgrade.
I had a discussion with the DE, and yes, the fix for the bug is integrated into release 8.10.183 as well.
AP crash issue
Initial issue:
When we had a larger number of clients connected to the APs (around 70-80 clients), a few of them stopped serving clients, and after a reboot clients connected fine. The crash was generated after rebooting the APs to restore the network.
You had a fabric setup with over 4000 APs.
Jan 30th: pinged DE internally to find out if the bug is fixed in 8.10.183.
Here is a brief summary of our call:
1. You had 4-5 AP crashes, which were 2802i AP models, and we collected the crash logs and tech-support logs from the affected APs.
2. Since you wanted the SDA team to look into the fabric part to figure out if there are any issues with the routing part, I involved my SDA team on the call.
3. As per our SDA team's analysis, the fabric underlay path is not a traditional design, and my colleague will give you more details about his findings and the next plan of action.
Can we say that the bug, which is fixed in the earlier releases 8.10.168.0 and 8.10.171.0 (both now deferred), will also be fixed in 8.10.183.0?
Urgently.
The code 8.10.183 is currently not listed in the integrated releases for this bug.
Thanks for your support!
Can you also confirm that this fix is integrated in 8.10.183.0.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue.
Recommended to upgrade the SDA WLC to 8.10.181 as well.
Jan 27th: from the crash file of the 9120:
PC is at rb_next+0xc/0x6c
LR is at pick_next_task_fair+0x48/0x160
CSCwa34136 Cisco 3802 FQI/NMI reset at rb_next+0xc
the fix is to upgrade to 8.10.171
the fix is to upgrade to 8.10.171 for the crash info seen for 9120 AP
Thank you for your email.
In this case I would recommend upgrading to 8.10.183 version which is the recommended version.
Here is the compatibility matrix for your reference:
Since the defect CSCvx96224 is fixed on 8.10.181.x, it is by default committed on 8.10.183.
Here is the link to 8.10.183 version:
https://software.cisco.com/download/home/286284728/type/280926587/release/8.10.183.0
Let me know if you are unable to download it, so I can publish it for you.
Please feel free to let me know if you have any further queries.
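The "fixed on 8.10.181.x, by default committed on 8.10.183" reasoning above assumes fixes carry forward along the same release train. A minimal sketch of that check as a naive numeric comparison (illustrative only, not an official Cisco tool; the real source of truth is the Bug Search Tool's known-fixed-releases list):

```python
def release(s: str) -> tuple:
    """'8.10.181.0' -> (8, 10, 181, 0) so releases compare numerically."""
    return tuple(int(part) for part in s.split("."))

def carries_fix(fixed_in: str, target: str) -> bool:
    """True if `target` is the same or a later release on the train than `fixed_in`."""
    return release(target) >= release(fixed_in)

print(carries_fix("8.10.181.0", "8.10.183.0"))  # True
print(carries_fix("8.10.181.0", "8.10.171.0"))  # False
```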
I can see that 8.10.181.0 is marked as deferred now due to a different ongoing bug, CSCwd80290 <https://bst.cisco.com/bugsearch/bug/CSCwd80290>.
It is not available to download.
https://software.cisco.com/download/home/286284738/type/280926587/release/8.10.181.0?releaseIndicator=DEFERRED
Could you confirm what can be done next?
Hey Sharath,
Thanks for your confirmation and I hope everything is fine at your place.
In this case we can proceed with adoption of 8.10.181.0 instead of the escalation image 8.10.181.4.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue
CAPWAP traffic from the WLC to the AP takes a path via the fabric edge, which might be the reason the CAPWAP keepalives get dropped.
For traffic from the WLC to the AP, we checked a packet capture on the AP-connected port, and we see the source as an FE (the encapsulation is sketched after the dissection below).
Frame 20642: 173 bytes on wire (1384 bits), 173 bytes captured (1384 bits) on interface \Device\NPF_{F90C5072-EDB0-4466-991E-486944B5A317}, id 0
Ethernet II, Src: Cisco_9a:32:c0 (70:79:b3:9a:32:c0), Dst: Cisco_36:70:e7 (f8:7a:41:36:70:e7)
Internet Protocol Version 4, Src: 172.18.1.2, Dst: 172.18.1.61
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x18 (DSCP: Unknown, ECN: Not-ECT)
Total Length: 159
Identification: 0x81b3 (33203)
010. .... = Flags: 0x2, Don't fragment
...0 0000 0000 0000 = Fragment Offset: 0
Time to Live: 250
Protocol: UDP (17)
Header Checksum: 0xa41e [validation disabled]
[Header checksum status: Unverified]
Source Address: 172.18.1.2
Destination Address: 172.18.1.61
User Datagram Protocol, Src Port: 65419, Dst Port: 4789
Virtual eXtensible Local Area Network
Ethernet II, Src: Cisco_9f:00:00 (00:00:0c:9f:00:00), Dst: ba:25:cd:f4:ad:38 (ba:25:cd:f4:ad:38)
Internet Protocol Version 4, Src: 172.18.240.13, Dst: 10.253.144.11
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x18 (DSCP: Unknown, ECN: Not-ECT)
Total Length: 109
Identification: 0x50bf (20671)
010. .... = Flags: 0x2, Don't fragment
...0 0000 0000 0000 = Fragment Offset: 0
Time to Live: 251
Protocol: UDP (17)
Header Checksum: 0xf77f [validation disabled]
[Header checksum status: Unverified]
Source Address: 172.18.240.13
Destination Address: 10.253.144.11
User Datagram Protocol, Src Port: 5246, Dst Port: 5252
Control And Provisioning of Wireless Access Points - Control
Preamble
Datagram Transport Layer Security
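The sketch referenced above (not part of the TAC analysis): rebuilding the encapsulation from this dissection with Scapy, assuming Scapy is installed. The outer frame is the fabric VXLAN transport on UDP 4789; the inner packet is the CAPWAP control channel on UDP 5246; the VNI and payload bytes are placeholders since they are not in the capture excerpt.

```python
from scapy.all import Ether, IP, UDP, Raw
from scapy.layers.vxlan import VXLAN

pkt = (
    Ether(src="70:79:b3:9a:32:c0", dst="f8:7a:41:36:70:e7")      # outer frame (FE -> AP port)
    / IP(src="172.18.1.2", dst="172.18.1.61", ttl=250)
    / UDP(sport=65419, dport=4789)                               # VXLAN transport
    / VXLAN(vni=0x1234)                                          # placeholder VNI
    / Ether(src="00:00:0c:9f:00:00", dst="ba:25:cd:f4:ad:38")    # inner frame
    / IP(src="172.18.240.13", dst="10.253.144.11", ttl=251)      # WLC -> AP
    / UDP(sport=5246, dport=5252)                                # CAPWAP control
    / Raw(b"...capwap control + DTLS...")                        # placeholder payload
)
pkt.show()
```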
Dec 1st 2022:
radio reloaded: sensord cmd(0x03) failed slot 1
[cmd timeout] wifi1: 0x9184=unknown intCode:0x1184 last 0x801c=SetRadio
The crash signature matches the bug mentioned below, which has a fix in 8.10.171.
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224
Informed customer to upgrade WLC to 8.10.181 which is DNAC 2.3 compatible
Informed customer that bug on CSCvx96224 is fixed on 8.10.181.0
I apologize, I was on an emergency leave.
I had a check internally and I can confirm that the bug CSCvx96224<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224> is fixed on 8.10.181.0.
Please feel free to let me know if you have any further queries and I will be glad to assist.
Can you confirm if 8.10.181.0 has the fix for the affected bug “bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224”, because I cannot see this bug under resolved caveats in the release notes https://www.cisco.com/c/en/us/td/docs/wireless/controller/release/notes/crn810mr8.html
We need confirmation from Cisco that the bug fix is in 8.10.181.0 before adoption.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue
CAPWAP traffic from the WLC to the AP takes a path via the fabric edge, which might be the reason the CAPWAP keepalives get dropped.
For traffic from the WLC to the AP, we checked a packet capture on the AP-connected port, and we see the source as an FE.
Frame 20642: 173 bytes on wire (1384 bits), 173 bytes captured (1384 bits) on interface \Device\NPF_{F90C5072-EDB0-4466-991E-486944B5A317}, id 0
Ethernet II, Src: Cisco_9a:32:c0 (70:79:b3:9a:32:c0), Dst: Cisco_36:70:e7 (f8:7a:41:36:70:e7)
Internet Protocol Version 4, Src: 172.18.1.2, Dst: 172.18.1.61
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x18 (DSCP: Unknown, ECN: Not-ECT)
Total Length: 159
Identification: 0x81b3 (33203)
010. .... = Flags: 0x2, Don't fragment
...0 0000 0000 0000 = Fragment Offset: 0
Time to Live: 250
Protocol: UDP (17)
Header Checksum: 0xa41e [validation disabled]
[Header checksum status: Unverified]
Source Address: 172.18.1.2
Destination Address: 172.18.1.61
User Datagram Protocol, Src Port: 65419, Dst Port: 4789
Virtual eXtensible Local Area Network
Ethernet II, Src: Cisco_9f:00:00 (00:00:0c:9f:00:00), Dst: ba:25:cd:f4:ad:38 (ba:25:cd:f4:ad:38)
Internet Protocol Version 4, Src: 172.18.240.13, Dst: 10.253.144.11
0100 .... = Version: 4
.... 0101 = Header Length: 20 bytes (5)
Differentiated Services Field: 0x18 (DSCP: Unknown, ECN: Not-ECT)
Total Length: 109
Identification: 0x50bf (20671)
010. .... = Flags: 0x2, Don't fragment
...0 0000 0000 0000 = Fragment Offset: 0
Time to Live: 251
Protocol: UDP (17)
Header Checksum: 0xf77f [validation disabled]
[Header checksum status: Unverified]
Source Address: 172.18.240.13
Destination Address: 10.253.144.11
User Datagram Protocol, Src Port: 5246, Dst Port: 5252
Control And Provisioning of Wireless Access Points - Control
Preamble
Datagram Transport Layer Security
Dec 1st 2022:
radio reloaded: sensord cmd(0x03) failed slot 1
[cmd timeout] wifi1: 0x9184=unknown intCode:0x1184 last 0x801c=SetRadio
The crash signature matches the bug mentioned below, which has a fix in 8.10.171.
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224
Informed customer to upgrade WLC to 8.10.181 which is DNAC 2.3 compatible
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue.
Recommended to upgrade the SDA WLC to 8.10.181 as well.
Hope you are doing well!
I just received a confirmation that you can upgrade the WLC to 8.10.181.
Our SDA team ran a sanity check with version 8.10.181.0 and everything works fine with DNAC.
I would request you to go ahead with upgrading to 8.10.181.0.
Please feel free to let me know if you have any further queries.
Findings from crash logs:
1. From the two AP crash logs, it looks like they have different crash signatures.
Decoding one of the crash signatures, we found it to be similar to bug CSCvz08781, for which the solution is to upgrade to 8.10.161.
2. Since we were not able to find a hit for the second crash, we will have a discussion internally to see if this is a new issue
CAPWAP traffic from the WLC to the AP takes a path via the fabric edge, which might be the reason the CAPWAP keepalives get dropped.
For traffic from the WLC to the AP, we checked a packet capture on the AP-connected port, and we see the source as an FE.
Requested SDWAN team to share the 8.10.181 compatible firmware for SDWAN
Hi Vaibhav / Sharath / Kishor / Ramya,
Greetings !
As per the understanding provided in the mail trail below, it seems we cannot have a stable SDI environment due to compatibility issues: the bug https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224 is fixed in 8.10.181.4, which is not SDI compatible.
@Kishorbabu C (kishobab)<mailto:[email protected]> and @Ramya M (ramym)<mailto:[email protected]> Please discuss internally and share an approach to get a fixed image from DE BU.
Suggestions :-
* Either 8.10.151.0 - which is SDA compatible image can have the SMU patch for this bug.
* Or 8.10.181.4 - can be engineered for SDA.
I checked and even for latest DNAC release 2.3.x ( 2.3.3.5 and 2.3.4) , I do not see 8.10.181.4 as supported/recommended image for SDA.
Thank you for the update.
Glad to hear that the APs are stable now after the upgrade.
Let me request my colleague from SDA to assist with the compatible version.
@Vibhav Shinde (vibshind)<mailto:[email protected]>,
Could you please assist us with the 8.10.181.4 compatible versions for SDN?
Thanks for your response.
I hope you would be doing well now.
As an update I would like to inform you that we have upgraded our on production WLC (TLDC-WLC2) to 8.10.181.4.
So far we have not seen any issues with APs resetting the CAPWAP tunnel or radio resets causing client disconnections.
But upgrading the WLC to fixed code and migrating the impacted site's APs to the fixed WLC code is a workaround approach that cannot be taken forward.
We require 8.10.181.4-compatible versions for the SDN (Fabric) WLCs (8540s, AireOS) for a long-term solution.
Hi Ameera, we discussed that the bug-fixed engineering image is 8.10.181.4 (right?)
We also discussed there is no fixed image for SDA, and Sharath shall help in getting an engineering image for the SDA WLC (presently running 8.10.151.0 with DNAC 2.2.2.9).
Regarding the packet drops in the logs below, Cisco found this happens when there is load, which is causing the AP to go down.
That is the BUG.
Confirmed by Oliver (in CC) that when the load is not there, the packet drops are also not there.
BUG description: Cisco Aironet Access Point (AP) radio may trigger a device to reload under certain conditions of high device data utilization leading to packet drops
Nov 22 11:08:24 kernel: [*11/22/2022 11:08:24.4998] Max retransmission count exceeded, going back to DISCOVER mode.
* Why are we saying that there is a network issue ("This could be due to packet drop between WLC and AP and suggested collecting caps.")?
Is Cisco not convinced about the bug? Pls clarify.
NCS maintenance team, Pls follow up with Cisco to get the image.
Thanks for the time on the call.
As summary:
* The logs from one AP show a wired-side issue: the AP is losing the keep-alive packets and reached the max retransmissions (a parsing sketch for these keepalive logs follows this summary).
* This could be due to packet drops between the WLC and AP; we suggested collecting captures.
* From WLC side we can increase the AP count and intervals :
* https://www.cisco.com/c/en/us/td/docs/wireless/controller/8-1/configuration-guide/b_cg81/b_cg81_chapter_01111001.pdf
Nov 22 11:08:12 kernel: [*11/22/2022 11:08:12.9134] Re-Tx Count=2, Max Re-Tx Value=5, SendSeqNum=5, NumofPendingMsgs=67
Nov 22 11:08:12 kernel: [*11/22/2022 11:08:12.9135]
Nov 22 11:08:15 kernel: [*11/22/2022 11:08:15.8115] Re-Tx Count=3, Max Re-Tx Value=5, SendSeqNum=8, NumofPendingMsgs=70
Nov 22 11:08:15 kernel: [*11/22/2022 11:08:15.8115]
Nov 22 11:08:18 kernel: [*11/22/2022 11:08:18.7069] Re-Tx Count=4, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73
Nov 22 11:08:18 kernel: [*11/22/2022 11:08:18.7069]
Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039] Re-Tx Count=5, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73
Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039]
Nov 22 11:08:24 kernel: [*11/22/2022 11:08:24.4998] Max retransmission count exceeded, going back to DISCOVER mode.
* Regarding the AP crash, we can see we are hitting this bug https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvz08781 which is fixed in the latest 8.10 release.
As we agreed @Sharath Chandrashekar (sharcha2)<mailto:[email protected]> will reach the escalation team to check the below thing related to the bug:
* What is the amount of data/clients that the AP can handle before it crashes.
* We need a fixed version that is compatible with a fabric network and DNAC version 2.2.2.9.
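As referenced in the first bullet of this summary, the keepalive exhaustion can be confirmed mechanically from the kernel log excerpt. Below is a minimal Python sketch, assuming the "Re-Tx Count=..., Max Re-Tx Value=..." line format shown in the logs; the sample line is copied from the excerpt above.

import re

# Matches lines like: "Re-Tx Count=5, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73"
RETX_RE = re.compile(r"Re-Tx Count=(\d+), Max Re-Tx Value=(\d+)")

def keepalive_retransmits(lines):
    """Yield (count, max_count) pairs; the AP falls back to DISCOVER once count reaches the max."""
    for line in lines:
        m = RETX_RE.search(line)
        if m:
            yield int(m.group(1)), int(m.group(2))

# Illustrative usage against a line from the excerpt above
sample = [
    "Nov 22 11:08:21 kernel: [*11/22/2022 11:08:21.6039] Re-Tx Count=5, Max Re-Tx Value=5, SendSeqNum=11, NumofPendingMsgs=73",
]
for count, maximum in keepalive_retransmits(sample):
    if count >= maximum:
        print(f"Max retransmissions reached ({count}/{maximum}); AP will re-enter DISCOVER")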
Technology: Wireless
Subtechnology: 8540 Series Wireless LAN Controller (AIR-CT8540)
Problem Code: Software Failure
Product: NA
Product Family: AIRCTA2
Software Version: N/A
Router/Node Name: N/A
Problem Details: Problem Description :- Clients unable to connect to an AP suspecting AP crash.
Setup :- SDA
WLC :- 172.18.240.13 – 8.10.151.0
DNAC :- 172.18.12.35
Impacted Location :- Global/NUS_fabric1/KR-SCIYIH/SCIYIH-MPSH/SCIYIH-MPSH2
AP’s impacted :-
1. MPSH2-02-AP20 - 10.253.152.238
2. MPSH2-02-AP22 - 10.253.190.6
3. MPSH2-02-AP23
Observations :-
At approximately 9:15 am SGT we got a complaint that clients are not able to connect to the NUS SSID.
We checked in Prime and found that AP 22 and AP 20 were getting crash alerts.
Checked and collected logs for AP 10.253.152.238.
After checking further, we can see that many APs got rebooted.
timestamp : 2023-07-20T04:31:04.000+0000 || updatedby : sharcha2 || type : RESOLUTION SUMMARY || visibility : External || details : 8.10.185.3 special image shared with customer with bug fixes for AP crash issue | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: AIR-CT8540-1K-K9/ AIR-CT8540-K9/ NA-CIN_CLOSE_HW/ C9120AXI-S/ THIRD_PARTY_PRODUCT_HW/ AIR-CT8540-CA-K9/ AIR-AP2802I-S-K9/ AIR-CT8540-K9Z/ AIR-BZL-C240M4/ DN2-HW-APL-XL/ WLC-AP-T/ AIR-CT8540-SW-8.2 Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Thank you for your email.
We received an update from our escalation team; we have opened two new bugs internally to investigate further, and below is the status:
1
AP BIZ2-04-AP24
Kernel Panic/ PC is at __udelay+0x30/0x48
1. 0xffffffc000338900 __udelay + 0x30
2. 0xffffffbffc44d52c osl_delay + 0x2c
New bug - CSCwe91301 for further investigation
2
AP TB-02-AP04
Kernel Panic/ PC is at get_partial_node.isra.24+0x140/0x288
1. 0xffffffc000088d30 dump_backtrace + 0x150
2. 0xffffffc000088e94 show_stack + 0x20
3. 0xffffffc000668b30 dump_stack + 0xb0
4. 0xffffffc000156058 print_bad_pte + 0x1d0
5. 0xffffffc0001579cc unmap_single_vma + 0x5d0
6. 0xffffffc0001582d8 unmap_vmas + 0x68
7. 0xffffffc00015f9b4 exit_mmap + 0xf8
8. 0xffffffc00009c354 mmput + 0x118
9. 0xffffffc0000a0afc do_exit + 0x8e0
10. 0xffffffc0000a120c do_group_exit + 0xb0
11. 0xffffffc0000a1290 __wake_up_parent + 0x28
New bug - CSCwe91264 for further investigation
3
AP ERC-02-AP49
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
AP ERC-02-AP51
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
AP ERC-B1-AP03
CALLBACK FULL Reset Radio
Unable to Decode
Collect all .txt from /storages/cores/
The below 3 AP crashes belong to AP model 1810W, which is EOL.
I will further check with the team internally to see how we can proceed on this:
4
AP i4-05-AP55
Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4
1. 0xc039a954 skb_recycler_alloc + 0x84
2. 0xc037bbe0 dev_alloc_skb_poolid + 0x24
3. 0xbfbe9678 __adf_nbuf_alloc + 0x28
4. 0xbfccff54 wbuf_alloc + 0x5c
5. 0xbfd08e2c wmi_buf_alloc + 0x20
6. 0xbfcf8fb4 wmi_unified_set_qboost_param + 0x30
7. 0xbfcf904c qboost_config + 0x18
8. 0xbfcf92d8 ol_ath_net80211_newassoc + 0x288
9. 0xbfc6a65c ieee80211_mlme_recv_assoc_request + 0x418
10. 0xbfc1ed5c ieee80211_ucfg_splitmac_add_client + 0xc3c
11. 0xbfcb8d04 acfg_add_client + 0x280
12. 0xbfcb92dc acfg_handle_vap_ioctl + 0x10c
13. 0xbfca73b4 ieee80211_ioctl + 0x2720
14. 0xc038962c dev_ifsioc + 0x300
15. 0xc0389db8 dev_ioctl + 0x764
16. 0xc0372e80 sock_ioctl + 0x260
17. 0xc0104730 do_vfs_ioctl + 0x5a0
18. 0xc01047b8 sys_ioctl + 0x3c
19. 0xc000e680 ret_fast_syscall + 0x0
AP CELC-03-AP11
Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4
1. 0xc039a954 skb_recycler_alloc + 0x84
2. 0xc037bbe0 dev_alloc_skb_poolid + 0x24
3. 0xbfbe9678 __adf_nbuf_alloc + 0x28
4. 0xbfd34cac htt_h2t_dbg_stats_get + 0x6c
5. 0xbfd2c218 ol_txrx_fw_stats_get + 0x110
6. 0xbfcd94c0 ol_ath_fw_stats_timeout + 0xb0
7. 0xc006b664 run_timer_softirq + 0x168
8. 0xc0065ba0 __do_softirq + 0xf4
9. 0xc0066070 irq_exit + 0x54
10. 0xc000ef5c handle_IRQ + 0x88
11. 0xc00085ec gic_handle_irq + 0x94
12. 0xc000e340 __irq_svc + 0x40
The below two AP crashes are still being investigated and we will have an update shortly.
5
AP IT-01-AP13
Kernel Panic/ PC is at misc_open+0x48/0x198
0xffffffc0003d01c8 misc_open + 0x48
0xffffffc0003d01b0 misc_open + 0x30
AP E1-05-AP08
Kernel Panic/ PC is at misc_open+0x48/0x198
1. 0xffffffc0003d01c8 misc_open + 0x48
2. 0xffffffc0003d01b0 misc_open + 0x30
Could you please help me collect all .txt files from /storages/cores/ from the 3 APs (AP ERC-02-AP49, AP ERC-02-AP51 & AP ERC-B1-AP03) so I can share them with our team for further analysis.
Please let me know your availability so I can collect the required logs.
Please feel free to let me know if you have any further queries.
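Because this case turns on separating several distinct kernel-panic signatures, a small grouping pass over the decoded crash files helps keep the buckets straight. The following is a hedged Python sketch; the one-.txt-per-AP layout under a cores directory is an assumption for illustration, not how the decodes are actually stored.

import re
from collections import defaultdict
from pathlib import Path

# Matches lines like: "Kernel Panic/ PC is at skb_recycler_alloc+0x84/0x2e4"
PC_RE = re.compile(r"PC is at\s+([A-Za-z_][\w.]*)")

def crash_signature(text: str) -> str:
    """Return the faulting symbol from a decoded crash file, or 'undecoded'."""
    m = PC_RE.search(text)
    return m.group(1) if m else "undecoded"

def group_crashes(crash_dir: str) -> dict:
    """Group decoded crash files (assumed one .txt per AP) by their PC symbol."""
    groups = defaultdict(list)
    for path in Path(crash_dir).glob("*.txt"):
        groups[crash_signature(path.read_text(errors="ignore"))].append(path.stem)
    return groups

# Illustrative usage: print one bucket per signature
for sig, aps in group_crashes("./storages/cores").items():
    print(f"{sig}: {', '.join(sorted(aps))}")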
Jan 27th: the crash file of the 9120 has, from the crash file:
PC is at rb_next+0xc/0x6c
LR is at pick_next_task_fair+0x48/0x160.
CSCwa34136 Cisco 3802 FQI/NMI reset at rb_next+0xc
The fix is to upgrade to 8.10.171.
During the upgrade the customer was facing FN-72524.
The customer will perform the upgrade from 6 PM SGT to 7 AM SGT the next day. They plan to do reboots at 12 midnight on 24th Feb and 25th Feb.
I checked internally and found that the WiSM2 is EOL and past end of support, and hence 8.10.183 is not available for it.
The last version released for Wism2 was 8.5.171.
Please feel free to let me know if you have any further queries.
Can we get 8.10.183.0 for the WiSM2?
It's urgent.
There is a query from Biswajit.
Can we upgrade the WLC from 8.10.181.4 to 8.10.183.0, since 8.10.181.0 is deferred?
I believe the answer is Yes but wanted to double confirm from Cisco TAC.
I just got a confirmation from the DE team that 8.10.183.0 will be supported with DNAC 2.2.2.9.
Compatibility Matrix should be updated by this week.
Please feel free to let us know if you have any further queries.
You are right.
I have requested compatibility confirmation between DNAC 2.2.2.9 and WLC 8.10.183.
I will keep you updated as soon as I hear from the DE team.
Please correct: we need compatibility confirmation of DNAC 2.2.2.9 with 8.10.183.0 from Cisco.
Yes, testing was done on 26th Jan for POVWLC1 and 4.
Below is the snippet.
Conclusion :- No compatibility issues found between 2.2.2.9 and 8.10.183.0.
+Baby John
Hi Satyam, you have tested this OS 8.10.183 in the POV environment with DNAC 2.2.2.9; did you find any issues? Pls share.
I apologize for the confusion.
Yes, you are right: 8.10.183 is not in the list of supported releases for DNAC 2.2.2.9.
Since 8.10.181 has been deferred and is not available for download, please allow me a day's time so I can check with the DNAC DE team whether they can include 8.10.183 in the compatibility matrix, and also get a confirmation that they have done some testing so you can proceed with the upgrade.
Please feel free to let us know if you have any further queries.
I also just noticed the same.
The screenshot shared below is from 2.3.3.6 (by TAC), but I concur with checking the same for 2.2.2.9 compatibility with 8.10.183.0.
I have reached out to the Cisco TM on the same and expect to get a response from TAC in 30 mins.
Hi Satyam / Sharath
Do note that the current production DNAC is on ver 2.2.2.9.
However, 8.10.183 is not officially in the compatibility list with DNAC 2.2.2.9.
Cisco team, please advise if we can still proceed with the WLC firmware upgrade.
Can we say that the bug fixed in the earlier releases, i.e. 8.10.168.0 and 8.10.171.0 (both now deferred), will also be fixed in 8.10.183.0?
Urgently.
I apologize for the delay.
I was on a medical leave for the past couple of days.
I analysed the AP13 crash logs and found that the reason for the crash is a radio slot 1 reload due to:
radio reloaded: sensord cmd(0x03) failed slot 1
[cmd timeout] wifi1: 0x9184=unknown intCode:0x1184 last 0x801c=SetRadio
The crash signature matches the bug mentioned below, which has a fix in 8.10.171.
https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvx96224
Could you please let me know if you upgraded the test WLC to the escalation image 8.10.181.4 and still see issues with crashes or clients unable to connect?
Please feel free to let me know if you have any further queries and I will be glad to assist you.
timestamp : 2023-07-20T04:31:04.000+0000 || updatedby : sharcha2 || type : RESOLUTION SUMMARY || visibility : External || details : 8.10.185.3 special image shared with customer with bug fixes for AP crash issue | nan | AIR-AP2802I-S-K9 | ap3g3-ME-3800-k9w8-ubifs-8.10.185.0.img | nan | nan | 2 | nan | Software Bug | AIRCTA2 | Wireless | 8540 Series Wireless LAN Controller (AIR-CT8540) | nan | nan | nan | nan | nan | nan
694907945 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 3.0.0.458/ 2.7.0.356/ 3.0.0.448/ 3.0.0.458/ 3.0.0 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
Good day!
I went through the support bundle you provided.
Please find my analysis below:
This device is showing evidence of encountering CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>: Java minimum heap value is too low, which can lead to excessive garbage collection
Symptoms seen:
Authentication step latency on multiple policy evaluation steps.
High average request latency during times of peak load.
Authentication request latency does not recover until after the system is reloaded.
Mitigation:
Apply the universal hotfix to all nodes in the deployment and, when available, apply the patch with the fix for the installed ISE version.
The details of which are mentioned in my previous email to you as well.
Please find the log snippet below:
- ./ise-support-bundle-hsdc-pan-nac01-nusnggk-01-30-2023-23-17/support/showtech/showtech.out - The regex highlighted below detects the iseadminportal service starting with the low java Xms value:
32086: iseadminportal 39346 60.8 4.4 27227720 4319240 Sl Jan 18 7-07:12:48 \_ jsvc.exec -java-home /opt/CSCOcpm/jre -server -user iseadminportal -outfile /opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/logs/catalina.out -errfile /opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/logs/catalina.out -pidfile /var/run/catalina.pid -Xms256m -Xmx21474836480 -XX:MaxGCPauseMillis=500 -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512M -XX:MaxDirectMemorySize=2g -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/localdisk/corefiles -XX:OnOutOfMemoryError=/opt/CSCOcpm/bin/iseoomrestart.sh -XX:ErrorFile=/localdisk/corefiles/hs_err_pid%p.log -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -verbose:gc -Xloggc:/localdisk/gc/gc_app.log.20230118232240 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:GCLogFileSize=3M -XX:NumberOfGCLogFiles=31 -XX:+PrintGCApplicationStoppedTime -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Djdk.nio.maxCachedBufferSize=1000000 -Ddisable.logs.transaction=true -DisSSLEnabled=true -Ddb.configurations=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/db.properties -Dlog4j.configuration=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/log4j.xml -Doracle.jms.minSleepTime=200 -Doracle.jms.maxSleepTime=500 -DUSERTYPE_FILTER=NSFEndPointType -Dcom.cisco.cpm.provisioning.filesaverlocation=/tmp/download -Dpdp.heartbeat.cfg=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/pdp_hb_config.xml -Dise.access.map=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/access-map.xml -Drbac.superadmin.accesscheck.on=true -Dcom.cisco.cpm.version=3.0.0.458 -Dcom.cisco.cpm.udipid=SNS-3655-K9 -Dcom.cisco.cpm.udivid=A0 -Dcom.cisco.cpm.udisn=WZP23040NF3 -Dcom.cisco.cpm.udipt=UCS -Dcom.cisco.cpm.osversion=3.0.8.091 -DREPLIC_TRANSPORT_TYPE=JGroup -Dfake.pipmgr=true -DnoContextCopy=true -DlicenseSleepTime=24 -Djavax.net.ssl.keyStore=NONE -Djavax.net.ssl.keyStoreType=PKCS11 -Dise.fipsMode=false -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.terracotta.quartz.skipUpdateCheck=true -Dreplication.monitor.on=true -Dorg.owasp.esapi.SecurityConfiguration=org.owasp.esapi.reference.DefaultSecurityConfiguration -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true -Dsmack.debugEnabled=true -Dsmack.debuggerClass=org.jivesoftware.smackx.debugger.slf4j.SLF4JSmackDebugger -Djgroups.logging.log_factory_class=com.cisco.epm.logging.CustomLoggerFactoryForJgroups -DsslEnabledProtocolsMediumSecurity=TLSv1,TLSv1.1,TLSv1.2 -DsslEnabledProtocolsHighSecurity=TLSv1.1,TLSv1.2 -Doracle.jdbc.autoCommitSpecCompliant=false -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Dorg.jboss.logging.provider=log4j -DMntSessionDirectory.fetchAmount=10000 -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -Dio.netty.allocator.numDirectArenas=0 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=9999 -Dcom.sun.management.jmxremote.local.only=false -XX:OnError=/usr/bin/sudo /opt/CSCOcpm/bin/isegencore.sh %p -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.util.logging.config.file=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/conf/logging.properties -Dfile.encoding=UTF-8 -Dorg.terracotta.quartz.skipUpdateCheck=true 
-Dnet.sf.ehcache.skipUpdateCheck=true -Dorg.quartz.scheduler.skipUpdateCheck=true -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -DPROFILER_ROOT=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -javaagent:/opt/CSCOcpm/appsrv/apache-tomcat/lib/javaagent.jar -Djava.endorsed.dirs= -classpath /opt/system/java/lib/CARSJava-2.0-api.jar:/opt/system/java/lib/CARSJava-2.0-impl.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/bin/bootstrap.jar:/opt/CSCOcpm/prrt/lib/prrt-interface.jar:/opt/CSCOcpm/prrt/lib/prrt-flowapi.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/lib/log4j-rolling-appender-20131024-2017.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/lib/log4j-1.2.17.jar:/opt/TimesTen/tt1121/lib/ttjdbc6.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/bin/tomcat-juli.jar -Dcatalina.base=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -Dcatalina.home=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -Djava.io.tmpdir=/opt/CSCOcpm/temp -Dorg.apache.cxf.Logger=org.apache.cxf.common.logging.Log4jLogger org.apache.catalina.startup.Bootstrap start
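As a hedged illustration of that detection: the check below scans showtech output for the iseadminportal command line and flags an -Xms value that is a small fraction of -Xmx (note that on this command line -Xms carries a unit suffix while -Xmx is given in bytes). The one-eighth threshold and the showtech.out path are illustrative assumptions, not the bug's formal criteria.

import re

XMS_RE = re.compile(r"-Xms(\d+)([mg])")   # e.g. -Xms256m
XMX_RE = re.compile(r"-Xmx(\d+)\b")       # e.g. -Xmx21474836480 (bytes)

def to_bytes(value: int, unit: str) -> int:
    """Convert an m/g suffixed JVM size to bytes."""
    return value * (1024**2 if unit == "m" else 1024**3)

def flag_low_xms(showtech_text: str) -> None:
    """Flag iseadminportal lines where -Xms is far below -Xmx (the CSCwd45843 condition)."""
    for line in showtech_text.splitlines():
        if "iseadminportal" not in line:
            continue
        xms, xmx = XMS_RE.search(line), XMX_RE.search(line)
        if xms and xmx:
            xms_b = to_bytes(int(xms.group(1)), xms.group(2))
            xmx_b = int(xmx.group(1))
            if xms_b * 8 < xmx_b:  # illustrative "much lower" threshold
                print(f"Low -Xms detected: {xms_b} bytes vs -Xmx {xmx_b} bytes")

with open("showtech.out", errors="ignore") as fh:
    flag_low_xms(fh.read())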
This is a confirmation that you are still hitting the bug CSCwd45843<https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fbst.cloudapps.cisco.com%2Fbugsearch%2Fbug%2FCSCwd45843&data=05%7C01%7Cgeorgekingsley.paulsundararaj%40ncs.com.sg%7C9e66dfb8dca349ae8ab408daf9b8acd7%7Cca90d8f589634b6ebca99ac468bcc7a8%7C1%7C0%7C638096868815388131%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=c41EUSmTYxeI5W4su6ayOkPnH0BKumDN6gs68zO3GgY%3D&reserved=0>.
The hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843<https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsoftware.cisco.com%2Fdownload%2Fhome%2F283801620%2Ftype%2F283802505%2Frelease%2FHP-CSCwd45843&data=05%7C01%7Cgeorgekingsley.paulsundararaj%40ncs.com.sg%7C9e66dfb8dca349ae8ab408daf9b8acd7%7Cca90d8f589634b6ebca99ac468bcc7a8%7C1%7C0%7C638096868815388131%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=C3zMOOmCfs%2BFFLKXys0s1mPQmal8tOhej1odqqEnOww%3D&reserved=0>.
Please do let me know if you have any further queries or concerns on the same.
Update 25th Jan:
Please find my analysis below:
From the tech top output we see a high CPU is consumed by the jsvc process:
hsdc-pan-nac01/admin# tech top
Invoking tech top. Press Control-C to interrupt.
top - 12:34:09 up 450 days, 15:44, 2 users, load average: 15.65, 16.77, 17.33
Tasks: 628 total, 1 running, 627 sleeping, 0 stopped, 0 zombie
%Cpu(s): 80.9 us, 1.1 sy, 0.0 ni, 17.6 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13726596 free, 38382248 used, 45327204 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42646812 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80839 iseadmi+ 20 0 30.4g 9.3g 40464 S 1794 10.0 296801:20 jsvc
60426 iseelas+ 20 0 18.0g 16.3g 472868 S 100.0 17.5 71810:49 java
115925 iserabb+ 20 0 2884300 1.0g 2692 S 52.9 1.1 2961:51 beam.smp
3896 root 20 0 164760 2688 1576 R 11.8 0.0 0:00.03 top
1 root 20 0 195436 6776 3108 S 0.0 0.0 402:34.45 systemd
2 root 20 0 0 0 0 S 0.0 0.0 3:30.34 kthreadd
4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0+
6 root 20 0 0 0 0 S 0.0 0.0 106:27.39 ksoftirqd+
7 root rt 0 0 0 0 S 0.0 0.0 0:50.52 migration+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 1720:11 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-d+
11 root rt 0 0 0 0 S 0.0 0.0 2:12.02 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 2:15.41 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:51.29 migration+
14 root 20 0 0 0 0 S 0.0 0.0 87:25.21 ksoftirqd+
On further checking the sub-process ID, we see the following PIDs are consuming the CPU threads: (Omitted some lines for brevity)
sh-4.2# top -H -p 80839
top - 12:52:56 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1707 total, 18 running, 1689 sleeping, 0 stopped, 0 zombie
%Cpu(s): 64.8 us, 1.6 sy, 0.0 ni, 33.3 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13678112 free, 38274636 used, 45483300 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42754628 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80854 iseadmi+ 20 0 30.4g 9.3g 40816 R 72.2 10.0 13611:41 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:51 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:17 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:29 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80850 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:25 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:47 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13613:02 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:11 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:31 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:00 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:45 jsvc
top - 12:52:59 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1710 total, 1 running, 1709 sleeping, 0 stopped, 0 zombie
%Cpu(s): 69.7 us, 0.6 sy, 0.0 ni, 29.4 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13669704 free, 38280428 used, 45485916 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42749708 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80846 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:54 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13611:33 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:11 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:20 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:31 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:14 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:15 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:49 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:04 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:02 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:14 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:29 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:33 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.2 10.0 13611:44 jsvc
top - 12:53:02 up 450 days, 16:03, 6 users, load average: 18.13, 16.74, 16.72
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 67.5 us, 1.1 sy, 0.0 ni, 31.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 97436048 total, 13651640 free, 38298472 used, 45485936 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42730576 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80850 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.9 10.0 13612:30 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:56 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:22 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:33 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:52 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:16 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:13 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:56 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13613:17 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:32 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:35 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13613:06 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:46 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:36 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:04 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13612:50 jsvc
top - 12:53:05 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 70.4 us, 0.6 sy, 0.0 ni, 28.8 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13647332 free, 38299860 used, 45488856 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42729004 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:54 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:18 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:38 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:16 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:34 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:59 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:25 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:36 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:20 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:09 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:48 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:07 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:59 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:19 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:38 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.2 10.0 13612:19 jsvc
top - 12:53:08 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1712 total, 18 running, 1694 sleeping, 0 stopped, 0 zombie
%Cpu(s): 66.5 us, 1.5 sy, 0.0 ni, 31.6 id, 0.1 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13645540 free, 38302700 used, 45487808 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42726808 avail Mem
From catalina.out we observed the following:
We see that the following garbage collector threads are consuming high CPU:
"main" #1 prio=5 os_prio=0 tid=0x0000000002313000 nid=0x13bc7 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 tid=0x0000000002642000 nid=0x13be2 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x0000000002325800 nid=0x13bce runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002327800 nid=0x13bcf runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002329000 nid=0x13bd0 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x000000000232b000 nid=0x13bd1 runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x000000000232d000 nid=0x13bd2 runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x000000000232f000 nid=0x13bd3 runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x0000000002330800 nid=0x13bd4 runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x0000000002332800 nid=0x13bd5 runnable
"GC task thread#8 (ParallelGC)" os_prio=0 tid=0x0000000002334800 nid=0x13bd6 runnable
"GC task thread#9 (ParallelGC)" os_prio=0 tid=0x0000000002336000 nid=0x13bd7 runnable
"GC task thread#10 (ParallelGC)" os_prio=0 tid=0x0000000002338000 nid=0x13bd8 runnable
"GC task thread#11 (ParallelGC)" os_prio=0 tid=0x000000000233a000 nid=0x13bd9 runnable
"GC task thread#12 (ParallelGC)" os_prio=0 tid=0x000000000233b800 nid=0x13bda runnable
"GC task thread#13 (ParallelGC)" os_prio=0 tid=0x000000000233d800 nid=0x13bdb runnable
"GC task thread#14 (ParallelGC)" os_prio=0 tid=0x000000000233f800 nid=0x13bdc runnable
"GC task thread#15 (ParallelGC)" os_prio=0 tid=0x0000000002341000 nid=0x13bdd runnable
"GC task thread#16 (ParallelGC)" os_prio=0 tid=0x0000000002343000 nid=0x13bde runnable
"GC task thread#17 (ParallelGC)" os_prio=0 tid=0x0000000002345000 nid=0x13bdf runnable
"VM Periodic Task Thread" os_prio=0 tid=0x0000000002ac3800 nid=0x13cd6 waiting on condition
JNI global references: 3603
Heap
PSYoungGen total 79872K, used 288K [0x0000000635d80000, 0x000000063b280000, 0x00000007e0800000)
eden space 72704K, 0% used [0x0000000635d80000,0x0000000635dc8230,0x000000063a480000)
from space 7168K, 0% used [0x000000063ab80000,0x000000063ab80000,0x000000063b280000)
to space 7168K, 0% used [0x000000063a480000,0x000000063a480000,0x000000063ab80000)
ParOldGen total 2978304K, used 2963138K [0x00000002e0800000, 0x0000000396480000, 0x0000000635d80000)
object space 2978304K, 99% used [0x00000002e0800000,0x00000003955b0b70,0x0000000396480000)
Metaspace used 335348K, capacity 357063K, committed 358016K, reserved 1370112K
class space used 33786K, capacity 37225K, committed 37504K, reserved 1048576K
Garbage collector usage in RHEL: As long as an object is being referenced, the JVM considers it alive. Once an object is no longer referenced and therefore is not reachable by the application code, the garbage collector removes it and reclaims the unused memory. The reason GC is using more CPU cycles is that it is aggressively reclaiming a huge number of objects that are no longer referenced. In the heap output above, ParOldGen is 2,963,138 K used out of 2,978,304 K total (about 99.5% full), so the collector runs almost continuously trying to free space.
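Since the command line above includes -XX:+PrintGCApplicationStoppedTime and logs to /localdisk/gc/, the total time application threads spent paused can be summed straight from a GC log. A minimal Python sketch, assuming the standard HotSpot "Total time for which application threads were stopped" line format; the file name is taken from the command line above but will differ per restart.

import re

# HotSpot -XX:+PrintGCApplicationStoppedTime emits lines like:
# "... Total time for which application threads were stopped: 0.0123456 seconds, ..."
STOPPED_RE = re.compile(r"Total time for which application threads were stopped: ([\d.]+) seconds")

def total_stopped_seconds(gc_log_path: str) -> float:
    """Sum all application-stopped pauses recorded in one GC log file."""
    total = 0.0
    with open(gc_log_path, errors="ignore") as fh:
        for line in fh:
            m = STOPPED_RE.search(line)
            if m:
                total += float(m.group(1))
    return total

# Illustrative usage against the log named in the command line above
print(f"Threads stopped for {total_stopped_seconds('gc_app.log.20230118232240'):.1f} s total")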
From the catalina.out file we could also see the VCS threads being blocked:
"VCSPersistEventHandler-1-thread-1" #1920 prio=5 os_prio=0 tid=0x00007f42c5a27800 nid=0x185d7 waiting for monitor entry [0x00007f4224182000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:287)
- waiting to lock <0x00000002ef9a1a28> (a org.elasticsearch.action.bulk.BulkProcessor)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:272)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
at com.cisco.ise.vcs.crud.ESBulkProcessor.processRequest(ESBulkProcessor.java:122)
- locked <0x00000002ef51a570> (a com.cisco.ise.vcs.crud.ESBulkProcessor)
at com.cisco.ise.vcs.crud.VCSCrudProcessor.upsertBulkData(VCSCrudProcessor.java:98)
- locked <0x0000000308461c08> (a com.cisco.ise.vcs.crud.VCSCrudProcessor)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.updateContextRepository(VCSPersistEventHandler.java:147)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.handleEvent(VCSPersistEventHandler.java:113)
at com.cisco.ise.vcs.event.VCSEventHandler.run(VCSEventHandler.java:130)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Because of the above observations, we see that you are encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>. There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
But we would suggest installing patch 6 first and then this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
Good day!
Thanks for the files you attached so far and I was able to come to a root cause of the issue.
Please find my analysis below:
From the tech top output we see a high CPU is consumed by the jsvc process:
hsdc-pan-nac01/admin# tech top
Invoking tech top.
Press Control-C to interrupt.
top - 12:34:09 up 450 days, 15:44, 2 users, load average: 15.65, 16.77, 17.33
Tasks: 628 total, 1 running, 627 sleeping, 0 stopped, 0 zombie
%Cpu(s): 80.9 us, 1.1 sy, 0.0 ni, 17.6 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13726596 free, 38382248 used, 45327204 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used.
42646812 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80839 iseadmi+ 20 0 30.4g 9.3g 40464 S 1794 10.0 296801:20 jsvc
60426 iseelas+ 20 0 18.0g 16.3g 472868 S 100.0 17.5 71810:49 java
115925 iserabb+ 20 0 2884300 1.0g 2692 S 52.9 1.1 2961:51 beam.smp
3896 root 20 0 164760 2688 1576 R 11.8 0.0 0:00.03 top
1 root 20 0 195436 6776 3108 S 0.0 0.0 402:34.45 systemd
2 root 20 0 0 0 0 S 0.0 0.0 3:30.34 kthreadd
4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0+
6 root 20 0 0 0 0 S 0.0 0.0 106:27.39 ksoftirqd+
7 root rt 0 0 0 0 S 0.0 0.0 0:50.52 migration+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 1720:11 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-d+
11 root rt 0 0 0 0 S 0.0 0.0 2:12.02 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 2:15.41 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:51.29 migration+
14 root 20 0 0 0 0 S 0.0 0.0 87:25.21 ksoftirqd+
On further checking the sub process ID we see the following PIDs are consuming the CPU threads: (Omitted some lined for brevity)
sh-4.2# top -H -p 80839
top - 12:52:56 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1707 total, 18 running, 1689 sleeping, 0 stopped, 0 zombie
%Cpu(s): 64.8 us, 1.6 sy, 0.0 ni, 33.3 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13678112 free, 38274636 used, 45483300 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used.
42754628 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80854 iseadmi+ 20 0 30.4g 9.3g 40816 R 72.2 10.0 13611:41 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:51 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:17 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:29 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80850 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:25 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:47 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13613:02 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:11 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:31 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:00 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:45 jsvc
top - 12:52:59 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1710 total, 1 running, 1709 sleeping, 0 stopped, 0 zombie
%Cpu(s): 69.7 us, 0.6 sy, 0.0 ni, 29.4 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13669704 free, 38280428 used, 45485916 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used.
42749708 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80846 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:54 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13611:33 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:11 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:20 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:31 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:14 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:15 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:49 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:04 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:02 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:14 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:29 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:33 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.2 10.0 13611:44 jsvc
top - 12:53:02 up 450 days, 16:03, 6 users, load average: 18.13, 16.74, 16.72
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 67.5 us, 1.1 sy, 0.0 ni, 31.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 97436048 total, 13651640 free, 38298472 used, 45485936 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used.
42730576 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80850 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.9 10.0 13612:30 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:56 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:22 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:33 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:52 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:16 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:13 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:56 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13613:17 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:32 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:35 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13613:06 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:46 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:36 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:04 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13612:50 jsvc
top - 12:53:05 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 70.4 us, 0.6 sy, 0.0 ni, 28.8 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13647332 free, 38299860 used, 45488856 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used.
42729004 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:54 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:18 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:38 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:16 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:34 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:59 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:25 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:36 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:20 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:09 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:48 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:07 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:59 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:19 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:38 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.2 10.0 13612:19 jsvc
top - 12:53:08 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1712 total, 18 running, 1694 sleeping, 0 stopped, 0 zombie
%Cpu(s): 66.5 us, 1.5 sy, 0.0 ni, 31.6 id, 0.1 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13645540 free, 38302700 used, 45487808 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42726808 avail Mem
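(Side note on method: top -H reports thread IDs in decimal, while the Java thread dump that follows labels threads with hexadecimal nid values, so a quick base conversion ties the two views together. A minimal shell sketch, using thread IDs taken from the output above:

    # Convert a hot thread ID from 'top -H' to hex to match a nid= value in the thread dump
    printf '%x\n' 80846   # -> 13bce, i.e. "GC task thread#0 (ParallelGC)" nid=0x13bce
    printf '%x\n' 80863   # -> 13bdf, i.e. "GC task thread#17 (ParallelGC)" nid=0x13bdf

The hot thread range 80846-80863 maps exactly onto the 18 ParallelGC worker threads in the dump below.)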
From catalina.out we observed the following:
We see that the following garbage collector threads are consuming high CPU:
"main" #1 prio=5 os_prio=0 tid=0x0000000002313000 nid=0x13bc7 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 tid=0x0000000002642000 nid=0x13be2 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x0000000002325800 nid=0x13bce runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002327800 nid=0x13bcf runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002329000 nid=0x13bd0 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x000000000232b000 nid=0x13bd1 runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x000000000232d000 nid=0x13bd2 runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x000000000232f000 nid=0x13bd3 runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x0000000002330800 nid=0x13bd4 runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x0000000002332800 nid=0x13bd5 runnable
"GC task thread#8 (ParallelGC)" os_prio=0 tid=0x0000000002334800 nid=0x13bd6 runnable
"GC task thread#9 (ParallelGC)" os_prio=0 tid=0x0000000002336000 nid=0x13bd7 runnable
"GC task thread#10 (ParallelGC)" os_prio=0 tid=0x0000000002338000 nid=0x13bd8 runnable
"GC task thread#11 (ParallelGC)" os_prio=0 tid=0x000000000233a000 nid=0x13bd9 runnable
"GC task thread#12 (ParallelGC)" os_prio=0 tid=0x000000000233b800 nid=0x13bda runnable
"GC task thread#13 (ParallelGC)" os_prio=0 tid=0x000000000233d800 nid=0x13bdb runnable
"GC task thread#14 (ParallelGC)" os_prio=0 tid=0x000000000233f800 nid=0x13bdc runnable
"GC task thread#15 (ParallelGC)" os_prio=0 tid=0x0000000002341000 nid=0x13bdd runnable
"GC task thread#16 (ParallelGC)" os_prio=0 tid=0x0000000002343000 nid=0x13bde runnable
"GC task thread#17 (ParallelGC)" os_prio=0 tid=0x0000000002345000 nid=0x13bdf runnable
"VM Periodic Task Thread" os_prio=0 tid=0x0000000002ac3800 nid=0x13cd6 waiting on condition
JNI global references: 3603
Heap
PSYoungGen total 79872K, used 288K [0x0000000635d80000, 0x000000063b280000, 0x00000007e0800000)
eden space 72704K, 0% used [0x0000000635d80000,0x0000000635dc8230,0x000000063a480000)
from space 7168K, 0% used [0x000000063ab80000,0x000000063ab80000,0x000000063b280000)
to space 7168K, 0% used [0x000000063a480000,0x000000063a480000,0x000000063ab80000)
ParOldGen total 2978304K, used 2963138K [0x00000002e0800000, 0x0000000396480000, 0x0000000635d80000)
object space 2978304K, 99% used [0x00000002e0800000,0x00000003955b0b70,0x0000000396480000)
Metaspace used 335348K, capacity 357063K, committed 358016K, reserved 1370112K
class space used 33786K, capacity 37225K, committed 37504K, reserved 1048576K
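(For reference, a heap summary like the one above can also be captured on demand; a sketch, assuming a JDK 8 toolchain on the node and using the jsvc PID from the earlier tech top output — the tool may need to run as the same user as the JVM:

    # Print heap configuration and per-generation usage for the admin-portal JVM
    jmap -heap 80839

The key reading here is that ParOldGen is 99% used while PSYoungGen is almost empty, which is consistent with back-to-back full collections.)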
Garbage collector usage in RHEL: as long as an object is being referenced, the JVM considers it alive.
Once an object is no longer referenced, and is therefore not reachable by the application code, the garbage collector removes it and reclaims the unused memory.
The reason the GC is using more CPU cycles here is that it is aggressively reclaiming a huge number of objects that are no longer referenced.
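To make the link to the bug concrete: the admin portal JVM is started with a very small initial heap relative to its maximum (note the -Xms256m / -Xmx21474836480 pair in the jsvc command line quoted later in these notes), which forces the heap to grow under load and the collector to run aggressively. A hedged illustration only — the values below are for explanation, and the real correction comes from the hotfix rather than from hand-editing flags:

    # As observed: ~256 MB initial heap against a ~20 GB maximum
    jsvc ... -Xms256m -Xmx21474836480 ...

    # What the fix effectively changes: a larger initial heap, so the JVM is not
    # stuck in a constant grow-and-collect cycle (the exact value is set by the hotfix)
    jsvc ... -Xms<larger-value> -Xmx21474836480 ...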
From the catalina.out file we could also see the VCS threads being blocked:
"VCSPersistEventHandler-1-thread-1" #1920 prio=5 os_prio=0 tid=0x00007f42c5a27800 nid=0x185d7 waiting for monitor entry [0x00007f4224182000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:287)
- waiting to lock <0x00000002ef9a1a28> (a org.elasticsearch.action.bulk.BulkProcessor)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:272)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
at com.cisco.ise.vcs.crud.ESBulkProcessor.processRequest(ESBulkProcessor.java:122)
- locked <0x00000002ef51a570> (a com.cisco.ise.vcs.crud.ESBulkProcessor)
at com.cisco.ise.vcs.crud.VCSCrudProcessor.upsertBulkData(VCSCrudProcessor.java:98)
- locked <0x0000000308461c08> (a com.cisco.ise.vcs.crud.VCSCrudProcessor)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.updateContextRepository(VCSPersistEventHandler.java:147)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.handleEvent(VCSPersistEventHandler.java:113)
at com.cisco.ise.vcs.event.VCSEventHandler.run(VCSEventHandler.java:130)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
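(As a quick way to gauge how widespread this is, the same dump can be scanned for blocked threads directly; a minimal sketch, with the path adjusted to wherever the support bundle places the file:

    # Count threads blocked on a monitor in the captured thread dump
    grep -c 'java.lang.Thread.State: BLOCKED' catalina.out
)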
Based on the above observations, we see that you are encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>.
There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP (hot patch) that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
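For reference, patches and hot patches are installed from the ISE admin CLI; the sketch below uses placeholder bundle and repository names — the exact file names come from the download page, and each bundle's readme should be treated as authoritative:

    # Install patch 6 first (file and repository names are placeholders)
    patch install ise-patchbundle-3.0.x-Patch6-<build>.SPA.x86_64.tar.gz <repository>

    # Then apply the CSCwd45843 hot patch bundle, per its readme
    application install <HP-CSCwd45843-bundle>.tar.gz <repository>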
Do let me know if you have any further queries or concerns.
Technology: Identity Services Engine (ISE) - 3.0
Subtechnology: ISE Performance (High CPU / Memory / IO / GUI Slowness)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: CISE
Software Version: N/A
Router/Node Name: N/A
Problem Details: ISE - High Load Average for ISE PAN node
ISE - High Load Average for ISE PAN node
timestamp : 2023-02-27T01:52:17.000+0000 || updatedby : rupespat || type : RESOLUTION SUMMARY || visibility : External || details : observations we see that CU is encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>;. There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
======
No response from CU since 9th February 2023; closing the case after 3 strikes and manager follow-up | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: SNS-3655-K9/ SNS-3695-K9 Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Good day!
I went through the support bundle you provided.
Please find my analysis below:
This device is showing evidence of encountering CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>: Java minimum heap value is too low, which can lead to excessive garbage collection.
Symptoms seen:
Authentication step latency on multiple policy evaluation steps.
High average request latency during times of peak load.
Authentication request latency does not recover until after the system is reloaded.
Mitigation:
Apply the universal hotfix to all nodes in the deployment and, when available, apply the patch with the fix for the installed ISE version.
The details of which are mentioned in my previous email to you as well.
Please find the log snippet below:
- ./ise-support-bundle-hsdc-pan-nac01-nusnggk-01-30-2023-23-17/support/showtech/showtech.out - The regex highlighted below detects the iseadminportal service starting with the low Java Xms value:
32086: iseadminportal 39346 60.8 4.4 27227720 4319240 Sl Jan 18 7-07:12:48 \_ jsvc.exec -java-home /opt/CSCOcpm/jre -server -user iseadminportal -outfile /opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/logs/catalina.out -errfile /opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/logs/catalina.out -pidfile /var/run/catalina.pid -Xms256m -Xmx21474836480 -XX:MaxGCPauseMillis=500 -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=512M -XX:MaxDirectMemorySize=2g -XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/localdisk/corefiles -XX:OnOutOfMemoryError=/opt/CSCOcpm/bin/iseoomrestart.sh -XX:ErrorFile=/localdisk/corefiles/hs_err_pid%p.log -Dorg.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH=true -verbose:gc -Xloggc:/localdisk/gc/gc_app.log.20230118232240 -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:GCLogFileSize=3M -XX:NumberOfGCLogFiles=31 -XX:+PrintGCApplicationStoppedTime -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Djdk.nio.maxCachedBufferSize=1000000 -Ddisable.logs.transaction=true -DisSSLEnabled=true -Ddb.configurations=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/db.properties -Dlog4j.configuration=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/log4j.xml -Doracle.jms.minSleepTime=200 -Doracle.jms.maxSleepTime=500 -DUSERTYPE_FILTER=NSFEndPointType -Dcom.cisco.cpm.provisioning.filesaverlocation=/tmp/download -Dpdp.heartbeat.cfg=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/pdp_hb_config.xml -Dise.access.map=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/config/access-map.xml -Drbac.superadmin.accesscheck.on=true -Dcom.cisco.cpm.version=3.0.0.458 -Dcom.cisco.cpm.udipid=SNS-3655-K9 -Dcom.cisco.cpm.udivid=A0 -Dcom.cisco.cpm.udisn=WZP23040NF3 -Dcom.cisco.cpm.udipt=UCS -Dcom.cisco.cpm.osversion=3.0.8.091 -DREPLIC_TRANSPORT_TYPE=JGroup -Dfake.pipmgr=true -DnoContextCopy=true -DlicenseSleepTime=24 -Djavax.net.ssl.keyStore=NONE -Djavax.net.ssl.keyStoreType=PKCS11 -Dise.fipsMode=false -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.terracotta.quartz.skipUpdateCheck=true -Dreplication.monitor.on=true -Dorg.owasp.esapi.SecurityConfiguration=org.owasp.esapi.reference.DefaultSecurityConfiguration -Dcom.sun.xml.bind.v2.bytecode.ClassTailor.noOptimize=true -Dsmack.debugEnabled=true -Dsmack.debuggerClass=org.jivesoftware.smackx.debugger.slf4j.SLF4JSmackDebugger -Djgroups.logging.log_factory_class=com.cisco.epm.logging.CustomLoggerFactoryForJgroups -DsslEnabledProtocolsMediumSecurity=TLSv1,TLSv1.1,TLSv1.2 -DsslEnabledProtocolsHighSecurity=TLSv1.1,TLSv1.2 -Doracle.jdbc.autoCommitSpecCompliant=false -Dcom.sun.jndi.ldap.object.disableEndpointIdentification=true -Dorg.jboss.logging.provider=log4j -DMntSessionDirectory.fetchAmount=10000 -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Dlog4j.skipJansi=true -Dio.netty.allocator.numDirectArenas=0 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=9999 -Dcom.sun.management.jmxremote.local.only=false -XX:OnError=/usr/bin/sudo /opt/CSCOcpm/bin/isegencore.sh %p -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.util.logging.config.file=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/conf/logging.properties -Dfile.encoding=UTF-8 -Dorg.terracotta.quartz.skipUpdateCheck=true 
-Dnet.sf.ehcache.skipUpdateCheck=true -Dorg.quartz.scheduler.skipUpdateCheck=true -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2 -DPROFILER_ROOT=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -javaagent:/opt/CSCOcpm/appsrv/apache-tomcat/lib/javaagent.jar -Djava.endorsed.dirs= -classpath /opt/system/java/lib/CARSJava-2.0-api.jar:/opt/system/java/lib/CARSJava-2.0-impl.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/bin/bootstrap.jar:/opt/CSCOcpm/prrt/lib/prrt-interface.jar:/opt/CSCOcpm/prrt/lib/prrt-flowapi.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/lib/log4j-rolling-appender-20131024-2017.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/lib/log4j-1.2.17.jar:/opt/TimesTen/tt1121/lib/ttjdbc6.jar:/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30/bin/tomcat-juli.jar -Dcatalina.base=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -Dcatalina.home=/opt/CSCOcpm/appsrv/apache-tomcat-9.0.30 -Djava.io.tmpdir=/opt/CSCOcpm/temp -Dorg.apache.cxf.Logger=org.apache.cxf.common.logging.Log4jLogger org.apache.catalina.startup.Bootstrap start
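(For anyone who wants to reproduce this check on their own bundle, the same condition can be flagged with a simple search of the showtech output; a minimal sketch, path as in the bundle above:

    # Extract the initial-heap setting from the captured process table
    grep -oE -- '-Xms[0-9]+[mMgG]' support/showtech/showtech.out

A hit such as -Xms256m alongside a multi-gigabyte -Xmx is the signature described by the bug.)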
This is a confirmation that you are still hitting the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>.
The hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
Please do let me know if you have any further queries or concerns on the same.
Update 25th Jan:
Please find my analysis below:
From the tech top output we see that high CPU is being consumed by the jsvc process:
hsdc-pan-nac01/admin# tech top
Invoking tech top. Press Control-C to interrupt.
top - 12:34:09 up 450 days, 15:44, 2 users, load average: 15.65, 16.77, 17.33
Tasks: 628 total, 1 running, 627 sleeping, 0 stopped, 0 zombie
%Cpu(s): 80.9 us, 1.1 sy, 0.0 ni, 17.6 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13726596 free, 38382248 used, 45327204 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42646812 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80839 iseadmi+ 20 0 30.4g 9.3g 40464 S 1794 10.0 296801:20 jsvc
60426 iseelas+ 20 0 18.0g 16.3g 472868 S 100.0 17.5 71810:49 java
115925 iserabb+ 20 0 2884300 1.0g 2692 S 52.9 1.1 2961:51 beam.smp
3896 root 20 0 164760 2688 1576 R 11.8 0.0 0:00.03 top
1 root 20 0 195436 6776 3108 S 0.0 0.0 402:34.45 systemd
2 root 20 0 0 0 0 S 0.0 0.0 3:30.34 kthreadd
4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0+
6 root 20 0 0 0 0 S 0.0 0.0 106:27.39 ksoftirqd+
7 root rt 0 0 0 0 S 0.0 0.0 0:50.52 migration+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 1720:11 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-d+
11 root rt 0 0 0 0 S 0.0 0.0 2:12.02 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 2:15.41 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:51.29 migration+
14 root 20 0 0 0 0 S 0.0 0.0 87:25.21 ksoftirqd+
On further checking the sub-process IDs, we see the following thread PIDs consuming the CPU (some lines omitted for brevity):
sh-4.2# top -H -p 80839
top - 12:52:56 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1707 total, 18 running, 1689 sleeping, 0 stopped, 0 zombie
%Cpu(s): 64.8 us, 1.6 sy, 0.0 ni, 33.3 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13678112 free, 38274636 used, 45483300 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42754628 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80854 iseadmi+ 20 0 30.4g 9.3g 40816 R 72.2 10.0 13611:41 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:51 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:17 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:29 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80850 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:25 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:47 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13613:02 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:11 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:31 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:00 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:45 jsvc
top - 12:52:59 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1710 total, 1 running, 1709 sleeping, 0 stopped, 0 zombie
%Cpu(s): 69.7 us, 0.6 sy, 0.0 ni, 29.4 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13669704 free, 38280428 used, 45485916 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42749708 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80846 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:54 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13611:33 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:11 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:20 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:31 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:14 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:15 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:49 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:04 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:02 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:14 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:29 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:33 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.2 10.0 13611:44 jsvc
top - 12:53:02 up 450 days, 16:03, 6 users, load average: 18.13, 16.74, 16.72
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 67.5 us, 1.1 sy, 0.0 ni, 31.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 97436048 total, 13651640 free, 38298472 used, 45485936 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42730576 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80850 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.9 10.0 13612:30 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:56 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:22 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:33 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:52 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:16 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:13 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:56 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13613:17 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:32 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:35 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13613:06 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:46 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:36 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:04 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13612:50 jsvc
top - 12:53:05 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 70.4 us, 0.6 sy, 0.0 ni, 28.8 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13647332 free, 38299860 used, 45488856 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42729004 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:54 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:18 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:38 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:16 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:34 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:59 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:25 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:36 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:20 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:09 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:48 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:07 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:59 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:19 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:38 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.2 10.0 13612:19 jsvc
top - 12:53:08 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1712 total, 18 running, 1694 sleeping, 0 stopped, 0 zombie
%Cpu(s): 66.5 us, 1.5 sy, 0.0 ni, 31.6 id, 0.1 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13645540 free, 38302700 used, 45487808 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42726808 avail Mem
From catalina.out we observed the following:
We see that the following garbage collector threads are consuming high CPU:
"main" #1 prio=5 os_prio=0 tid=0x0000000002313000 nid=0x13bc7 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 tid=0x0000000002642000 nid=0x13be2 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x0000000002325800 nid=0x13bce runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002327800 nid=0x13bcf runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002329000 nid=0x13bd0 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x000000000232b000 nid=0x13bd1 runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x000000000232d000 nid=0x13bd2 runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x000000000232f000 nid=0x13bd3 runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x0000000002330800 nid=0x13bd4 runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x0000000002332800 nid=0x13bd5 runnable
"GC task thread#8 (ParallelGC)" os_prio=0 tid=0x0000000002334800 nid=0x13bd6 runnable
"GC task thread#9 (ParallelGC)" os_prio=0 tid=0x0000000002336000 nid=0x13bd7 runnable
"GC task thread#10 (ParallelGC)" os_prio=0 tid=0x0000000002338000 nid=0x13bd8 runnable
"GC task thread#11 (ParallelGC)" os_prio=0 tid=0x000000000233a000 nid=0x13bd9 runnable
"GC task thread#12 (ParallelGC)" os_prio=0 tid=0x000000000233b800 nid=0x13bda runnable
"GC task thread#13 (ParallelGC)" os_prio=0 tid=0x000000000233d800 nid=0x13bdb runnable
"GC task thread#14 (ParallelGC)" os_prio=0 tid=0x000000000233f800 nid=0x13bdc runnable
"GC task thread#15 (ParallelGC)" os_prio=0 tid=0x0000000002341000 nid=0x13bdd runnable
"GC task thread#16 (ParallelGC)" os_prio=0 tid=0x0000000002343000 nid=0x13bde runnable
"GC task thread#17 (ParallelGC)" os_prio=0 tid=0x0000000002345000 nid=0x13bdf runnable
"VM Periodic Task Thread" os_prio=0 tid=0x0000000002ac3800 nid=0x13cd6 waiting on condition
JNI global references: 3603
Heap
PSYoungGen total 79872K, used 288K [0x0000000635d80000, 0x000000063b280000, 0x00000007e0800000)
eden space 72704K, 0% used [0x0000000635d80000,0x0000000635dc8230,0x000000063a480000)
from space 7168K, 0% used [0x000000063ab80000,0x000000063ab80000,0x000000063b280000)
to space 7168K, 0% used [0x000000063a480000,0x000000063a480000,0x000000063ab80000)
ParOldGen total 2978304K, used 2963138K [0x00000002e0800000, 0x0000000396480000, 0x0000000635d80000)
object space 2978304K, 99% used [0x00000002e0800000,0x00000003955b0b70,0x0000000396480000)
Metaspace used 335348K, capacity 357063K, committed 358016K, reserved 1370112K
class space used 33786K, capacity 37225K, committed 37504K, reserved 1048576K
Garbage collector usage in RHEL: as long as an object is being referenced, the JVM considers it alive. Once an object is no longer referenced, and is therefore not reachable by the application code, the garbage collector removes it and reclaims the unused memory. The reason the GC is using more CPU cycles here is that it is aggressively reclaiming a huge number of objects that are no longer referenced.
From the catalina.out file we could also see the VCS threads being blocked:
"VCSPersistEventHandler-1-thread-1" #1920 prio=5 os_prio=0 tid=0x00007f42c5a27800 nid=0x185d7 waiting for monitor entry [0x00007f4224182000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:287)
- waiting to lock <0x00000002ef9a1a28> (a org.elasticsearch.action.bulk.BulkProcessor)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:272)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
at com.cisco.ise.vcs.crud.ESBulkProcessor.processRequest(ESBulkProcessor.java:122)
- locked <0x00000002ef51a570> (a com.cisco.ise.vcs.crud.ESBulkProcessor)
at com.cisco.ise.vcs.crud.VCSCrudProcessor.upsertBulkData(VCSCrudProcessor.java:98)
- locked <0x0000000308461c08> (a com.cisco.ise.vcs.crud.VCSCrudProcessor)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.updateContextRepository(VCSPersistEventHandler.java:147)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.handleEvent(VCSPersistEventHandler.java:113)
at com.cisco.ise.vcs.event.VCSEventHandler.run(VCSEventHandler.java:130)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Based on the above observations, we see that you are encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>. There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
Good day!
Thanks for the files you attached so far; with them I was able to come to a root cause of the issue.
Please find my analysis below:
From the tech top output we see that high CPU is being consumed by the jsvc process:
hsdc-pan-nac01/admin# tech top
Invoking tech top. Press Control-C to interrupt.
top - 12:34:09 up 450 days, 15:44, 2 users, load average: 15.65, 16.77, 17.33
Tasks: 628 total, 1 running, 627 sleeping, 0 stopped, 0 zombie
%Cpu(s): 80.9 us, 1.1 sy, 0.0 ni, 17.6 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13726596 free, 38382248 used, 45327204 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42646812 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80839 iseadmi+ 20 0 30.4g 9.3g 40464 S 1794 10.0 296801:20 jsvc
60426 iseelas+ 20 0 18.0g 16.3g 472868 S 100.0 17.5 71810:49 java
115925 iserabb+ 20 0 2884300 1.0g 2692 S 52.9 1.1 2961:51 beam.smp
3896 root 20 0 164760 2688 1576 R 11.8 0.0 0:00.03 top
1 root 20 0 195436 6776 3108 S 0.0 0.0 402:34.45 systemd
2 root 20 0 0 0 0 S 0.0 0.0 3:30.34 kthreadd
4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0+
6 root 20 0 0 0 0 S 0.0 0.0 106:27.39 ksoftirqd+
7 root rt 0 0 0 0 S 0.0 0.0 0:50.52 migration+
8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.0 0.0 1720:11 rcu_sched
10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-d+
11 root rt 0 0 0 0 S 0.0 0.0 2:12.02 watchdog/0
12 root rt 0 0 0 0 S 0.0 0.0 2:15.41 watchdog/1
13 root rt 0 0 0 0 S 0.0 0.0 0:51.29 migration+
14 root 20 0 0 0 0 S 0.0 0.0 87:25.21 ksoftirqd+
On further checking the sub-process IDs, we see the following thread PIDs consuming the CPU (some lines omitted for brevity):
sh-4.2# top -H -p 80839
top - 12:52:56 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1707 total, 18 running, 1689 sleeping, 0 stopped, 0 zombie
%Cpu(s): 64.8 us, 1.6 sy, 0.0 ni, 33.3 id, 0.0 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13678112 free, 38274636 used, 45483300 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42754628 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80854 iseadmi+ 20 0 30.4g 9.3g 40816 R 72.2 10.0 13611:41 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:51 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:17 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:29 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80850 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:25 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:12 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:47 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13613:02 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:11 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:31 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13611:00 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 40816 R 66.7 10.0 13612:45 jsvc
top - 12:52:59 up 450 days, 16:03, 6 users, load average: 17.88, 16.67, 16.70
Threads: 1710 total, 1 running, 1709 sleeping, 0 stopped, 0 zombie
%Cpu(s): 69.7 us, 0.6 sy, 0.0 ni, 29.4 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13669704 free, 38280428 used, 45485916 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42749708 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80846 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:54 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13611:33 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.9 10.0 13612:11 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:20 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:31 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:14 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:15 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:49 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:04 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13611:02 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13613:14 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:29 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.6 10.0 13612:33 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 40816 S 83.2 10.0 13611:44 jsvc
top - 12:53:02 up 450 days, 16:03, 6 users, load average: 18.13, 16.74, 16.72
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 67.5 us, 1.1 sy, 0.0 ni, 31.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
KiB Mem : 97436048 total, 13651640 free, 38298472 used, 45485936 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42730576 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80850 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.9 10.0 13612:30 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:56 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:22 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:33 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:17 jsvc
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:52 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:16 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:13 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13611:56 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13613:17 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:32 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.6 10.0 13612:35 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13613:06 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:46 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:36 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13611:04 jsvc
80858 iseadmi+ 20 0 30.4g 9.3g 41072 R 75.2 10.0 13612:50 jsvc
top - 12:53:05 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1713 total, 18 running, 1695 sleeping, 0 stopped, 0 zombie
%Cpu(s): 70.4 us, 0.6 sy, 0.0 ni, 28.8 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 97436048 total, 13647332 free, 38299860 used, 45488856 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42729004 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
80852 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:54 jsvc
80855 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:18 jsvc
80856 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13611:38 jsvc
80859 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:16 jsvc
80862 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.8 10.0 13612:34 jsvc
80846 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:59 jsvc
80847 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:25 jsvc
80848 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:36 jsvc
80851 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:20 jsvc
80853 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:09 jsvc
80854 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:48 jsvc
80857 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:07 jsvc
80860 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13611:59 jsvc
80861 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13613:19 jsvc
80863 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.5 10.0 13612:38 jsvc
80849 iseadmi+ 20 0 30.4g 9.3g 41072 R 84.2 10.0 13612:19 jsvc
top - 12:53:08 up 450 days, 16:03, 6 users, load average: 18.20, 16.78, 16.74
Threads: 1712 total, 18 running, 1694 sleeping, 0 stopped, 0 zombie
%Cpu(s): 66.5 us, 1.5 sy, 0.0 ni, 31.6 id, 0.1 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem : 97436048 total, 13645540 free, 38302700 used, 45487808 buff/cache
KiB Swap: 8191996 total, 7797512 free, 394484 used. 42726808 avail Mem
From catalina.out we observed the following:
We see that the following garbage collector threads are consuming high CPU:
"main" #1 prio=5 os_prio=0 tid=0x0000000002313000 nid=0x13bc7 runnable [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"VM Thread" os_prio=0 tid=0x0000000002642000 nid=0x13be2 runnable
"GC task thread#0 (ParallelGC)" os_prio=0 tid=0x0000000002325800 nid=0x13bce runnable
"GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002327800 nid=0x13bcf runnable
"GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002329000 nid=0x13bd0 runnable
"GC task thread#3 (ParallelGC)" os_prio=0 tid=0x000000000232b000 nid=0x13bd1 runnable
"GC task thread#4 (ParallelGC)" os_prio=0 tid=0x000000000232d000 nid=0x13bd2 runnable
"GC task thread#5 (ParallelGC)" os_prio=0 tid=0x000000000232f000 nid=0x13bd3 runnable
"GC task thread#6 (ParallelGC)" os_prio=0 tid=0x0000000002330800 nid=0x13bd4 runnable
"GC task thread#7 (ParallelGC)" os_prio=0 tid=0x0000000002332800 nid=0x13bd5 runnable
"GC task thread#8 (ParallelGC)" os_prio=0 tid=0x0000000002334800 nid=0x13bd6 runnable
"GC task thread#9 (ParallelGC)" os_prio=0 tid=0x0000000002336000 nid=0x13bd7 runnable
"GC task thread#10 (ParallelGC)" os_prio=0 tid=0x0000000002338000 nid=0x13bd8 runnable
"GC task thread#11 (ParallelGC)" os_prio=0 tid=0x000000000233a000 nid=0x13bd9 runnable
"GC task thread#12 (ParallelGC)" os_prio=0 tid=0x000000000233b800 nid=0x13bda runnable
"GC task thread#13 (ParallelGC)" os_prio=0 tid=0x000000000233d800 nid=0x13bdb runnable
"GC task thread#14 (ParallelGC)" os_prio=0 tid=0x000000000233f800 nid=0x13bdc runnable
"GC task thread#15 (ParallelGC)" os_prio=0 tid=0x0000000002341000 nid=0x13bdd runnable
"GC task thread#16 (ParallelGC)" os_prio=0 tid=0x0000000002343000 nid=0x13bde runnable
"GC task thread#17 (ParallelGC)" os_prio=0 tid=0x0000000002345000 nid=0x13bdf runnable
"VM Periodic Task Thread" os_prio=0 tid=0x0000000002ac3800 nid=0x13cd6 waiting on condition
JNI global references: 3603
Heap
PSYoungGen total 79872K, used 288K [0x0000000635d80000, 0x000000063b280000, 0x00000007e0800000)
eden space 72704K, 0% used [0x0000000635d80000,0x0000000635dc8230,0x000000063a480000)
from space 7168K, 0% used [0x000000063ab80000,0x000000063ab80000,0x000000063b280000)
to space 7168K, 0% used [0x000000063a480000,0x000000063a480000,0x000000063ab80000)
ParOldGen total 2978304K, used 2963138K [0x00000002e0800000, 0x0000000396480000, 0x0000000635d80000)
object space 2978304K, 99% used [0x00000002e0800000,0x00000003955b0b70,0x0000000396480000)
Metaspace used 335348K, capacity 357063K, committed 358016K, reserved 1370112K
class space used 33786K, capacity 37225K, committed 37504K, reserved 1048576K
Garbage collector usage in RHEL: as long as an object is being referenced, the JVM considers it alive.
Once an object is no longer referenced, and is therefore not reachable by the application code, the garbage collector removes it and reclaims the unused memory.
The reason the GC is using more CPU cycles here is that it is aggressively reclaiming a huge number of objects that are no longer referenced.
From the catalina.out file we could also see the VCS threads being blocked:
"VCSPersistEventHandler-1-thread-1" #1920 prio=5 os_prio=0 tid=0x00007f42c5a27800 nid=0x185d7 waiting for monitor entry [0x00007f4224182000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.elasticsearch.action.bulk.BulkProcessor.internalAdd(BulkProcessor.java:287)
- waiting to lock <0x00000002ef9a1a28> (a org.elasticsearch.action.bulk.BulkProcessor)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:272)
at org.elasticsearch.action.bulk.BulkProcessor.add(BulkProcessor.java:268)
at com.cisco.ise.vcs.crud.ESBulkProcessor.processRequest(ESBulkProcessor.java:122)
- locked <0x00000002ef51a570> (a com.cisco.ise.vcs.crud.ESBulkProcessor)
at com.cisco.ise.vcs.crud.VCSCrudProcessor.upsertBulkData(VCSCrudProcessor.java:98)
- locked <0x0000000308461c08> (a com.cisco.ise.vcs.crud.VCSCrudProcessor)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.updateContextRepository(VCSPersistEventHandler.java:147)
at com.cisco.ise.vcs.event.handler.VCSPersistEventHandler.handleEvent(VCSPersistEventHandler.java:113)
at com.cisco.ise.vcs.event.VCSEventHandler.run(VCSEventHandler.java:130)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Based on the above observations, we see that you are encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>.
There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
Do let me know if you have any further queries or concerns.
Technology: Identity Services Engine (ISE) - 3.0
Subtechnology: ISE Performance (High CPU / Memory / IO / GUI Slowness)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: CISE
Software Version: N/A
Router/Node Name: N/A
Problem Details: ISE - High Load Average for ISE PAN node
ISE - High Load Average for ISE PAN node
timestamp : 2023-02-27T01:52:17.000+0000 || updatedby : rupespat || type : RESOLUTION SUMMARY || visibility : External || details : observations we see that CU is encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>;. There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
======
No response from CU since 9th February 2023; closing the case after 3 strikes and manager follow-up | Technology: Identity Services Engine (ISE) - 3.0
Subtechnology: ISE Performance (High CPU / Memory / IO / GUI Slowness)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: CISE
Software Version: N/A
Router/Node Name: N/A
Problem Details: ISE - High Load Average for ISE PAN node | ISE - High Load Average for ISE PAN node | timestamp : 2023-02-27T01:52:17.000+0000 || updatedby : rupespat || type : RESOLUTION SUMMARY || visibility : External || details : observations we see that CU is encountering the bug CSCwd45843<https://bst.cloudapps.cisco.com/bugsearch/bug/CSCwd45843>;. There is a hotfix available for this bug:
https://software.cisco.com/download/home/283801620/type/283802505/release/HP-CSCwd45843
This is a generic HP that can be applied on any version and is available on the CCO site; you could install it on all the nodes and monitor.
However, we would suggest installing patch 6 first and then installing this HP.
This bug is addressed in ISE 3.0 p7, which is yet to be released; until then, you can install patch 6 first and then the HP over it.
======
No response from CU since 9th February 2023; closing the case after 3 strikes and manager follow-up | nan | SNS-3655-K9 | ise-3.0.0.458.SPA.x86_64.iso | nan | nan | 3 | nan | Configuration Assistance (process not intuitive, too complex, inconsistent...) | CISE | Identity Services Engine (ISE) - 3.0 | ISE Performance (High CPU / Memory / IO / GUI Slowness) | nan | nan | nan | nan | nan | nan |
694068972 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 9.16.2.3/ 9.14.2.4/ 9.14.2.4/ 9.14.1 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
Technology: Security - Network Firewalls and Intrusion Prevention Systems
Subtechnology: Adaptive Security Appliance (ASA) non-VPN problem
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: FPRUHI
Software Version: N/A
Router/Node Name: N/A
Problem Details: FPR9K-SM-36 Network traffic issue
FPR9K-SM-36 Network traffic issue
timestamp : 2022-12-06T16:39:12.000+0000 || updatedby : dhanukri || type : RESOLUTION SUMMARY || visibility : External || details : Closing the case as there is no response from customer | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: FPR9K-SM-36/ FPR-C9300-AC/ FPR1150-ASA-K9 Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Technology: Security - Network Firewalls and Intrusion Prevention Systems
Subtechnology: Adaptive Security Appliance (ASA) non-VPN problem
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: FPRUHI
Software Version: N/A
Router/Node Name: N/A
Problem Details: FPR9K-SM-36 Network traffic issue
FPR9K-SM-36 Network traffic issue
timestamp : 2022-12-06T16:39:12.000+0000 || updatedby : dhanukri || type : RESOLUTION SUMMARY || visibility : External || details : Closing the case as there is no response from customer | Technology: Security - Network Firewalls and Intrusion Prevention Systems
Subtechnology: Adaptive Security Appliance (ASA) non-VPN problem
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: FPRUHI
Software Version: N/A
Router/Node Name: N/A
Problem Details: FPR9K-SM-36 Network traffic issue | FPR9K-SM-36 Network traffic issue | timestamp : 2022-12-06T16:39:12.000+0000 || updatedby : dhanukri || type : RESOLUTION SUMMARY || visibility : External || details : Closing the case as there is no response from customer | nan | FPR9K-SM-36 | cisco-asa.9.14.2.4.SPA.csp | nan | nan | 3 | nan | Configuration Assistance (process not intuitive, too complex, inconsistent...) | FPRUHI | Adaptive Security Appliance | ASA Firepower Devices - Non-VPN | nan | nan | nan | nan | nan | nan |
694571397 | Question: Which software version is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY software version should be selected from the given software version list: 17.03.04a/ 17.3.5/ 2.2.2.8 Response in this JSON format:
{"software_version": "","explanation": "", "summary": ""}
Technology: Cisco DNA Center - On-Prem
Subtechnology: Cisco DNA Center - LAN Automation (SDA)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose
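(For context on how such errors are usually tracked between component swaps: clear the counters after each replacement and re-check the affected port, so new increments are unambiguous. A sketch with a placeholder interface name — substitute the actual port:

    clear counters TenGigabitEthernet0/1/0
    show interfaces TenGigabitEthernet0/1/0 | include CRC|input errors

If CRC counters keep climbing after both the SFP and the fiber have been replaced, the line card itself becomes the prime suspect, which matches the resolution recorded below.)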
Technology: Cisco DNA Center - On-Prem
Subtechnology: Cisco DNA Center - LAN Automation (SDA)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose
Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose
timestamp : 2022-12-30T13:30:48.000+0000 || updatedby : agarroab || type : RESOLUTION SUMMARY || visibility : External || details : There are CRC errors in line interface card
After the card was replaced, there are no more CRC errors.
Faulty card returned successfully. | Question: Which product is principally discussed in these documents?
Answer "NONE" if you cannot find answer in the given documents.
The ONLY product name should be selected from the given product name list: ASR1009-X Response in this JSON format:
{"product_name": "","explanation": "", "summary": ""}
Technology: Cisco DNA Center - On-Prem
Subtechnology: Cisco DNA Center - LAN Automation (SDA)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose
Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose
timestamp : 2022-12-30T13:30:48.000+0000 || updatedby : agarroab || type : RESOLUTION SUMMARY || visibility : External || details : There are CRC errors in line interface card
After the card was replaced, there are no more CRC errors.
Faulty card returned successfully. | Technology: Cisco DNA Center - On-Prem
Subtechnology: Cisco DNA Center - LAN Automation (SDA)
Problem Code: Error Messages, Logs, Debugs
Product: NA
Product Family: ASR1000
Software Version: N/A
Router/Node Name: N/A
Problem Details: Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose | Hi,
we have observed this issue before, and last time the issue was resolved after changing the SFP. Please check SRs 693710312 & 694133288. However, this time we have changed the SFP and the fiber cable as well, and the CRC errors are still coming.
I am attaching the logs captured at testing time.
Device info:
ASR1009-X
Cisco IOS XE Software, Version 17.03.04a
Note: we have now administratively shut the port.
Thanks,
Emrose | timestamp : 2022-12-30T13:30:48.000+0000 || updatedby : agarroab || type : RESOLUTION SUMMARY || visibility : External || details : There are CRC errors in line interface card
After the card was replaced, there are no more CRC errors.
Faulty card returned successfully. | nan | ASR1009-X | asr1000rpx86-universalk9.17.03.04a.SPA.bin | nan | nan | 3 | nan | Hardware Failure | ASR1000 | Cisco DNA Center - On-Prem | Cisco DNA Center - LAN Automation (SDA) | nan | nan | nan | nan | nan | nan |
694925542 | "Question: Which software version is principally discussed in these documents? \nAnswer \"NONE\" if (...TRUNCATED) | "Question: Which product is principally discussed in these documents? \nAnswer \"NONE\" if you canno(...TRUNCATED) | "Technology: LAN Switching\nSubtechnology: Cat3850 - Switching Issues\nProblem Code: Error Messages,(...TRUNCATED) | "C9300 SDA FE - %FED_L3_ERRMSG-3-RSRC_ERR: Switch 4 R0/0: fed: Failed to allocate hardware resource (...TRUNCATED) | "timestamp : 2023-02-14T13:36:10.000+0000 || updatedby : jasterry || type : RESOLUTION SUMMARY || vi(...TRUNCATED) | nan | C9300-48U | cat9k_iosxe.17.06.02.SPA.bin | nan | nan | 3 | nan | Software -not a bug (scalability, version selection, install/upgrade help...) | C9300 | Cisco DNA Center - On-Prem | Cisco Software-Defined Access (SDA Wired) | nan | nan | nan | nan | nan | nan |
694935054 | "Question: Which software version is principally discussed in these documents? \nAnswer \"NONE\" if (...TRUNCATED) | "Question: Which product is principally discussed in these documents? \nAnswer \"NONE\" if you canno(...TRUNCATED) | "Technology: Wireless\nSubtechnology: 8540 Series Wireless LAN Controller (AIR-CT8540)\nProblem Code(...TRUNCATED) | AP's in central library are facing issues joining back to WLC | "timestamp : 2023-02-28T01:04:20.000+0000 || updatedby : sherholm || type : RESOLUTION SUMMARY || vi(...TRUNCATED) | nan | THIRD_PARTY_PRODUCT_HW | THIRD_PARTY_PRODUCT_SW | nan | nan | 3 | nan | Software -not a bug (scalability, version selection, install/upgrade help...) | AIRCTA2 | Wireless | 8540 Series Wireless LAN Controller (AIR-CT8540) | nan | nan | nan | nan | nan | nan |
695268540 | "Question: Which software version is principally discussed in these documents? \nAnswer \"NONE\" if (...TRUNCATED) | "Question: Which product is principally discussed in these documents? \nAnswer \"NONE\" if you canno(...TRUNCATED) | "Technology: Cisco DNA Center - On-Prem\nSubtechnology: Cisco Software-Defined Access (SDA Wired)\nP(...TRUNCATED) | "SDA Fabric Edge - Intermittent AAA Server Down status impacting user/device authentication\n\nThis (...TRUNCATED) | resolved issue | SDA Fabric Edge - Intermittent AAA Server Down status impacting user/device authentication | C9300-48U-A | 17.6.4 | 17.6.4 | 59.0 | 3 | 100.0 | Configuration Assistance (process not intuitive, too complex, inconsistent...) | C9300 | Cisco DNA Center - On-Prem | Cisco Software-Defined Access (SDA Wired) | 01t6R000006k8XpQAI | Error Messages, Logs, Debugs | Cisco Software-Defined Access (SDA Wired) | Cisco DNA Center - On-Prem | 01t15000005W0LTAA0 | nan |