<p>We are running <strong>Prometheus version 2.26.0</strong> and <strong>Kubernetes version 1.21.7 in Azure</strong>. We mount the data on <strong>Azure storage NFS</strong> and it was working fine. For the last few days the Prometheus container has been crash-looping; below are the logs:</p> <pre><code>level=info ts=2022-01-26T08:04:14.375Z caller=main.go:418 msg=&quot;Starting Prometheus&quot; version=&quot;(version=2.26.0, branch=HEAD, revision=3cafc58827d1ebd1a67749f88be4218f0bab3d8d)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:423 build_context=&quot;(go=go1.16.2, user=root@a67cafebe6d0, date=20210331-11:56:23)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:424 host_details=&quot;(Linux 5.4.0-1065-azure #68~18.04.1-Ubuntu SMP Fri Dec 3 14:08:44 UTC 2021 x86_64 prometheus-6b9d9d54f4-nc45x (none))&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:425 fd_limits=&quot;(soft=1048576, hard=1048576)&quot; level=info ts=2022-01-26T08:04:14.375Z caller=main.go:426 vm_limits=&quot;(soft=unlimited, hard=unlimited)&quot; level=info ts=2022-01-26T08:04:14.503Z caller=web.go:540 component=web msg=&quot;Start listening for connections&quot; address=0.0.0.0:9090 level=info ts=2022-01-26T08:04:14.507Z caller=main.go:795 msg=&quot;Starting TSDB ...&quot; level=info ts=2022-01-26T08:04:14.509Z caller=tls_config.go:191 component=web msg=&quot;TLS is disabled.&quot; http2=false level=info ts=2022-01-26T08:04:14.560Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641478251052 maxt=1641513600000 ulid=01FRSEHC4YHV3N26JY5AMNZFRW level=info ts=2022-01-26T08:04:14.593Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641513600037 maxt=1641578400000 ulid=01FRVCAP2VJGDF0Z9CS24EXAJJ level=info ts=2022-01-26T08:04:14.624Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641578400038 maxt=1641643200000 ulid=01FRXA4AQHMHAEYWRKQFGP075M level=info ts=2022-01-26T08:04:14.651Z 
caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641643200422 maxt=1641708000000 ulid=01FRZ7XQQ4RA96DCPPBP22D71N level=info ts=2022-01-26T08:04:14.679Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641708000020 maxt=1641772800000 ulid=01FS15QDG6BS7H6M6Y09HG3E12 level=info ts=2022-01-26T08:04:14.707Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641772800011 maxt=1641837600000 ulid=01FS33GT38PRSB9VP56YFXT2M0 level=info ts=2022-01-26T08:04:14.736Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641963555381 maxt=1641967200000 ulid=01FS6MRNZEWT1Z6P697K09KHD7 level=info ts=2022-01-26T08:04:14.763Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641837600100 maxt=1641902400000 ulid=01FS6R88C70TCD8CYC4XJ95X23 level=info ts=2022-01-26T08:04:14.810Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641967200019 maxt=1642032000000 ulid=01FS8WXQP3YJ7EXBVNYBQG4DVY level=info ts=2022-01-26T08:04:14.836Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642032000072 maxt=1642096800000 ulid=01FSATQBR4XBQRDM72ATFS9PQ2 level=info ts=2022-01-26T08:04:14.863Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642096800059 maxt=1642161600000 ulid=01FSCRHE2YBDX7GPRPSH6BNGRX level=info ts=2022-01-26T08:04:14.895Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642161600091 maxt=1642226400000 ulid=01FSEPB1GPGAANVCQ2VKW9BQ4G level=info ts=2022-01-26T08:04:14.948Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642226400026 maxt=1642291200000 ulid=01FSGM4J0G1D0A6H1GD3N9C372 level=info ts=2022-01-26T08:04:14.973Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642291200005 maxt=1642356000000 ulid=01FSJHY6W0FRYDHCXBVB5XPFYG level=info ts=2022-01-26T08:04:15.002Z caller=repair.go:57 
component=tsdb msg=&quot;Found healthy block&quot; mint=1642356000027 maxt=1642420800000 ulid=01FSMFR96DASV6YPN66W7C86H9 level=info ts=2022-01-26T08:04:15.077Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642420800042 maxt=1642485600000 ulid=01FSPDHGWRT65D8CKWQ2JPRHW3 level=info ts=2022-01-26T08:04:15.105Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642485600006 maxt=1642550400000 ulid=01FSRBAVP2MW71H08F32D6HGB4 level=info ts=2022-01-26T08:04:15.130Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642550400028 maxt=1642615200000 ulid=01FST9482FD0Z3PHXHNW2W616E level=info ts=2022-01-26T08:04:15.157Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642680000018 maxt=1642687200000 ulid=01FSW00TJKJ7CGCQ7JJS3XQK8G level=info ts=2022-01-26T08:04:15.187Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642687200018 maxt=1642694400000 ulid=01FSW6WHTSEAXHWV5J7PQP94X7 level=info ts=2022-01-26T08:04:15.213Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642615200021 maxt=1642680000000 ulid=01FSW6XYH2Y429PG5YRM0K45XS level=info ts=2022-01-26T08:04:15.275Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642694400018 maxt=1642701600000 ulid=01FSWDR92Y7H302NDZRX1V2PX9 level=info ts=2022-01-26T08:04:21.840Z caller=head.go:696 component=tsdb msg=&quot;Replaying on-disk memory mappable chunks if any&quot; level=info ts=2022-01-26T08:04:22.623Z caller=head.go:710 component=tsdb msg=&quot;On-disk memory mappable chunks replay completed&quot; duration=782.403397ms level=info ts=2022-01-26T08:04:22.623Z caller=head.go:716 component=tsdb msg=&quot;Replaying WAL, this may take a while&quot; level=info ts=2022-01-26T08:04:34.169Z caller=head.go:742 component=tsdb msg=&quot;WAL checkpoint loaded&quot; level=info ts=2022-01-26T08:04:38.895Z caller=head.go:768 component=tsdb 
msg=&quot;WAL segment loaded&quot; segment=299 maxSegment=7511 level=warn ts=2022-01-26T08:04:46.423Z caller=main.go:645 msg=&quot;Received SIGTERM, exiting gracefully...&quot; level=info ts=2022-01-26T08:04:46.424Z caller=main.go:668 msg=&quot;Stopping scrape discovery manager...&quot; level=info ts=2022-01-26T08:04:46.424Z caller=main.go:682 msg=&quot;Stopping notify discovery manager...&quot; level=info ts=2022-01-26T08:04:46.424Z caller=main.go:704 msg=&quot;Stopping scrape manager...&quot; level=info ts=2022-01-26T08:04:46.424Z caller=main.go:678 msg=&quot;Notify discovery manager stopped&quot; level=info ts=2022-01-26T08:04:46.425Z caller=main.go:698 msg=&quot;Scrape manager stopped&quot; level=info ts=2022-01-26T08:04:46.426Z caller=manager.go:934 component=&quot;rule manager&quot; msg=&quot;Stopping rule manager...&quot; level=info ts=2022-01-26T08:04:46.426Z caller=manager.go:944 component=&quot;rule manager&quot; msg=&quot;Rule manager stopped&quot; level=info ts=2022-01-26T08:04:46.426Z caller=notifier.go:601 component=notifier msg=&quot;Stopping notification manager...&quot; level=info ts=2022-01-26T08:04:46.426Z caller=main.go:872 msg=&quot;Notifier manager stopped&quot; level=info ts=2022-01-26T08:04:46.426Z caller=main.go:664 msg=&quot;Scrape discovery manager stopped&quot; level=info ts=2022-01-26T08:04:46.792Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=300 maxSegment=7511 level=info ts=2022-01-26T08:04:46.870Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=301 maxSegment=7511 level=info ts=2022-01-26T08:04:46.901Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=302 maxSegment=7511 level=info ts=2022-01-26T08:04:46.946Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=303 maxSegment=7511 level=info ts=2022-01-26T08:04:46.974Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=304 maxSegment=7511 
level=info ts=2022-01-26T08:04:47.008Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=305 maxSegment=7511 level=info ts=2022-01-26T08:04:47.034Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=306 maxSegment=7511 level=info ts=2022-01-26T08:04:47.067Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=307 maxSegment=7511 level=info ts=2022-01-26T08:04:47.098Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=308 maxSegment=7511 level=info ts=2022-01-26T08:04:47.124Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=309 maxSegment=7511 level=info ts=2022-01-26T08:04:47.158Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=310 maxSegment=7511 level=info ts=2022-01-26T08:04:47.203Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=311 maxSegment=7511 level=info ts=2022-01-26T08:04:47.254Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=312 maxSegment=7511 level=info ts=2022-01-26T08:04:47.486Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=313 maxSegment=7511 level=info ts=2022-01-26T08:04:47.511Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=314 maxSegment=7511 level=info ts=2022-01-26T08:04:47.539Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=315 maxSegment=7511 level=info ts=2022-01-26T08:04:47.564Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=316 maxSegment=7511 . . . . . . . . . 
level=info ts=2022-01-26T08:05:15.161Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1401 maxSegment=7511 level=info ts=2022-01-26T08:05:15.182Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1402 maxSegment=7511 level=info ts=2022-01-26T08:05:15.205Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1403 maxSegment=7511 level=info ts=2022-01-26T08:05:15.229Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1404 maxSegment=7511 level=info ts=2022-01-26T08:05:15.251Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1405 maxSegment=7511 level=info ts=2022-01-26T08:05:15.274Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1406 maxSegment=7511 level=info ts=2022-01-26T08:05:15.297Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1407 maxSegment=7511 level=info ts=2022-01-26T08:05:15.323Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1408 maxSegment=7511 level=info ts=2022-01-26T08:05:15.349Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1409 maxSegment=7511 level=info ts=2022-01-26T08:05:15.372Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1410 maxSegment=7511 level=info ts=2022-01-26T08:05:15.426Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1411 maxSegment=7511 level=info ts=2022-01-26T08:05:15.452Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1412 maxSegment=7511 level=info ts=2022-01-26T08:05:15.475Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1413 maxSegment=7511 level=info ts=2022-01-26T08:05:15.498Z caller=head.go:768 component=tsdb msg=&quot;WAL segment loaded&quot; segment=1414 maxSegment=7511 rpc error: code = NotFound desc = an error occurred when try to find container 
&quot;ae14079418f59b04bb80d8413e8fdc34f167bfe762317ef674e05466d34c9e1f&quot;: not found </code></pre> <p>So I deleted the deployment and redeployed it to the same storage account; then I got a new error:</p> <pre><code>level=info ts=2022-01-26T11:10:11.530Z caller=main.go:418 msg=&quot;Starting Prometheus&quot; version=&quot;(version=2.26.0, branch=HEAD, revision=3cafc58827d1ebd1a67749f88be4218f0bab3d8d)&quot; level=info ts=2022-01-26T11:10:11.534Z caller=main.go:423 build_context=&quot;(go=go1.16.2, user=root@a67cafebe6d0, date=20210331-11:56:23)&quot; level=info ts=2022-01-26T11:10:11.535Z caller=main.go:424 host_details=&quot;(Linux 5.4.0-1064-azure #67~18.04.1-Ubuntu SMP Wed Nov 10 11:38:21 UTC 2021 x86_64 prometheus-6b9d9d54f4-wnmzh (none))&quot; level=info ts=2022-01-26T11:10:11.536Z caller=main.go:425 fd_limits=&quot;(soft=1048576, hard=1048576)&quot; level=info ts=2022-01-26T11:10:11.536Z caller=main.go:426 vm_limits=&quot;(soft=unlimited, hard=unlimited)&quot; level=info ts=2022-01-26T11:10:14.168Z caller=web.go:540 component=web msg=&quot;Start listening for connections&quot; address=0.0.0.0:9090 level=info ts=2022-01-26T11:10:15.385Z caller=main.go:795 msg=&quot;Starting TSDB ...&quot; level=info ts=2022-01-26T11:10:16.022Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641837600024 maxt=1641902400000 ulid=01FS51ANKBFTVNRPZ68FGQQ5GA level=info ts=2022-01-26T11:10:16.309Z caller=tls_config.go:191 component=web msg=&quot;TLS is disabled.&quot; http2=false level=info ts=2022-01-26T11:10:16.494Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641902400005 maxt=1641967200000 ulid=01FS6Z46FGXN932K7D39D9166D level=info ts=2022-01-26T11:10:16.806Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1641967200106 maxt=1642032000000 ulid=01FS8WXRJ7Q80FKD4C8EJNR0AD level=info ts=2022-01-26T11:10:17.011Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; 
mint=1642032000003 maxt=1642096800000 ulid=01FSATQE1VMNR101KRW1X10Q75 level=info ts=2022-01-26T11:10:17.305Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642096800206 maxt=1642161600000 ulid=01FSCRGVT1E7562SF7EQN12JBM level=info ts=2022-01-26T11:10:18.240Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642161600059 maxt=1642226400000 ulid=01FSEPAFP2CX03ANRB7Q1AG514 level=info ts=2022-01-26T11:10:21.046Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642226400051 maxt=1642291200000 ulid=01FSGM3WT0TKR0XW9BD4QSKPQE level=info ts=2022-01-26T11:10:21.422Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642291200113 maxt=1642356000000 ulid=01FSJHXKHMANW0E6FXDXVM265G level=info ts=2022-01-26T11:10:22.822Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642356000032 maxt=1642420800000 ulid=01FSMFQ6XJ97VJFKNCYQBVB4DZ level=info ts=2022-01-26T11:10:23.536Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642420800021 maxt=1642485600000 ulid=01FSPDGM95FDDV2CDWX93BTDCS level=info ts=2022-01-26T11:10:23.880Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642485600072 maxt=1642550400000 ulid=01FSRBA555RWY4QNP4HD9YKRBM level=info ts=2022-01-26T11:10:25.021Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642550400031 maxt=1642615200000 ulid=01FST93N3C82K9VS20MKTMGGYC level=info ts=2022-01-26T11:10:25.713Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642615200014 maxt=1642680000000 ulid=01FSW6X95FRNSN1XJZ2YK0MXW7 level=info ts=2022-01-26T11:10:26.634Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642680000012 maxt=1642744800000 ulid=01FSY4PXA7V1XQHHA3MC35JSWQ level=info ts=2022-01-26T11:10:27.776Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642744800174 
maxt=1642809600000 ulid=01FT02G9XGHPV8GME53ZPMYXE6 level=info ts=2022-01-26T11:10:28.760Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642809600070 maxt=1642874400000 ulid=01FT209WP8AXXVZB1NCSC55ACE level=info ts=2022-01-26T11:10:29.618Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642874400253 maxt=1642939200000 ulid=01FT3Y3A4H72FFW318RKHEXXGA level=info ts=2022-01-26T11:10:30.313Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1642939200047 maxt=1643004000000 ulid=01FT5VX3YC838QN5VQFAERV1QX level=info ts=2022-01-26T11:10:30.483Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643004000040 maxt=1643068800000 ulid=01FT7SPHC5EV0SS1R0WT04H9FR level=info ts=2022-01-26T11:10:30.696Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643068800035 maxt=1643133600000 ulid=01FT9QFZXBZ7EYY2CTE8WXZTB9 level=info ts=2022-01-26T11:10:31.838Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643133600000 maxt=1643155200000 ulid=01FTA574G4M45WX97Z470DQF73 level=info ts=2022-01-26T11:10:33.686Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643176800008 maxt=1643184000000 ulid=01FTASSZCG8V5N2VGAGFBYJBSR level=info ts=2022-01-26T11:10:36.078Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643184000000 maxt=1643191200000 ulid=01FTB0NP47JW5JCF808QZZ8WZQ level=info ts=2022-01-26T11:10:36.442Z caller=repair.go:57 component=tsdb msg=&quot;Found healthy block&quot; mint=1643155200065 maxt=1643176800000 ulid=01FTB0P9H3H09B2ADD5X1RXFW6 level=info ts=2022-01-26T11:10:40.079Z caller=main.go:668 msg=&quot;Stopping scrape discovery manager...&quot; level=info ts=2022-01-26T11:10:40.079Z caller=main.go:682 msg=&quot;Stopping notify discovery manager...&quot; level=info ts=2022-01-26T11:10:40.079Z caller=main.go:704 msg=&quot;Stopping scrape manager...&quot; 
level=info ts=2022-01-26T11:10:40.079Z caller=main.go:678 msg=&quot;Notify discovery manager stopped&quot; level=info ts=2022-01-26T11:10:40.079Z caller=main.go:664 msg=&quot;Scrape discovery manager stopped&quot; level=info ts=2022-01-26T11:10:40.079Z caller=main.go:698 msg=&quot;Scrape manager stopped&quot; level=info ts=2022-01-26T11:10:40.080Z caller=manager.go:934 component=&quot;rule manager&quot; msg=&quot;Stopping rule manager...&quot; level=info ts=2022-01-26T11:10:40.080Z caller=manager.go:944 component=&quot;rule manager&quot; msg=&quot;Rule manager stopped&quot; level=info ts=2022-01-26T11:10:40.080Z caller=notifier.go:601 component=notifier msg=&quot;Stopping notification manager...&quot; level=info ts=2022-01-26T11:10:40.080Z caller=main.go:872 msg=&quot;Notifier manager stopped&quot; level=error ts=2022-01-26T11:10:40.080Z caller=main.go:881 err=&quot;opening storage failed: lock DB directory: resource temporarily unavailable&quot; </code></pre> <p>The yaml is provided by <strong>Istio</strong>. Below is the deployment yaml file.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: &quot;server&quot;
    app: prometheus
    release: prometheus
    chart: prometheus-14.6.1
    heritage: Helm
  name: prometheus
  namespace: istio-system
spec:
  selector:
    matchLabels:
      component: &quot;server&quot;
      app: prometheus
      release: prometheus
  replicas: 1
  template:
    metadata:
      labels:
        component: &quot;server&quot;
        app: prometheus
        release: prometheus
        chart: prometheus-14.6.1
        heritage: Helm
        sidecar.istio.io/inject: &quot;false&quot;
    spec:
      enableServiceLinks: true
      serviceAccountName: prometheus
      containers:
        - name: prometheus-server-configmap-reload
          image: &quot;jimmidyson/configmap-reload:v0.5.0&quot;
          imagePullPolicy: &quot;IfNotPresent&quot;
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://127.0.0.1:9090/-/reload
          resources: {}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
              readOnly: true
        - name: prometheus-server
          image: &quot;prom/prometheus:v2.26.0&quot;
          imagePullPolicy: &quot;IfNotPresent&quot;
          args:
            - --storage.tsdb.retention.time=15d
            - --config.file=/etc/config/prometheus.yml
            - --storage.tsdb.path=/data
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
          ports:
            - containerPort: 9090
          readinessProbe:
            httpGet:
              path: /-/ready
              port: 9090
            initialDelaySeconds: 0
            periodSeconds: 5
            timeoutSeconds: 4
            failureThreshold: 3
            successThreshold: 1
          livenessProbe:
            httpGet:
              path: /-/healthy
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 15
            timeoutSeconds: 10
            failureThreshold: 3
            successThreshold: 1
          resources: {}
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
            - name: azurefileshare
              mountPath: /data
              subPath: &quot;&quot;
      hostNetwork: false
      dnsPolicy: ClusterFirst
      securityContext:
        fsGroup: 65534
        runAsGroup: 65534
        runAsNonRoot: true
        runAsUser: 65534
      terminationGracePeriodSeconds: 300
      volumes:
        - name: config-volume
          configMap:
            name: prometheus
        - name: azurefileshare
          azureFile:
            secretName: log-storage-secret
            shareName: prometheusfileshare
            readOnly: false
</code></pre> <p><strong>Expected Behavior</strong>: when I mount the data to a new container, it should load the data.</p> <p><strong>Actual Behavior</strong>: unable to load the data, or unable to bind the data to the newly created pod when the old pod dies.</p> <p>Please help me resolve this issue.</p>
kiran
<p><em><strong>Thank you <a href="https://stackoverflow.com/users/3781502/ywh">YwH</a> for your suggestion. Posting this as an answer so it can help other community members if they encounter the same issue in the future.</strong></em></p> <p>As stated in this <a href="https://istio.io/latest/docs/ops/integrations/prometheus/" rel="nofollow noreferrer"><em><strong>document</strong></em></a>, Istio provides a basic sample installation to quickly get Prometheus up and running. This is intended for demonstration only, and is not tuned for performance or security.</p> <blockquote> <p>Note: the Istio configuration is well-suited for small clusters and monitoring for short time horizons, but it is not suitable for large-scale meshes or monitoring over a period of days or weeks.</p> </blockquote> <p><em><strong>Solution: Prometheus is a stateful application, better deployed with a StatefulSet than a Deployment.</strong></em></p> <p>StatefulSets are valuable for applications that require one or more of the following:</p> <p><em><code>Stable, persistent storage. Ordered, graceful deployment and scaling.</code></em></p> <p><em><strong>You can use this <a href="https://github.com/GoogleCloudPlatform/click-to-deploy/blob/master/k8s/prometheus/manifest/prometheus-statefulset.yaml" rel="nofollow noreferrer"><em><strong>StatefulSet</strong></em></a> manifest to deploy the Prometheus container.</strong></em></p>
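<p>To make the StatefulSet suggestion concrete, below is a minimal sketch of the shape such a manifest takes. The names, storage size, and labels are illustrative placeholders, not taken from the linked manifest:</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: istio-system
spec:
  serviceName: prometheus       # headless Service giving each pod a stable identity
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus-server
          image: prom/prometheus:v2.26.0
          args:
            - --storage.tsdb.path=/data
          volumeMounts:
            - name: data
              mountPath: /data
  # volumeClaimTemplates give each replica its own PersistentVolumeClaim,
  # so the replacement pod re-binds the same volume when the old pod dies
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```

<p>The key difference from a Deployment is <code>volumeClaimTemplates</code>: the pod keeps a stable name (<code>prometheus-0</code>) and re-attaches the same claim on restart, which avoids two pods racing for the same data directory and hitting the <code>lock DB directory</code> error.</p>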
RahulKumarShaw
<p>I want to create a service account for a native Kubernetes cluster so that I can send API calls:</p> <pre><code>kubernetes@kubernetes1:~$ kubectl create serviceaccount user1
serviceaccount/user1 created
kubernetes@kubernetes1:~$ kubectl create clusterrole nodeaccessrole --verb=get --verb=list --verb=watch --resource=nodes
clusterrole.rbac.authorization.k8s.io/nodeaccessrole created
kubernetes@kubernetes1:~$ kubectl create clusterrolebinding nodeaccessrolebinding --serviceaccount=default:user1 --clusterrole=nodeaccessrole
clusterrolebinding.rbac.authorization.k8s.io/nodeaccessrolebinding created
kubernetes@kubernetes1:~$
kubernetes@kubernetes1:~$ kubectl get serviceaccount user1
NAME    SECRETS   AGE
user1   0         7m15s
kubernetes@kubernetes1:~$
</code></pre> <p>Do you know how I can get the token?</p> <p><em><strong>SOLUTION for v1.25.1:</strong></em></p> <pre><code>kubectl create sa cicd
kubectl get sa,secret
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cicd
spec:
  serviceAccount: cicd
  containers:
  - image: nginx
    name: cicd
EOF
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo
kubectl create token cicd
kubectl create token cicd --duration=999999h
cat &lt;&lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: cicd
  annotations:
    kubernetes.io/service-account.name: &quot;cicd&quot;
EOF
kubectl get sa,secret
kubectl describe secret cicd
kubectl describe sa cicd
kubectl get sa cicd -oyaml
kubectl get sa,secret
</code></pre> <p>One thing is not clear:</p> <pre><code>kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo
kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo
</code></pre> <p>Should I use '--' in the above commands?</p>
Peter Penzov
<p>If you just want to retrieve the token from the given SA, you can simply execute:</p> <pre><code>kubectl get secret $(kubectl get sa &lt;sa-name&gt; -o jsonpath='{.secrets[0].name}' -n &lt;namespace&gt;) -o jsonpath='{.data.token}' -n &lt;namespace&gt; | base64 --decode </code></pre> <p>Feel free to remove the <code>| base64 --decode</code> if you don't want to decode the value. As a side note, this command might need to be amended depending on the type of secret; for your use case, however, it should work.</p> <p>Once you have the value (e.g. stored in <code>$TOKEN</code>), you can execute curl commands such as:</p> <pre><code>curl -k -H &quot;Authorization: Bearer $TOKEN&quot; -X GET &quot;https://&lt;KUBE-API-IP&gt;:6443/api/v1/nodes&quot; </code></pre>
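<p>The value in <code>.data.token</code> is base64-encoded, which is why the trailing <code>| base64 --decode</code> matters. A quick, self-contained illustration of that step, with a dummy string standing in for the secret field:</p>

```shell
# dummy base64 value standing in for the secret's .data.token field
ENCODED='bXktc2VjcmV0LXRva2Vu'
# decode it exactly as the kubectl pipeline above does
TOKEN=$(printf '%s' "$ENCODED" | base64 --decode)
printf '%s\n' "$TOKEN"   # prints: my-secret-token
```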
Mike
<p>When trying to use scaled objects, it fails with the error:</p> <blockquote> <p>Failed to create the scaledobject 'azure-monitor-scaler'. Error: (400) : ScaledObject is currently not yet supported in the portal.</p> </blockquote> <p>I am using the following code as per their documentation. It still seems it is not supported by the Azure portal.</p> <pre><code>kind: Secret
metadata:
  name: azure-monitor-secrets
data:
  activeDirectoryClientId: test
  activeDirectoryClientPassword: test123
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: azure-monitor-trigger-auth
spec:
  secretTargetRef:
  - parameter: activeDirectoryClientId
    name: azure-monitor-secrets
    key: activeDirectoryClientId
  - parameter: activeDirectoryClientPassword
    name: azure-monitor-secrets
    key: activeDirectoryClientPassword
  # or Pod Identity; kind: Secret is not required in case of Pod Identity
  podIdentity:
    provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: azure-monitor-scaler
spec:
  scaleTargetRef:
    name: sample-dep
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: azure-monitor
    metadata:
      resourceURI: Microsoft.Network/applicationgateways/ag
      tenantId: 22323-2321-2232-1212
      subscriptionId: 2323232323232323
      resourceGroupName: sample-rd
      metricName: AvgRequestCountPerHealthyHost
      metricFilter: BackendSettingsPool eq 'pool'
      metricAggregationInterval: &quot;0:0:10&quot;
      metricAggregationType: Average
      targetValue: &quot;10&quot;
    authenticationRef:
      name: azure-monitor-trigger-auth
</code></pre>
Rohit
<p>It appears that the LA scaler is broken by changes in KEDA version 2.7.0.</p> <p>You can try to run an older version of KEDA with the LA scaler; it should work for you.</p> <p>You can do it by running the following command: <code>helm install keda kedacore/keda --version 2.0.0 --namespace keda</code></p>
RahulKumarShaw
<p>Is it possible to synchronize a GitHub repo for DAGs with an Azure storage account?<br /> I want that every time I put the DAGs in the GitHub repository, they appear on the Azure file share.</p>
sghiouri
<p><em><strong>Airflow will not create the shared filesystem if you specify a Git repository. Instead, it will clone the DAG files to each of the nodes and sync them periodically with the remote repository.</strong></em></p> <p>You can refer to this <a href="https://docs.bitnami.com/azure-templates/infrastructure/apache-airflow/configuration/sync-dags/" rel="nofollow noreferrer"><em>Document</em></a> to synchronize DAGs with a remote Git repository. Also refer to this <a href="https://www.c-sharpcorner.com/article/azure-devops-copy-files-from-git-repository-to-azure-storage-account/" rel="nofollow noreferrer"><em>Document</em></a>, which will help you copy files from a Git repository to an Azure storage account using Azure DevOps.</p>
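<p>One simple way to wire the copy step into a pipeline is the Azure CLI's batch upload command; a sketch of such a step is below. The storage account, share name, and paths are placeholders, and authentication (e.g. a service connection or <code>--account-key</code>) still needs to be configured:</p>

```shell
# upload the repository's dags/ folder to an Azure file share
# (account, share, and path names are illustrative)
az storage file upload-batch \
  --account-name mystorageaccount \
  --destination myfileshare \
  --destination-path dags \
  --source ./dags
```

<p>Running this on every push to the repository keeps the file share in sync with the DAGs in Git.</p>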
RahulKumarShaw
<p>Yesterday, I stopped a Helm upgrade while it was running on a release pipeline in Azure DevOps, and the following deployments failed.</p> <p>I tried to find the failed chart with the aim of deleting it, but the chart of the microservice (&quot;auth&quot;) doesn't appear. I used the command «helm list -n [namespace_of_AKS]» and it doesn't show up.</p> <p>What can I do to solve this problem?</p> <p><strong>Error in Azure Release Pipeline</strong></p> <pre><code>2022-03-24T08:01:39.2649230Z Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
2022-03-24T08:01:39.2701686Z ##[error]Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
</code></pre> <p><strong>Helm List</strong> <a href="https://i.stack.imgur.com/E8dB4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/E8dB4.png" alt="Helm List" /></a></p>
fjalcaraz
<p>This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process, as you already mentioned.</p> <p>To fix this, one may need to <strong>first roll back to another version, then reinstall</strong> or run helm upgrade again.</p> <p><em>Try the command below to list the releases:</em></p> <pre><code>helm ls --namespace &lt;namespace&gt; </code></pre> <p>Note that when running that command, it may not show any information.</p> <p><em>Try to check the history of the previous deployment:</em></p> <pre><code>helm history &lt;release&gt; --namespace &lt;namespace&gt; </code></pre> <p>This usually shows that the original installation never completed successfully and is stuck in a <strong>pending state</strong>, something like STATUS: pending-upgrade.</p> <p><em>To escape from this state, use the rollback command:</em></p> <pre><code>helm rollback &lt;release&gt; &lt;revision&gt; --namespace &lt;namespace&gt; </code></pre> <p>The revision is optional, but you should try to provide it.</p> <p>You may then try to issue your original command again to upgrade or reinstall.</p>
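<p>When <code>helm ls</code> shows nothing at all, the release state may still exist as a Secret in the namespace, since Helm 3 stores its release history there. Inspecting those objects directly can reveal a revision stuck in a pending state. This is a sketch: the labels shown are the ones Helm 3 puts on its release Secrets, while the release name, revision number, and namespace are placeholders:</p>

```shell
# Helm 3 keeps release history in Secrets named sh.helm.release.v1.<release>.v<N>
kubectl get secrets -n <namespace> -l owner=helm

# narrow down to the stuck revision, then delete it so upgrades can proceed
kubectl get secrets -n <namespace> -l owner=helm,status=pending-upgrade
kubectl delete secret sh.helm.release.v1.<release>.v<N> -n <namespace>
```

<p>Deleting only the pending revision's Secret is less destructive than uninstalling the release, but take a backup of the Secret first if the release history matters to you.</p>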
kavyaS
<p><strong>Summary</strong></p> <p>How can I transform a single-threaded R Shiny App to operate on a Kubernetes cluster, ensuring that each user accessing the URL is assigned a unique Docker container instance? I currently have a Dockerfile for the app and aim to modify it for Kubernetes deployment, allowing concurrent user interactions without resource constraints. Please advise on implementing Kubernetes for my R Shiny App and utilizing the existing Dockerfile with Kubernetes.</p> <p><strong>Context</strong></p> <p>I have a single-threaded R Shiny App that currently runs on a local machine or server. However, I want to scale it up and deploy it on a Kubernetes cluster to handle multiple users concurrently. Currently, the app is containerized using a Dockerfile.</p> <p>My goal is to convert the existing Dockerfile to a configuration that allows the R Shiny App to run on a Kubernetes cluster. Specifically, I want to ensure that each user who accesses the application's URL is provided with a new instance of the Docker container. This way, multiple users can interact with the app simultaneously without any conflicts or resource limitations.</p> <p>I am seeking guidance on how to implement Kubernetes for my R Shiny App and how to utilize the Kubernetes solution from the existing Dockerfile. Any suggestions or insights on how to achieve this goal would be greatly appreciated.</p>
Payal Deshmukh
<p>I would check out <a href="https://www.shinyproxy.io" rel="nofollow noreferrer">ShinyProxy</a>. I am currently using it for its containerization and SAML authentication. They have some <a href="https://www.shinyproxy.io/documentation/configuration/#kubernetes" rel="nofollow noreferrer">configuration for Kubernetes</a>, but I haven't checked that out yet.</p>
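<p>For orientation, ShinyProxy is driven by an <code>application.yml</code>; a rough sketch of what the Kubernetes backend configuration looks like is below. The namespace, app id, and image name are placeholders, and option names should be verified against the linked ShinyProxy docs for your version:</p>

```yaml
proxy:
  port: 8080
  container-backend: kubernetes
  kubernetes:
    namespace: shiny              # namespace where per-user app pods are created
  specs:
    - id: my-shiny-app            # placeholder app id
      display-name: My Shiny App
      container-image: myregistry/my-shiny-app:latest
      port: 3838                  # default Shiny listen port
```

<p>With a setup along these lines, ShinyProxy launches a fresh container (pod) per user session, which matches the one-instance-per-user requirement in the question.</p>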
hailey
<p>I have deployed ECK (using Helm) on my k8s cluster and I am attempting to install Elasticsearch following the docs: <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html</a></p> <p>I have externally exposed service/elasticsearch-prod-es-http so that I can connect to it from outside of my k8s cluster. However, as you can see, when I try to connect to it either from curl or the browser, I receive a &quot;502 Bad Gateway&quot; error.</p> <pre><code>curl elasticsearch.dev.acme.com
&lt;html&gt;
&lt;head&gt;&lt;title&gt;502 Bad Gateway&lt;/title&gt;&lt;/head&gt;
&lt;body&gt;
&lt;center&gt;&lt;h1&gt;502 Bad Gateway&lt;/h1&gt;&lt;/center&gt;
&lt;/body&gt;
&lt;/html&gt;
</code></pre> <p>Upon checking the pod (elasticsearch-prod-es-default-0), I can see the following message repeated.</p> <blockquote> <p>{&quot;type&quot;: &quot;server&quot;, &quot;timestamp&quot;: &quot;2021-04-27T13:12:20,048Z&quot;, &quot;level&quot;: &quot;WARN&quot;, &quot;component&quot;: &quot;o.e.x.s.t.n.SecurityNetty4HttpServerTransport&quot;, &quot;cluster.name&quot;: &quot;elasticsearch-prod&quot;, &quot;node.name&quot;: &quot;elasticsearch-prod-es-default-0&quot;, &quot;message&quot;: &quot;received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/10.0.5.81:9200, remoteAddress=/10.0.3.50:46380}&quot;, &quot;cluster.uuid&quot;: &quot;t0mRfv7kREGQhXW9DVM3Vw&quot;, &quot;node.id&quot;: &quot;nCyAItDmSqGZRa3lApsC6g&quot; }</p> </blockquote> <p><strong>Can you help me understand why this is occurring and how to fix it?</strong></p> <p>I suspect it has something to do with my TLS configuration, because when I disable TLS, I'm able to connect to it externally without issues. 
However, in a production environment I think keeping TLS enabled is important.</p> <p>FYI I am able to port-forward the service and connect to it with curl using the -k flag.</p> <p><strong>What I have tried</strong></p> <ol> <li>I have tried adding my domain to the section as described here <a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san" rel="noreferrer">https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-http-settings-tls-sans.html#k8s-elasticsearch-http-service-san</a></li> <li>I have tried using openssl to generate a self-signed certificate but that did not work. Trying to connect locally returns the following error message.</li> </ol> <blockquote> <p>curl -u &quot;elastic:$PASSWORD&quot; &quot;https://localhost:9200&quot; curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a> curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.</p> </blockquote> <ol start="3"> <li>I have tried generating a certificate using the tool <a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport" rel="noreferrer">https://www.elastic.co/guide/en/elasticsearch/reference/7.9/configuring-tls.html#tls-transport</a></li> </ol> <blockquote> <p>bin/elasticsearch-certutil ca bin/elasticsearch-certutil cert --ca elastic-stack-ca.12 --pem</p> </blockquote> <p>Then using the .crt and .key generated I created a kubectl secret <code>elastic-tls-cert</code>. 
But again curling localhost without -k gave the following error:</p> <blockquote> <p>curl --cacert cacert.pem -u &quot;elastic:$PASSWORD&quot; -XGET &quot;https://localhost:9200&quot; curl: (60) SSL certificate problem: unable to get local issuer certificate More details here: <a href="https://curl.haxx.se/docs/sslcerts.html" rel="noreferrer">https://curl.haxx.se/docs/sslcerts.html</a> curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.</p> </blockquote> <p><strong>elasticsearch.yml</strong></p> <pre><code># This sample sets up an Elasticsearch cluster with 3 nodes. apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: elasticsearch-prod namespace: elastic-system spec: version: 7.12.0 nodeSets: - name: default config: # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value node.roles: [&quot;master&quot;, &quot;data&quot;, &quot;ingest&quot;, &quot;ml&quot;] # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost node.store.allow_mmap: false xpack.security.enabled: true podTemplate: metadata: labels: # additional labels for pods foo: bar spec: nodeSelector: acme/node-type: ops # this changes the kernel setting on the node to allow ES to use mmap # if you uncomment this init container you will likely also want to remove the # &quot;node.store.allow_mmap: false&quot; setting above # initContainers: # - name: sysctl # securityContext: # privileged: true # command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144'] ### # uncomment the line below if you are using a service mesh such as linkerd2 that uses service account tokens for pod identification. 
# automountServiceAccountToken: true containers: - name: elasticsearch # specify resource limits and requests resources: limits: memory: 4Gi cpu: 1 env: - name: ES_JAVA_OPTS value: &quot;-Xms2g -Xmx2g&quot; count: 3 # # request 2Gi of persistent data storage for pods in this topology element volumeClaimTemplates: - metadata: name: elasticsearch-data spec: accessModes: - ReadWriteOnce resources: requests: storage: 250Gi storageClassName: elasticsearch # # inject secure settings into Elasticsearch nodes from k8s secrets references # secureSettings: # - secretName: ref-to-secret # - secretName: another-ref-to-secret # # expose only a subset of the secret keys (optional) # entries: # - key: value1 # path: newkey # project a key to a specific path (optional) http: service: spec: # expose this cluster Service with a LoadBalancer type: NodePort # tls: # selfSignedCertificate: # add a list of SANs into the self-signed HTTP certificate subjectAltNames: # - ip: 192.168.1.2 # - ip: 192.168.1.3 # - dns: elasticsearch.dev.acme.com # - dns: localhost # certificate: # # provide your own certificate # secretName: elastic-tls-cert </code></pre> <p><strong>kubectl version</strong></p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;20&quot;, GitVersion:&quot;v1.20.4&quot;, GitCommit:&quot;e87da0bd6e03ec3fea7933c4b5263d151aafd07c&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2021-02-18T16:12:00Z&quot;, GoVersion:&quot;go1.15.8&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19+&quot;, GitVersion:&quot;v1.19.6-eks-49a6c0&quot;, GitCommit:&quot;49a6c0bf091506e7bafcdb1b142351b69363355a&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-12-23T22:10:21Z&quot;, GoVersion:&quot;go1.15.5&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p><strong>helm list</strong></p> <pre><code> NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION 
elastic-operator elastic-system 1 2021-04-26 11:18:02.286692269 +0100 BST deployed eck-operator-1.5.0 1.5.0 </code></pre> <p><strong>resources</strong></p> <pre><code>pod/elastic-operator-0 1/1 Running 0 4h58m 10.0.5.142 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-0 1/1 Running 0 9m5s 10.0.5.81 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-1 1/1 Running 0 9m5s 10.0.1.128 ip-10-0-1-207.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; pod/elasticsearch-prod-es-default-2 1/1 Running 0 9m5s 10.0.5.60 ip-10-0-5-71.us-east-2.compute.internal &lt;none&gt; &lt;none&gt; NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/elastic-operator-webhook ClusterIP 172.20.218.208 &lt;none&gt; 443/TCP 26h app.kubernetes.io/instance=elastic-operator,app.kubernetes.io/name=elastic-operator service/elasticsearch-prod-es-default ClusterIP None &lt;none&gt; 9200/TCP 9m5s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod,elasticsearch.k8s.elastic.co/statefulset-name=elasticsearch-prod-es-default service/elasticsearch-prod-es-http NodePort 172.20.229.173 &lt;none&gt; 9200:30604/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod service/elasticsearch-prod-es-transport ClusterIP None &lt;none&gt; 9300/TCP 9m6s common.k8s.elastic.co/type=elasticsearch,elasticsearch.k8s.elastic.co/cluster-name=elasticsearch-prod </code></pre> <p><strong>aws alb ingress controller</strong></p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: elastic-ingress namespace: elastic-system annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/group.name: &quot;&lt;redacted&gt;&quot; alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTP&quot;:80,&quot;HTTPS&quot;: 443}]' 
alb.ingress.kubernetes.io/certificate-arn: &lt;redacted&gt; alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev alb.ingress.kubernetes.io/healthcheck-path: /health alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300' alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=acme-aws-ingress-logs,access_logs.s3.prefix=dev-ingress spec: rules: - host: elasticsearch.dev.acme.com http: paths: - path: /* pathType: Prefix backend: service: name: elasticsearch-prod-es-http port: number: 9200 # - host: kibana.dev.acme.com # http: # paths: # - path: /* # pathType: Prefix # backend: # service: # name: kibana-prod-kb-http # port: # number: 5601 </code></pre>
Kay
<p>You have to disable HTTP SSL. To do this, modify the config/elasticsearch.yml file and change the associated variable to false:</p> <pre><code>xpack.security.http.ssl: enabled: false keystore.path: certs/http.p12 </code></pre>
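<p>Since the cluster in the question is managed by ECK, the HTTP-layer TLS setting is usually controlled through the Elasticsearch manifest rather than by editing elasticsearch.yml inside the container. A sketch, reusing the manifest from the question (field names as documented by ECK):</p>

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-prod
  namespace: elastic-system
spec:
  version: 7.12.0
  http:
    tls:
      selfSignedCertificate:
        # disables TLS on the HTTP layer entirely, so the ALB can
        # talk plain HTTP to port 9200
        disabled: true
```

<p>Keep in mind this leaves traffic between the load balancer and Elasticsearch unencrypted; terminating TLS at the ALB (as the certificate-arn annotation in the question suggests) is a common compromise.</p>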
Jorge Mora
<br> I am trying to set up the kuard demo app in the namespace example-ns exposed by nginx ingress. <br> Exposing it in the default namespace works but when I expose it in the namespace example-ns I get: <br> ```503 Service Temporarily Unavailable``` <p>These are the service, deployment, and ingress YAMLs I use for kuard:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: kuard namespace: example-ns spec: selector: matchLabels: app: kuard replicas: 1 template: metadata: labels: app: kuard spec: containers: - image: gcr.io/kuar-demo/kuard-amd64:1 imagePullPolicy: Always name: kuard ports: - containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: kuard namespace: example-ns spec: ports: - port: 80 targetPort: 8080 protocol: TCP selector: app: kuard --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: kuard namespace: example-ns annotations: kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt-prod&quot; nginx.ingress.kubernetes.io/auth-type: basic nginx.ingress.kubernetes.io/auth-secret: htpasswd nginx.ingress.kubernetes.io/auth-realm: &quot;Enter your credentials&quot; spec: tls: - hosts: - example.mydomain.dev secretName: quickstart-example-tls rules: - host: example.mydomain.dev http: paths: - path: / pathType: Prefix backend: service: name: kuard port: number: 80 </code></pre> <p>As you can see everything is in the same namespace and describing the ingress results in:</p> <pre><code>❯ kubectl describe ingress kuard -n example-ns Name: kuard Labels: &lt;none&gt; Namespace: example-ns Address: 192.168.69.1 Ingress Class: &lt;none&gt; Default backend: &lt;default&gt; TLS: quickstart-example-tls terminates example.mydomain.dev Rules: Host Path Backends ---- ---- -------- example.mydomain.dev / kuard:80 (10.69.58.226:8080) Annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx 
nginx.ingress.kubernetes.io/auth-realm: Enter your credentials nginx.ingress.kubernetes.io/auth-secret: htpasswd nginx.ingress.kubernetes.io/auth-type: basic Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CreateCertificate 28m cert-manager-ingress-shim Successfully created Certificate &quot;quickstart-example-tls&quot; Normal Sync 27m (x2 over 28m) nginx-ingress-controller Scheduled for sync Normal Sync 27m (x2 over 28m) nginx-ingress-controller Scheduled for sync </code></pre> <p>I also read about similar issues like <a href="https://stackoverflow.com/questions/51878195/kubernetes-cross-namespace-ingress-network">this</a>, but that solution is not working, as seen here.<br> Does anyone have an idea what's wrong here?<br> Thanks in advance!</p> <p><strong>SOLUTION:</strong></p> <p>I checked the logs of the ingress controller and saw that the auth secret was in the default namespace. That's why only pods from the default namespace were accessible. Moving the secret into the proper namespace solved the issue!</p>
Robert Fent
<p>First of all, you should not use the annotation <code>kubernetes.io/ingress.class</code> anymore, as it's deprecated. Instead use <code>.spec.ingressClassName</code> to refer to your desired Ingress Controller:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-myservicea spec: ingressClassName: nginx rules: ... </code></pre> <p>It seems that the Ingress in your desired Namespace can't sync with the Controller, so if there are any NetworkPolicies in your <code>example-ns</code> or in the Namespace where your Controller resides, back them up and delete them to make sure the connection isn't being blocked.</p> <p>Next, you should check the logs of the Ingress Controller itself; if the connection reaches it, you will surely see in the logs the reason why the Ingress resource doesn't work. Also, sharing your config for the Ingress Controller would be helpful.</p>
Mike
<p>I hope somebody can help me. I'm trying to pull a private docker image with no success. I already tried some solutions that I found, but without success.</p> <p>Docker, Gitlab, Gitlab-Runner, Kubernetes all run on the same server</p> <p>Insecure Registry</p> <pre><code>$ sudo cat /etc/docker/daemon.json { &quot;insecure-registries&quot;:[&quot;10.0.10.20:5555&quot;]} </code></pre> <p>Config.json</p> <pre><code>$ cat .docker/config.json { &quot;auths&quot;: { &quot;10.0.10.20:5555&quot;: { &quot;auth&quot;: &quot;NDUwNjkwNDcwODoxMjM0NTZzIQ==&quot; }, &quot;https://index.docker.io/v1/&quot;: { &quot;auth&quot;: &quot;NDUwNjkwNDcwODpGcGZHMXQyMDIyQCE=&quot; } } } </code></pre> <p>Secret</p> <pre><code>$ kubectl create secret generic regcred \ --from-file=.dockerconfigjson=~/.docker/config.json \ --type=kubernetes.io/dockerconfigjson </code></pre> <p>I'm trying to create a Kubernetes pod from a private docker image. However, I get the following error:</p> <pre><code>Name: private-reg Namespace: default Priority: 0 Node: 10.0.10.20 Start Time: Thu, 12 May 2022 12:44:22 -0400 Labels: &lt;none&gt; Annotations: &lt;none&gt; Status: Pending IP: 10.244.0.61 IPs: IP: 10.244.0.61 Containers: private-reg-container: Container ID: Image: 10.0.10.20:5555/development/app-image-base:latest Image ID: Port: &lt;none&gt; Host Port: &lt;none&gt; State: Waiting Reason: ErrImagePull Ready: False Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stjn4 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-stjn4: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s 
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BackOff 2m7s (x465 over 107m) kubelet Back-off pulling image &quot;10.0.10.20:5555/development/expedicao-api-image-base:latest&quot; Normal Pulling 17s (x3 over 53s) kubelet Pulling image &quot;10.0.10.20:5555/development/expedicao-api-image-base:latest&quot; Warning Failed 17s (x3 over 53s) kubelet Failed to pull image &quot;10.0.10.20:5555/development/expedicao-api-image-base:latest&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;10.0.10.20:5555/development/app-image-base:latest&quot;: failed to resolve reference &quot;10.0.10.20:5555/development/app-image-base:latest&quot;: failed to do request: Head &quot;https://10.0.10.20:5555/v2/development/app-image-base/manifests/latest&quot;: http: server gave HTTP response to HTTPS client Warning Failed 17s (x3 over 53s) kubelet Error: ErrImagePull Normal BackOff 3s (x2 over 29s) kubelet Back-off pulling image &quot;10.0.10.20:5555/development/expedicao-api-image-base:latest&quot; Warning Failed 3s (x2 over 29s) kubelet Error: ImagePullBackOff </code></pre> <p>When I pull the image directly in docker, no problem occurs even with the secret</p> <p>Pull image</p> <pre><code>$ docker login 10.0.10.20:5555 Username: 4506904708 Password: WARNING! Your password will be stored unencrypted in ~/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded $ docker pull 10.0.10.20:5555/development/app-image-base:latest latest: Pulling from development/app-image-base Digest: sha256:1385a8aa2bc7bac1a8d3e92ead66fdf5db3d6625b736d908d1fec61ba59b6bdc Status: Image is up to date for 10.0.10.20:5555/development/app-image-base:latest 10.0.10.20:5555/development/app-image-base:latest </code></pre> <p><strong>Can someone help me?</strong></p>
Jeysson Paiva
<p>First, you need to create a file in /etc/containerd/config.toml</p> <pre><code># Config file is parsed as version 1 by default. # To use the long form of plugin names set &quot;version = 2&quot; [plugins.cri.registry.mirrors] [plugins.cri.registry.mirrors.&quot;10.0.10.20:5555&quot;] endpoint = [&quot;http://10.0.10.20:5555&quot;] </code></pre> <p>Second, restart contained</p> <pre><code>$ systemctl restart containerd </code></pre>
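<p>Note that the snippet above is the legacy version 1 layout. On containerd releases that use the version 2 config format, the equivalent registry-mirror section looks roughly like this (plugin path per the containerd CRI plugin docs; the registry address is taken from the question):</p>

```toml
# /etc/containerd/config.toml
version = 2

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."10.0.10.20:5555"]
    # plain-HTTP endpoint, mirroring the "insecure-registries"
    # setting that was configured for the Docker daemon
    endpoint = ["http://10.0.10.20:5555"]
```

<p>Followed by the same <code>systemctl restart containerd</code>.</p>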
Jeysson Paiva
<p>I am new to Trino. I tried installing Trino on Kubernetes using the Helm chart available under trinodb/charts.</p> <p>The coordinator and worker pods come up fine, but I am unable to find the catalog location. I checked the Helm chart and it does not seem to have it defined anywhere either.</p> <p>How did others who used the Helm chart define and use new connectors?</p> <p>Any pointers?</p>
user3866620
<p>They added catalogs in their chart; have a look at this pull request: <a href="https://github.com/trinodb/charts/pull/1" rel="nofollow noreferrer">https://github.com/trinodb/charts/pull/1</a></p>
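<p>For example, with the <code>additionalCatalogs</code> value that the linked pull request introduced, extra catalogs can be declared in <code>values.yaml</code> as catalog-name-to-properties entries. A sketch — the PostgreSQL connection details below are placeholders, and if your chart version uses a different value name, check its <code>values.yaml</code>:</p>

```yaml
# values.yaml for the trinodb/charts Trino chart
additionalCatalogs:
  # each key becomes /etc/trino/catalog/<key>.properties
  # on the coordinator and workers
  postgres: |
    connector.name=postgresql
    connection-url=jdbc:postgresql://postgres.example.com:5432/mydb
    connection-user=trino
    connection-password=secret
```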
Shyam
<p>OAUTH2 is used for authentication and the OAUTH2 proxy is deployed in Kubernetes. When a request is received by the NGINX Ingress controller, it always routes the traffic to OAUTH proxy. The requirement is when the request contains a specific header (For example: abc) then those requests should be routed directly to the backend. Those shouldn't be routed to OAUTH proxy. Can this be done using some sort of an annotation in NGINX Ingress controller? Can we by pass those traffic going to OAUTH2?</p>
Container-Man
<p>You may want to have a look at <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#canary</a></p> <p>Let's say you have a normal Ingress:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend port: number: 80 </code></pre> <p>Set the header name and value for your desired backend on a second Ingress, with canary enabled.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-backend-header annotations: nginx.ingress.kubernetes.io/canary: &quot;true&quot; nginx.ingress.kubernetes.io/canary-by-header: sample-header nginx.ingress.kubernetes.io/canary-by-header-value: abc spec: ingressClassName: nginx rules: - host: XXX http: paths: - path: / pathType: Prefix backend: service: name: backend-with-header port: number: 80 </code></pre> <p>Now, every request with sample-header: abc routes to the second ingress/service. Any other value, e.g. sample-header: test, will route to the first ingress/service.</p>
Mike
<p>Sorry if this question might sound &quot;convoluted&quot; but here it goes...</p> <p>I'm currently designing a k8s solution based on Firecracker and Kata-containers. I'd like the environment to be as isolated/secure as possible. My thoughts around this are:</p> <ol> <li>deploy k8s masters as Firecracker nodes having API-server, Controller, Scheduler and etcd</li> <li>deploy k8s workers as Firecracker nodes having Kubelet, Kube-proxy and using Kata-containers + Firecracker for deployed workload. The workload will be a combination of MQTT cluster components and in-house developed FaaS components (probably using OpenFaaS)</li> </ol> <p>It's point 2 above which makes me feel a little awkward/convoluted. Am I over complicating things, introducing complexity which will cause problems related to (CNI) networking among worker nodes etc? Isolation and minimizing attack vectors are all important, but maybe I'm trying &quot;to be too much of a s.m.a.r.t.a.s.s&quot; here :)</p> <p>I really like the concept with Firecrackers microVM architecture with reduced security risks and reduced footprint and it would make for a wonderful solution to tenant isolation. However, am I better of to use another CRI-conforming runtime together with Kata for the actual workload being deployed on the workers?</p> <p>Many thanks in advance for your thoughts/comments on this!</p>
Kodo
<p>You might want to take a look at <a href="https://github.com/weaveworks-liquidmetal" rel="nofollow noreferrer">https://github.com/weaveworks-liquidmetal</a> and consider whether contributing to that would get you further towards your goal. Alternative runtimes (like Kata) for different workloads are welcomed in PRs. There is a liquid-metal Slack channel in the Weaveworks user group if you have any queries. Disclosure: I currently work at Weaveworks :)</p>
Slax
<p>I am deploying pgadmin and postgres on kubernetes. When I look at deployments I see that 2 deployments are not ready. When I look at the logs of Pgadmin, I see that it gives an error as it cannot connect to postgres. I use a configmap to connect pgadmin to postgres. When I look at the logs of postgres I see an error.</p> <p>Logs:</p> <pre><code>The files belonging to this database system will be owned by user &quot;postgres&quot;. This user must also own the server process. The database cluster will be initialized with locale &quot;en_US.utf8&quot;. The default database encoding has accordingly been set to &quot;UTF8&quot;. The default text search configuration will be set to &quot;english&quot;. Data page checksums are disabled. fixing permissions on existing directory /var/lib/postgresql/data ... ok creating subdirectories ... ok selecting dynamic shared memory implementation ... posix selecting default max_connections ... 20 selecting default shared_buffers ... 400kB selecting default time zone ... Etc/UTC creating configuration files ... ok Bus error (core dumped) child process exited with exit code 135 initdb: removing contents of data directory &quot;/var/lib/postgresql/data&quot; running bootstrap script ... 
</code></pre> <p>yaml file:</p> <pre><code>#configmap apiVersion: v1 kind: ConfigMap metadata: name: postgres-configmap data: db_url: postgres-service --- #postgres apiVersion: apps/v1 kind: Deployment metadata: name: postgres-deployment labels: app: postgres spec: replicas: 1 selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:13.3 ports: - containerPort: 5432 env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres-secret key: postgres-password --- apiVersion: v1 kind: Service metadata: name: postgres-service spec: selector: app: postgres ports: - protocol: TCP port: 5432 targetPort: 5432 --- #pgadmin apiVersion: apps/v1 kind: Deployment metadata: name: pgadmin-deployment labels: app: pgadmin spec: replicas: 1 selector: matchLabels: app: pgadmin template: metadata: labels: app: pgadmin spec: containers: - name: pgadmin image: dpage/pgadmin4 ports: - containerPort: 49762 env: - name: PGADMIN_DEFAULT_EMAIL value: [email protected] - name: PGADMIN_DEFAULT_PASSWORD value: password - name: PGADMIN_LISTEN_ADDRESS valueFrom: configMapKeyRef: name: postgres-configmap key: db_url --- apiVersion: v1 kind: Service metadata: name: pgadmin-service spec: selector: app: pgadmin type: LoadBalancer ports: - protocol: TCP port: 49762 targetPort: 49762 nodePort: 30001 </code></pre>
name46327
<p>After analysing the comments, it looks like the resources below have been helpful in solving this problem:</p> <ol> <li><a href="https://stackoverflow.com/questions/30848670/how-to-customize-the-configuration-file-of-the-official-postgresql-docker-image">How to customize the configuration file of the official PostgreSQL Docker image?</a></li> <li><a href="https://github.com/docker-library/postgres/issues/451#issuecomment-447472044" rel="nofollow noreferrer">https://github.com/docker-library/postgres/issues/451#issuecomment-447472044</a></li> </ol> <p>To sum up, editing the <code>/usr/share/postgresql/postgresql.conf.sample</code> file while postgres runs inside a container can be done by putting a custom <code>postgresql.conf</code> in a temporary file inside the container and overwriting the default configuration at runtime as described <a href="https://github.com/docker-library/postgres/issues/451#issuecomment-447472044" rel="nofollow noreferrer">here</a>. Also, keeping a dummy entry point script using &quot;play with kubernetes&quot; <a href="https://www.google.com/search?q=play%20with%20kubernetes&amp;rlz=1CAZVTZ_enPL954PL954&amp;ei=pd_VYKWEBeiOrwTs8JjwDg&amp;oq=play%20with%20kubernetes&amp;gs_lcp=Cgdnd3Mtd2l6EAMyBAgAEEMyAggAMgIIADICCAAyAggAMgIIADICCAAyAggAOgcIABBHELADSgQIQRgAULv6OVi7-jlgwvs5aAJwAngAgAFziAHTAZIBAzEuMZgBAKABAaoBB2d3cy13aXrIAQjAAQE&amp;sclient=gws-wiz&amp;ved=0ahUKEwjl6rKe97LxAhVox4sKHWw4Bu4Q4dUDCA4&amp;uact=5" rel="nofollow noreferrer">websites</a> and then spinning up the container or trying to copy the file to the container might be useful.</p>
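<p>A minimal sketch of that override approach in Kubernetes terms (the ConfigMap and volume names are made up; the official image passes extra container args straight through to <code>postgres</code>, so <code>-c config_file=...</code> points it at the mounted file):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-custom-conf
data:
  postgresql.conf: |
    max_connections = 20
    shared_buffers = 128MB
---
# relevant fragment of the Deployment's pod spec
spec:
  containers:
    - name: postgres
      image: postgres:13.3
      # read the mounted file instead of the generated default
      args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
      volumeMounts:
        - name: custom-conf
          mountPath: /etc/postgresql
  volumes:
    - name: custom-conf
      configMap:
        name: postgres-custom-conf
```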
Jakub Siemaszko
<p>I am unsure what the difference between &quot;plain calico&quot;</p> <pre><code>kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml </code></pre> <p>and the &quot;calico tigera&quot; (operator) is.</p> <pre><code>helm repo add projectcalico https://projectcalico.docs.tigera.io/charts helm install calico projectcalico/tigera-operator --version v3.24.1\ --create-namespace -f values.yaml --namespace tigera-operator </code></pre> <p>I only really need a CNI, ideally the least contorted. My impression is that the tigera is somehow a &quot;new extented version&quot; and it makes me sad to see suddenly a much fuller K8s cluster because of this (seems hence like mainly the devs of Calico wanted to get funding and needed to blow up the complexity for fame of their product, but I might be wrong hence the question)</p> <pre><code>root@cp:~# kubectl get all -A | grep -e 'NAMESPACE\|calico' NAMESPACE NAME READY STATUS RESTARTS AGE calico-apiserver pod/calico-apiserver-8665d9fcfb-6z7sv 1/1 Running 0 7m30s calico-apiserver pod/calico-apiserver-8665d9fcfb-95rlh 1/1 Running 0 7m30s calico-system pod/calico-kube-controllers-78687bb75f-ns5nj 1/1 Running 0 8m3s calico-system pod/calico-node-2q8h9 1/1 Running 0 7m43s calico-system pod/calico-typha-6d48dfd49d-p5p47 1/1 Running 0 7m47s calico-system pod/csi-node-driver-9gjc4 2/2 Running 0 8m4s NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE calico-apiserver service/calico-api ClusterIP 10.105.6.52 &lt;none&gt; 443/TCP 7m30s calico-system service/calico-kube-controllers-metrics ClusterIP 10.105.39.117 &lt;none&gt; 9094/TCP 8m3s calico-system service/calico-typha ClusterIP 10.102.152.6 &lt;none&gt; 5473/TCP 8m5s NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE calico-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 8m4s calico-system daemonset.apps/csi-node-driver 1 1 1 1 1 kubernetes.io/os=linux 8m4s NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE calico-apiserver 
deployment.apps/calico-apiserver 2/2 2 2 7m30s calico-system deployment.apps/calico-kube-controllers 1/1 1 1 8m3s calico-system deployment.apps/calico-typha 1/1 1 1 8m4s NAMESPACE NAME DESIRED CURRENT READY AGE calico-apiserver replicaset.apps/calico-apiserver-8665d9fcfb 2 2 2 7m30s calico-system replicaset.apps/calico-kube-controllers-78687bb75f 1 1 1 8m3s calico-system replicaset.apps/calico-typha-588b4ff644 0 0 0 8m4s calico-system replicaset.apps/calico-typha-6d48dfd49d 1 1 1 7m47s </code></pre>
humanityANDpeace
<p>A CNI is a small network plugin that is used for allocating IP addresses, while Calico installed via the Tigera operator is responsible for the whole Kubernetes networking stack, connecting nodes and services.</p>
Amirhossein eidy
<p>Is it possible to list the kubernetes CNI and pod-network-cidr details used on kubernetes cluster? Preferably using <code>kubectl</code>.</p>
Logu
<p>In addition to the previous answer, you can use:</p> <pre><code>kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' </code></pre> <p>to get the pod CIDR addresses for each of the nodes in your cluster.</p>
Jakub Siemaszko
<p>I'm learning k8s, I found an example in the MS docs. The problem I'm having is that I want to switch what GITHUB repo thats being used. I havent been able to figure out the path within this yaml example</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: azure-vote-back spec: replicas: 1 selector: matchLabels: app: azure-vote-back template: metadata: labels: app: azure-vote-back spec: nodeSelector: &quot;kubernetes.io/os&quot;: linux containers: - name: azure-vote-back image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 env: - name: ALLOW_EMPTY_PASSWORD value: &quot;yes&quot; resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 256Mi ports: - containerPort: 6379 name: redis --- apiVersion: v1 kind: Service metadata: name: azure-vote-back spec: ports: - port: 6379 selector: app: azure-vote-back --- apiVersion: apps/v1 kind: Deployment metadata: name: azure-vote-front spec: replicas: 1 selector: matchLabels: app: azure-vote-front template: metadata: labels: app: azure-vote-front spec: nodeSelector: &quot;kubernetes.io/os&quot;: linux containers: - name: azure-vote-front image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 256Mi ports: - containerPort: 80 env: - name: REDIS value: &quot;azure-vote-back&quot; --- apiVersion: v1 kind: Service metadata: name: azure-vote-front spec: type: LoadBalancer ports: - port: 80 selector: app: azure-vote-front </code></pre>
user770022
<p>This YAML example doesn't have a Github Repo field at all. That's why you can't find a path.</p> <p>If you're trying to change the container image source, it has to be from a container registry (or your own filesystem), which is located at</p> <pre><code>containers: image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 </code></pre> <p>where mcr.microsoft.com is the container registry.</p> <p>You won't be able to connect this directly to a Github Repository, but any container registry will work, and I believe Github has one at <a href="https://ghcr.io" rel="nofollow noreferrer">https://ghcr.io</a> (that link itself will direct you back to Github)</p>
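<p>For instance, if you pushed your own build of the frontend to GitHub's registry, only the <code>image</code> field would change. A sketch — the repository path below is hypothetical:</p>

```yaml
containers:
  - name: azure-vote-front
    # pulled from GitHub Container Registry instead of MCR;
    # a private image additionally needs an imagePullSecrets
    # entry in the pod spec
    image: ghcr.io/your-github-user/azure-vote-front:v1
```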
Paradoc
<p>I made a persistent volume claim on Kubernetes to save MongoDB data. After restarting the deployment, I found that the data no longer exists, although my PVC is in the Bound state.</p> <p><strong>here is my yaml file</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: auth-mongo-depl spec: replicas: 1 selector: matchLabels: app: auth-mongo template: metadata: labels: app: auth-mongo spec: volumes: - name: auth-mongo-data persistentVolumeClaim: claimName: auth-mongo-pvc containers: - name: auth-mongo image: mongo ports: - containerPort: 27017 name: 'auth-mongo-port' volumeMounts: - name: auth-mongo-data mountPath: '/data/db' --- # Persistent Volume Claim apiVersion: v1 kind: PersistentVolumeClaim metadata: name: auth-mongo-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi </code></pre> <p>I also made a ClusterIP service for the deployment</p>
MohamedSalah97
<p>First off, if the PVC status is still <code>Bound</code> and the desired pod happens to start on another node, it will fail as the PV can't be mounted into the pod. This happens because of the <code>reclaimPolicy: Retain</code> of the StorageClass (it can also be set on the PV directly via <code>persistentVolumeReclaimPolicy: Retain</code>). In order to fix this, you have to manually overwrite/delete the <code>claimRef</code> of the PV. Use <code>kubectl patch pv PV_NAME -p '{&quot;spec&quot;:{&quot;claimRef&quot;: null}}'</code> to do this; after doing so, the PV's status should be <code>Available</code>.</p> <p>In order to see if your application writes any data to the desired path, run your application and exec into it (<code>kubectl -n NAMESPACE exec -it POD_NAME -- /bin/sh</code>) and check your <code>/data/db</code>. You could also create a file with some random text, restart your application and check again.</p> <p>I'm fairly certain that if your PV isn't being recreated every time your application starts (which shouldn't be the case, because of <code>Retain</code>), then it's highly likely that your application isn't writing to the specified path. But you could also share your <code>PersistentVolume</code> config with us, as there might be some misconfiguration there as well.</p>
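<p>For reference, a StorageClass sketch showing where the reclaim policy lives — the name and provisioner are placeholders; on a manually created PV the equivalent field is <code>persistentVolumeReclaimPolicy</code>:</p>

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-retain            # example name
provisioner: kubernetes.io/no-provisioner  # placeholder, use your cluster's provisioner
reclaimPolicy: Retain           # keep the PV (and its data) after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer
```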
Mike
<p>I have a custom application helm chart with an ingress object which is deployed in production. Now I need to migrate the ingress source code object from the helm chart to terraform to give control over the object to another team. Technically no problem with accepting a downtime. But I want to keep the ingress object from being undeployed by the helm chart during deployment as there is a letsencrypt certificate attached to it.</p> <p>So is there a possibility to tell helm to keep the ingress object when I remove the ingress in the source of the helm chart during helm upgrade?</p>
Uwe Bartels
<p>Found the answer myself in the helm annotations. <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource" rel="nofollow noreferrer">https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource</a></p> <p>That means you deploy the ingress again via the helm chart with the annotation &quot;helm.sh/resource-policy&quot;: keep. Then you remove the ingress from the helm chart and redeploy. Now the ingress is still deployed in kubernetes, but no longer under the control of the helm release. Next step is to model/code the ingress in terraform and import the resource via terraform import. Last step is to check with <code>terraform plan</code> whether the imported resource corresponds completely with the coded ingress in terraform.</p> <p>That's it.</p>
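As a sketch, the annotation from that doc sits in the chart's ingress template metadata before you remove the ingress from the chart (the ingress name here is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress            # illustrative; keep your chart's real name
  annotations:
    # Tells Helm to leave this object in the cluster instead of deleting it
    # when it disappears from the chart.
    "helm.sh/resource-policy": keep
# ...rest of the ingress spec stays exactly as it was...
```

After a `helm upgrade` with this annotation in place, removing the ingress from the chart and upgrading again leaves the object (and its letsencrypt certificate) untouched in the cluster.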
Uwe Bartels
<p>I'm writing a helm chart where I need to supply a <code>nfs.server</code> value for the volume mount from the <code>ConfigMap</code> (<strong>efs-url</strong> in the example below).</p> <p>There are examples in the docs on how to pass the value from the <code>ConfigMap</code> to env variables or even mount <code>ConfigMaps</code>. I understand how I can pass this value from the <code>values.yaml</code> but I just can't find an example on how it can be done using a <code>ConfigMap</code>.</p> <p>I have control over this <code>ConfigMap</code> so I can reformat it as needed.</p> <ol> <li>Am I missing something very obvious?</li> <li>Is it even possible to do?</li> <li>If not, what are the possible workarounds?</li> </ol> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: efs-url data: url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com --- kind: Deployment apiVersion: extensions/v1beta1 metadata: name: efs-provisioner spec: replicas: 1 strategy: type: Recreate template: metadata: labels: app: efs-provisioner spec: containers: - name: efs-provisioner image: quay.io/external_storage/efs-provisioner:latest env: - name: FILE_SYSTEM_ID valueFrom: configMapKeyRef: name: efs-provisioner key: file.system.id - name: AWS_REGION valueFrom: configMapKeyRef: name: efs-provisioner key: aws.region - name: PROVISIONER_NAME valueFrom: configMapKeyRef: name: efs-provisioner key: provisioner.name volumeMounts: - name: pv-volume mountPath: /persistentvolumes volumes: - name: pv-volume nfs: server: &lt;&lt;&lt; VALUE SHOULD COME FROM THE CONFIG MAP &gt;&gt;&gt; path: / </code></pre>
Valera Maniuk
<p>Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap</p> <blockquote> <p>is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.</p> </blockquote> <p>To read more about ConfigMaps and how they can be utilized, one can visit the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">&quot;ConfigMaps&quot; section</a> and the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">&quot;Configure a Pod to Use a ConfigMap&quot; section</a>.</p>
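Since the `nfs.server` field of a volume can't reference a ConfigMap key, a common workaround in a Helm chart is to feed the value through `values.yaml` instead. A rough sketch (names are illustrative):

```yaml
# templates/deployment.yaml -- volumes section only (sketch).
# .Values.efs.url would be defined in values.yaml, e.g.
#   efs:
#     url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
# or passed at install time with --set efs.url=...
volumes:
  - name: pv-volume
    nfs:
      server: {{ .Values.efs.url }}
      path: /
```

This keeps the value in one place (the chart's values) at the cost of no longer sharing it with other consumers via the ConfigMap.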
Jakub Siemaszko
<p>I have an Openshift 3 Cluster containing the two following containers: selenium-hub and selenium-node-chrome. Please see below the attached deployment and service yaml files.</p> <p><strong>Hub Deployment:</strong></p> <pre><code>apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: labels: app: selenium-hub selenium-hub: master name: selenium-hub spec: replicas: 1 selector: type: selenium-hub template: metadata: labels: type: selenium-hub name: selenium-hub spec: containers: - image: 'selenium/hub:latest' imagePullPolicy: IfNotPresent name: master ports: - containerPort: 4444 protocol: TCP - containerPort: 4442 protocol: TCP - containerPort: 4443 protocol: TCP triggers: - type: ConfigChange </code></pre> <p><strong>Hub Service:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: selenium-hub selenium-hub: master name: selenium-hub spec: ports: - name: selenium-hub port: 4444 protocol: TCP targetPort: 4444 - name: publish port: 4442 protocol: TCP targetPort: 4442 - name: subscribe port: 4443 protocol: TCP targetPort: 4443 selector: type: selenium-hub type: ClusterIP </code></pre> <p><strong>Node Deployment:</strong></p> <pre><code>apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: labels: app: selenium-node-chrome name: selenium-node-chrome spec: replicas: 1 revisionHistoryLimit: 10 selector: browser: chrome template: metadata: labels: app: node-chrome browser: chrome name: selenium-node-chrome-master spec: containers: - env: - name: SE_EVENT_BUS_HOST value: selenium-hub - name: SE_EVENT_BUS_PUBLISH_PORT value: '4442' - name: SE_EVENT_BUS_SUBSCRIBE_PORT value: '4443' - name: SE_NODE_HOST value: node-chrome - name: SE_NODE_PORT value: '5555' image: 'selenium/node-chrome:4.0.0-20211102' imagePullPolicy: IfNotPresent name: master ports: - containerPort: 5555 protocol: TCP triggers: - type: ConfigChange </code></pre> <p><strong>Node Service:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app: 
selenium-node-chrome name: selenium-node-chrome spec: ports: - name: node-port port: 5555 protocol: TCP targetPort: 5555 - name: node-port-grid port: 4444 protocol: TCP targetPort: 4444 selector: browser: chrome type: ClusterIP </code></pre> <p><strong>My Issue:</strong> The hub and the node are starting, but the node just keeps sending the registration event and the hub is logging some info, which I don't really understand. Please see the logs attached below.</p> <p><strong>Node Log:</strong></p> <pre><code>Setting up SE_NODE_GRID_URL... Selenium Grid Node configuration: [events] publish = &quot;tcp://selenium-hub:4442&quot; subscribe = &quot;tcp://selenium-hub:4443&quot; [server] host = &quot;node-chrome&quot; port = &quot;5555&quot; [node] session-timeout = &quot;300&quot; override-max-sessions = false detect-drivers = false max-sessions = 1 [[node.driver-configuration]] display-name = &quot;chrome&quot; stereotype = '{&quot;browserName&quot;: &quot;chrome&quot;, &quot;browserVersion&quot;: &quot;95.0&quot;, &quot;platformName&quot;: &quot;Linux&quot;}' max-sessions = 1 Starting Selenium Grid Node... 
11:34:31.635 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding 11:34:31.643 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing 11:34:31.774 INFO [UnboundZmqEventBus.&lt;init&gt;] - Connecting to tcp://selenium-hub:4442 and tcp://selenium-hub:4443 11:34:31.843 INFO [UnboundZmqEventBus.&lt;init&gt;] - Sockets created 11:34:32.854 INFO [UnboundZmqEventBus.&lt;init&gt;] - Event bus ready 11:34:33.018 INFO [NodeServer.createHandlers] - Reporting self as: http://node-chrome:5555 11:34:33.044 INFO [NodeOptions.getSessionFactories] - Detected 1 available processors 11:34:33.115 INFO [NodeOptions.report] - Adding chrome for {&quot;browserVersion&quot;: &quot;95.0&quot;,&quot;browserName&quot;: &quot;chrome&quot;,&quot;platformName&quot;: &quot;Linux&quot;,&quot;se:vncEnabled&quot;: true} 1 times 11:34:33.130 INFO [Node.&lt;init&gt;] - Binding additional locator mechanisms: name, relative, id 11:34:33.471 INFO [NodeServer$1.start] - Starting registration process for node id 2832e819-cf31-4bd9-afcc-cd2b27578d58 11:34:33.473 INFO [NodeServer.execute] - Started Selenium node 4.0.0 (revision 3a21814679): http://node-chrome:5555 11:34:33.476 INFO [NodeServer$1.lambda$start$1] - Sending registration event... 11:34:43.479 INFO [NodeServer$1.lambda$start$1] - Sending registration event... 11:34:53.481 INFO [NodeServer$1.lambda$start$1] - Sending registration event... 
</code></pre> <p><strong>Hub Log:</strong></p> <pre><code>2021-12-07 11:14:22,663 INFO spawned: 'selenium-grid-hub' with pid 11 2021-12-07 11:14:23,664 INFO success: selenium-grid-hub entered RUNNING state, process has stayed up for &gt; than 0 seconds (startsecs) 11:14:23.953 INFO [LoggingOptions.configureLogEncoding] - Using the system default encoding 11:14:23.961 INFO [OpenTelemetryTracer.createTracer] - Using OpenTelemetry for tracing 11:14:24.136 INFO [BoundZmqEventBus.&lt;init&gt;] - XPUB binding to [binding to tcp://*:4442, advertising as tcp://XXXXXXX:4442], XSUB binding to [binding to tcp://*:4443, advertising as tcp://XXXXXX:4443] 11:14:24.246 INFO [UnboundZmqEventBus.&lt;init&gt;] - Connecting to tcp://XXXXXX:4442 and tcp://XXXXXXX:4443 11:14:24.275 INFO [UnboundZmqEventBus.&lt;init&gt;] - Sockets created 11:14:25.278 INFO [UnboundZmqEventBus.&lt;init&gt;] - Event bus ready 11:14:26.232 INFO [Hub.execute] - Started Selenium Hub 4.1.0 (revision 87802e897b): http://XXXXXXX:4444 11:14:46.965 INFO [Node.&lt;init&gt;] - Binding additional locator mechanisms: name, relative, id 11:15:46.916 INFO [Node.&lt;init&gt;] - Binding additional locator mechanisms: relative, name, id 11:17:52.377 INFO [Node.&lt;init&gt;] - Binding additional locator mechanisms: relative, id, name </code></pre> <p>Can anyone tell me why the hub won't register the node? If you need any further information, let me know. Thanks a lot</p>
Jakob Wilmesmeier
<p>So, a bit late, but I had this same issue - the <code>docker-compose</code> example gave me selenium-hub as the host, which is correct in that scenario as it points towards the container defined by the selenium-hub service. However, in Kubernetes, inter-pod communication needs to go via a Service. There are multiple kinds of Service, but in order to access it from inside the cluster, it's easiest in this case to use a ClusterIP (<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">docs here for more info</a>).</p> <p>The way I resolved it was to have a Service for both the ports that the event bus uses:</p> <ul> <li>bus-publisher (port 4442)</li> <li>bus-subscription (port 4443)</li> </ul> <p>In a manifest yaml, this looks like:</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: app-name: selenium name: bus-sub namespace: selenium spec: ports: - port: 4443 protocol: TCP targetPort: 4443 selector: app: selenium-hub type: ClusterIP </code></pre>
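For completeness, the publisher side would presumably be the same manifest with the name and port swapped (assuming the same labels and selector as the `bus-sub` Service above):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app-name: selenium
  name: bus-pub
  namespace: selenium
spec:
  ports:
  - port: 4442          # event-bus publish port
    protocol: TCP
    targetPort: 4442
  selector:
    app: selenium-hub   # same hub pods as bus-sub
  type: ClusterIP
```

The node's `SE_EVENT_BUS_HOST` then points at whichever Service name fronts the hub so registration events can reach it on both ports.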
Sierra1011
<p>I have a Kubernetes cluster with an nginx ingress controller. The nginx controller often randomly reloads <a href="https://i.stack.imgur.com/KXTLg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KXTLg.png" alt="enter image description here" /></a></p> <p>On this cluster i've now deployed a web socket server. And it seems like every time the nginx ingress controller restarts, the web socket connection also gets closed. I can't seem to find out why the nginx ingress controller is restarting and how I can prevent this from happening or can prevent the web socket from closing the connection.</p> <p>Does anyone know what I can do to keep the web socket connections alive?</p> <p>Cloud provider: Google Kubernetes Engine</p> <p>Kubernetes Version: 1.18.20-gke.501</p> <p>Application Package Manager: Helm v3</p> <p>YAML of the nginx-ingress-controller (We haven't changed anything in this YAML, I reckon it's a default configuration):</p> <pre><code>apiVersion: v1 kind: Service metadata: creationTimestamp: &quot;2018-06-19T14:21:41Z&quot; finalizers: - service.kubernetes.io/load-balancer-cleanup labels: app: nginx-ingress chart: nginx-ingress-0.12.2 component: controller heritage: Tiller release: nginx-ingress-1 name: nginx-ingress-1-controller namespace: loadbalancers resourceVersion: &quot;277233411&quot; selfLink: /api/v1/namespaces/loadbalancers/services/nginx-ingress-1-controller uid: 278d0e86-73cc-22e8-4567-35725aa57fef spec: clusterIP: (ip-goes-here) externalTrafficPolicy: Cluster ports: - name: http nodePort: 32269 port: 80 protocol: TCP targetPort: 80 - name: https nodePort: 32258 port: 443 protocol: TCP targetPort: 443 selector: app: nginx-ingress component: controller release: nginx-ingress-1 sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: (ip-goes-here) </code></pre> <p>Websocket deployment:</p> <p>deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ template 
&quot;websocket-server.fullname&quot; . }} labels: app: {{ template &quot;websocket-server.name&quot; . }} chart: {{ template &quot;websocket-server.chart&quot; . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: {{ template &quot;websocket-server.name&quot; . }} release: {{ .Release.Name }} template: metadata: labels: app: {{ template &quot;websocket-server.name&quot; . }} release: {{ .Release.Name }} spec: containers: - name: {{ .Chart.Name }} envFrom: - configMapRef: name: env-vars image: &quot;{{ .Values.image.repository }}:{{ .Values.image.tag }}&quot; imagePullPolicy: {{ .Values.image.pullPolicy }} ports: - name: http containerPort: 8001 protocol: TCP resources: {{ toYaml .Values.resources | indent 12 }} {{- with .Values.nodeSelector }} nodeSelector: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{ toYaml . | indent 8 }} {{- end }} # Get credentials of the docker # registry from secret imagePullSecrets: - name: {{ .Values.image.imagePullSecrets }} </code></pre> <p>values.yaml:</p> <pre><code>replicaCount: 1 image: repository: eu.gcr.io/test-234231/websocket-server tag: latest pullPolicy: Always imagePullSecrets: test service: type: ClusterIP port: 8001 ingress: enabled: true path: / hosts: - ws.domainhere.com annotations: kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: &quot;true&quot; tls: - secretName: tls-certificate hosts: - ws.domainhere.com resources: # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
limits: cpu: 100m memory: 150Mi requests: cpu: 50m memory: 100Mi nodeSelector: {} tolerations: [] affinity: {} envVars: </code></pre> <p>ingress.yaml</p> <pre><code>{{- if .Values.ingress.enabled -}} {{- $fullName := include &quot;websocket-server.fullname&quot; . -}} {{- $ingressPath := .Values.ingress.path -}} apiVersion: extensions/v1beta1 kind: Ingress metadata: name: {{ $fullName }} labels: app: {{ template &quot;websocket-server.name&quot; . }} chart: {{ template &quot;websocket-server.chart&quot; . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} {{- with .Values.ingress.annotations }} annotations: {{ toYaml . | indent 4 }} {{- end }} spec: {{- if .Values.ingress.tls }} tls: {{- range .Values.ingress.tls }} - hosts: {{- range .hosts }} - {{ . }} {{- end }} secretName: {{ .secretName }} {{- end }} {{- end }} rules: {{- range .Values.ingress.hosts }} - host: {{ . }} http: paths: - path: {{ $ingressPath }} backend: serviceName: {{ $fullName }} servicePort: http {{- end }} {{- end }} </code></pre> <p>Dockerfile</p> <pre><code>FROM python:3.8 EXPOSE 8001 ENV PYTHONUNBUFFERED 1 RUN groupadd -r tornado \ &amp;&amp; useradd -r -g tornado tornado # Requirements have to be pulled and installed here, otherwise caching won't work COPY ./requirements.txt requirements.txt RUN pip install --no-cache-dir -r requirements.txt &amp;&amp; rm -rf /requirements COPY . /app RUN chown -R tornado /app USER tornado WORKDIR /app ENTRYPOINT [&quot;python3&quot;, &quot;asyncio_server.py&quot;] </code></pre> <p>I've tried setting keep-alive to &quot;0&quot; in the ConfigMap of the loadbalancer on staging. After this, however, the issue still persists. 
The logs look as following:</p> <pre><code>I0817 09:41:54.852526 6 event.go:218] Event(v1.ObjectReference{Kind:&quot;ConfigMap&quot;, Namespace:&quot;loadbalancers&quot;, Name:&quot;nginx-ingress-1-controller&quot;, UID:&quot;278d0e86-73cc-22e8-4567-35725aa57fef&quot;, APIVersion:&quot;v1&quot;, ResourceVersion:&quot;463499015&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'UPDATE' ConfigMap loadbalancers/nginx-ingress-1-controller I0817 09:41:54.922325 6 controller.go:183] backend reload required I0817 09:41:55.578489 6 controller.go:192] ingress backend successfully reloaded... I0817 09:44:16.384627 6 controller.go:183] backend reload required I0817 09:44:17.134365 6 controller.go:192] ingress backend successfully reloaded... I0817 09:49:08.813187 6 controller.go:183] backend reload required I0817 09:49:09.478537 6 controller.go:192] ingress backend successfully reloaded... </code></pre> <p>In another environment where we get more traffic the logs look like this:</p> <pre><code>I0817 09:52:46.314745 6 controller.go:183] backend reload required I0817 09:52:48.656831 6 controller.go:192] ingress backend successfully reloaded... 
I0817 09:52:48.664464 6 controller.go:183] backend reload required 2021/08/17 09:52:49 [error] 550545#550545: *4035841 connect() failed (111: Connection refused) while connecting to upstream, client: 10.4.3.1, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 10.4.3.1 - [10.4.3.1] - - [17/Aug/2021:09:52:49 +0000] &quot;POST host HTTP/1.1&quot; 200 12 &quot;-&quot; &quot;&quot; 1583 0.027 [app-80] xx.xxx.x.xx:5000, xx.xxx.x.xx:5000 0, xx.xxx.x.xx, 0.026 502, 200 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: 
*4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: 
Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035848 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 10.4.3.1 - [10.4.3.1] - - [17/Aug/2021:09:52:50 +0000] &quot;POST host HTTP/1.1&quot; 200 12 &quot;-&quot; &quot;&quot; 1588 0.107 [app-80] xx.xxx.x.xx:5000 xx.xxx.x.xx 200 xx.xxx.x.xx - [xx.xxx.x.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;POST host HTTP/1.1&quot; 200 12 &quot;-&quot; &quot;&quot; 1644 0.155 [app-80] xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 200 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, 
request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host 
HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; 2021/08/17 09:52:50 [error] 550545#550545: *4035867 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.x.xx, server: client.company.com, request: &quot;POST host HTTP/1.1&quot;, upstream: &quot;host&quot;, host: &quot;client.company.com&quot; xx.xx.xxx.xx - [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;POST host HTTP/1.1&quot; 200 12 &quot;-&quot; &quot;&quot; 1527 0.099 [app-80] xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, xx.xxx.x.xx:5000, xx.xxx.x.xx:5000 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12 0.000, 0.001, 0.006, 0.002, 0.001, 0.001, 0.004, 0.001, 0.001, 0.001, 0.081 502, 502, 502, 502, 502, 502, 502, 502, 502, 502, 200 xx.xx.xxx.xx - [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;GET /?user_preferred_language=pl HTTP/2.0&quot; 200 605 &quot;host&quot; &quot;Mozilla/5.0 (Linux; Android 11; SAMSUNG SM-A025G) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/14.2 Chrome/87.0.4280.141 Mobile Safari/537.36&quot; 408 0.002 [app-http] xx.xxx.x.xx:80 1085 0.002 200 xx.xx.xxx.xx - [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;GET /api HTTP/1.1&quot; 301 0 &quot;-&quot; &quot;python-requests/2.21.0&quot; 266 0.002 [second-app-80] xx.x.x.xx:5000 0 0.002 301 xx.xx.xxx.xx - [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;GET /api HTTP/1.1&quot; 301 0 &quot;-&quot; &quot;python-requests/2.21.0&quot; 266 0.003 [second-app-80] xx.x.x.xx:5000 0 0.003 301 xx.xx.xxx.xx - [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;GET /api HTTP/1.1&quot; 200 231 &quot;-&quot; &quot;python-requests/2.21.0&quot; 267 0.037 [second-app-80] xx.x.x.xx:5000 277 0.037 200 xx.xx.xxx.xx 
- [xx.xx.xxx.xx] - - [17/Aug/2021:09:52:50 +0000] &quot;GET /api HTTP/1.1&quot; 200 231 &quot;-&quot; &quot;python-requests/2.21.0&quot; 267 0.039 [second-app-80] xx.x.x.xxx:5000 277 0.039 200 I0817 09:52:51.752194 6 controller.go:192] ingress backend successfully reloaded... </code></pre>
Jeroen Beljaars
<p>Whenever you create a new ingress with ingress-nginx, a new location is added to the Nginx config file in the ingress-nginx pod. Whenever the Nginx config file changes, all nginx workers restart so the new config file can be used. The old workers keep serving existing requests for some time (<code>worker-shutdown-timeout</code>) while all new requests go to the new workers. After <code>worker-shutdown-timeout</code> the old workers stop, and any WebSocket connection served by those workers is dropped.</p> <p><code>worker-shutdown-timeout</code> can be increased so your existing connections won't be closed during that window, but if there are many changes to the Nginx config file, old workers will stay in a shutting-down state for that long while new workers are created. Eventually, your pod may be evicted with an OOM error.</p>
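The timeout can be raised via the controller's ConfigMap. A hedged sketch, reusing the ConfigMap name visible in the question's logs (the key is `worker-shutdown-timeout` in current ingress-nginx releases; the value here is illustrative and trades memory for connection longevity):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-1-controller   # must match what the controller's --configmap flag points at
  namespace: loadbalancers
data:
  # Keep old workers (and the WebSocket connections they hold)
  # alive this long after each config reload before killing them.
  worker-shutdown-timeout: "240s"
```

Note this only delays the drop; for truly long-lived sockets the client should also reconnect gracefully when the connection closes.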
Aman Singhvi
<p>Hello amazing community.</p> <p>I hope somebody can help me understand this concept and build a strong understanding of Kubernetes networking in Azure.</p> <p>I have an Azure Kubernetes cluster. Network type (plugin): Azure CNI; network policy: Azure</p> <p>I just ran the following yaml files to deploy 2 pods running ubuntu:</p> <p>ubuntu-app-a</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ubuntu-app-a-deployment labels: app: ubuntu-app-a spec: replicas: 1 selector: matchLabels: app: ubuntu-app-a template: metadata: labels: app: ubuntu-app-a spec: containers: - name: ubuntu-app-b image: ubuntu command: [&quot;/bin/sh&quot;] args: [&quot;-c&quot;, &quot;sleep infinity&quot;] restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: ubuntu-app-a-clusterip-service labels: app: ubuntu-app-a spec: type: ClusterIP selector: app: ubuntu-app-a ports: - port: 80 targetPort: 80 --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ubuntu-app-b-network spec: podSelector: matchLabels: app: ubuntu-app-b ingress: - from: - podSelector: matchLabels: app: ubuntu-app-a </code></pre> <p>ubuntu-app-b</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ubuntu-app-b-deployment labels: app: ubuntu-app-b spec: replicas: 1 selector: matchLabels: app: ubuntu-app-b template: metadata: labels: app: ubuntu-app-b spec: containers: - name: ubuntu-app-b image: ubuntu command: [&quot;/bin/sh&quot;] args: [&quot;-c&quot;, &quot;sleep infinity&quot;] restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: ubuntu-app-b-clusterip-service labels: app: ubuntu-app-b spec: type: ClusterIP selector: app: ubuntu-app-b ports: - port: 80 targetPort: 80 </code></pre> <p>Once both pods were up and running, I exec'd into <code>app-a</code> and ran <code>ping</code> against <code>app-b</code>.</p> <p>So far everything is good and it is what I am expecting. 
All the pods can communicate with each other in the same vnet and namespace.</p> <p>But I was curious about restricting the internal communication of those pods, using something like a network security group to control the inbound and outbound traffic.</p> <p>For example, I would like only app-a to be able to ping app-b and not the other way around.</p> <p>I have been reading the Microsoft documentation but got confused, as it advises not to play with the subnets because this could cause internal issues in the cluster.</p> <p>So I was wondering how I can set up more restrictive internal communication in my cluster?</p> <p>If anyone can help me with this or point me in the right direction I will be grateful.</p> <p>Thank you so much in advance</p>
Nayden Van
<p>By default in Kubernetes all network traffic is allowed, so every pod can communicate with every other pod/service within the same cluster. NetworkPolicies are essentially firewalls used to restrict traffic between pods. NetworkPolicies are enforced by the CNI (Container Network Interface), which in your case is Azure CNI, and Azure offers two different policy enforcement engines, Azure NPM and Calico (which one you need depends on your use case, but those are rather &quot;special&quot; cases most of the time). The only notable difference is that Calico supports global network policies whereas Azure NPM only supports namespace-wide policies.</p> <p>As with most objects in Kubernetes, NetworkPolicies work with selectors, i.e. they define which pods are targeted.</p> <hr /> <p>As a best practice, all traffic should be denied by default (at least if you have several teams or a big cluster running). You can achieve this by creating the following resource in every application namespace.</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: default-deny-all spec: podSelector: {} policyTypes: - Ingress - Egress </code></pre> <p>With this in place no pods would be able to communicate with each other, so your <code>ping</code> should fail.</p> <hr /> <p>Once you've blocked all ingress and egress traffic, you can explicitly allow traffic between the desired pods. Now we have to allow Pod A egress traffic to Pod B, and allow ingress traffic from Pod A in Pod B. You were already on the right path with your Network Policy. 
The following should work as desired.</p> <pre><code>--- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ubuntu-app-b-network spec: podSelector: matchLabels: app: ubuntu-app-b ingress: - from: - podSelector: matchLabels: app: ubuntu-app-a --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: ubuntu-app-a-network spec: podSelector: matchLabels: app: ubuntu-app-a egress: - to: - podSelector: matchLabels: app: ubuntu-app-b </code></pre> <p>Let me know if this works for you; if not, I'll quickly recreate your situation.</p>
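One caveat with the default-deny approach: it also blocks DNS lookups from app-a, so name resolution fails even when the app-a to app-b traffic itself is allowed. A policy along these lines restores DNS — the `kube-system` namespace label and the `k8s-app: kube-dns` pod label are the usual defaults, but verify them in your cluster before relying on this sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}            # every pod in this namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```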
Mike
<p>I am using Google's Cloud Code extension with Visual Studio Code to use GCP's Cloud Build and deploy to a local kubernetes cluster (Docker Desktop). I have directed Cloud Build to run unit tests after installing modules.</p> <p>When I build using the command line <code>gcloud beta builds submit</code>, the Cloud Build does the module install and successfully fails to build because I intentionally wrote a failing unit test. So that's great.</p> <p>However, when I try to build and deploy using the Cloud Code extension, it is not using my cloudbuild.yaml at all. I know this because:</p> <p>1.) The build succeeds even with the failing unit test</p> <p>2.) No logging from the unit test appears in GCP logging</p> <p>3.) I completely deleted cloudbuild.yaml and the build / deploy still succeeded, which seems to imply Cloud Code is using Dockerfile</p> <p>What do I need to do to ensure Cloud Code uses cloudbuild.yaml for its build/deploy to a local instance of kubernetes?</p> <p>Thanks!</p> <p><strong>cloudbuild.yaml</strong></p> <pre class="lang-yaml prettyprint-override"><code>steps: - name: node entrypoint: npm args: ['install'] - id: &quot;test&quot; name: node entrypoint: npm args: ['test'] options: logging: CLOUD_LOGGING_ONLY </code></pre> <p><strong>skaffold.yaml</strong></p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: skaffold/v2beta19 kind: Config build: tagPolicy: sha256: {} artifacts: - context: .
image: genesys-gencloud-dev deploy: kubectl: manifests: - kubernetes-manifests/** profiles: - name: cloudbuild build: googleCloudBuild: {} </code></pre> <p><strong>launch.json</strong></p> <pre class="lang-json prettyprint-override"><code>{ &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Kubernetes: Run/Debug - cloudbuild&quot;, &quot;type&quot;: &quot;cloudcode.kubernetes&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;skaffoldConfig&quot;: &quot;${workspaceFolder}\\skaffold.yaml&quot;, &quot;profile&quot;: &quot;cloudbuild&quot;, &quot;watch&quot;: true, &quot;cleanUp&quot;: false, &quot;portForward&quot;: true, &quot;internalConsoleOptions&quot;: &quot;neverOpen&quot;, &quot;imageRegistry&quot;: &quot;gcr.io/my-gcp-project&quot;, &quot;debug&quot;: [ { &quot;image&quot;: &quot;my-image-dev&quot;, &quot;containerName&quot;: &quot;my-container-dev&quot;, &quot;sourceFileMap&quot;: { &quot;${workspaceFolder}&quot;: &quot;/WORK_DIR&quot; } } ] } ] } </code></pre>
Dshiz
<p>You will need to edit your <code>skaffold.yaml</code> file to use Cloud Build:</p> <pre><code>build: googleCloudBuild: {} </code></pre> <p>See <a href="https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build" rel="nofollow noreferrer">https://skaffold.dev/docs/pipeline-stages/builders/#remotely-on-google-cloud-build</a> for more details.</p> <p><strong>EDIT</strong>: It looks like your skaffold.yaml enables cloud build for the <code>cloudbuild</code> profile, but the profile isn't active.</p> <p>Some options:</p> <ul> <li>Add <code>&quot;profile&quot;: &quot;cloudbuild&quot;</code> to your <code>launch.json</code> for 'Run on Kubernetes'. <a href="https://i.stack.imgur.com/atgOw.png" rel="nofollow noreferrer">Screenshot</a></li> <li>Move the <code>googleCloudBuild: {}</code> to the top-level <code>build:</code> section. (In other words, skip using the profile)</li> <li>Activate the profile using one of the other methods from <a href="https://skaffold.dev/docs/environment/profiles/#activation" rel="nofollow noreferrer">https://skaffold.dev/docs/environment/profiles/#activation</a></li> </ul> <p><strong>UPDATE</strong> (from asker)</p> <p>I needed to do the following:</p> <ol> <li>Update <code>skaffold.yaml</code> as follows. In particular, note the <code>image</code> field under build &gt; artifacts, and the <code>projectId</code> field under profiles &gt; build.</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: skaffold/v2beta19 kind: Config build: tagPolicy: sha256: {} artifacts: - context: . image: gcr.io/my-project-id/my-image deploy: kubectl: manifests: - kubernetes-manifests/** profiles: - name: cloudbuild build: googleCloudBuild: projectId: my-project-id </code></pre> <ol start="2"> <li>Run this command to activate the profile: <code>skaffold dev -p cloudbuild</code></li> </ol>
Chris Wilson
<p>After updating the config-map with kubectl, no one can access the cluster. I tried with several users and tried switching roles, but with no luck. I read a similar case <a href="https://stackoverflow.com/questions/59085639/mistakenly-updated-configmap-aws-auth-with-rbac-lost-access-to-the-cluster">here</a>, but:</p> <ul> <li>As one comment states, I get a token from another service, but that token doesn't have the permissions to update the config-map in the kube-system namespace</li> <li>I'm trying to log in as the root user, but I'm not able to find the cluster creator. I tried with some existing users, but it continues to give me access denied. How can I find the ARN of the creator user?</li> </ul> <p>I don't know if there are other solutions. Any help would be appreciated.</p> <hr /> <p><strong>UPDATE</strong></p> <p>After spending several days facing the issue without finding any solution, I resolved it by opening a support ticket with AWS. They told me the cluster's owner and fixed the config-map (take into consideration that the IT support will not resolve the issue immediately).</p>
stuzzo
<p>As stated in your update, CloudTrail will not have the cluster creator if the EKS cluster is &gt;90 days old.</p> <p>Preferably, you used an IAM role (instead of a user) to create the cluster. If you're using an IAM user to create the cluster, it's your responsibility to keep track of the IAM user's ARN in the event that access for all other roles is lost.</p> <p>To restore access to the cluster, you would need to recreate the cluster creator's IAM user, gain access to it, and restore the aws-auth configmap using that user's access keys.</p>
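For reference, restoring access then comes down to re-applying a valid aws-auth ConfigMap while authenticated as the recreated creator (the creator bypasses aws-auth entirely). A minimal sketch — the account ID, role name, and user name below are placeholders, not values from this cluster:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Node role mapping - required so worker nodes can (re)join the cluster
    - rolearn: arn:aws:iam::111122223333:role/my-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # An admin user mapped back in
    - userarn: arn:aws:iam::111122223333:user/my-admin
      username: my-admin
      groups:
        - system:masters
```

Applied with `kubectl apply -f aws-auth.yaml` using the creator's access keys, this re-grants `system:masters` to the mapped user.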
conmanworknor
<p>I have this</p> <pre><code>kubectl version WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version. Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;24&quot;, GitVersion:&quot;v1.24.2&quot;, GitCommit:&quot;f66044f4361b9f1f96f0053dd46cb7dce5e990a8&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-06-15T14:22:29Z&quot;, GoVersion:&quot;go1.18.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Kustomize Version: v4.5.4 Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;18&quot;, GitVersion:&quot;v1.18.2&quot;, GitCommit:&quot;52c56ce7a8272c798dbc29846288d7cd9fbae032&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-04-30T20:19:45Z&quot;, GoVersion:&quot;go1.13.9&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} WARNING: version difference between client (1.24) and server (1.18) exceeds the supported minor version skew of +/-1 </code></pre> <p>I upgraded my client version using official <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="nofollow noreferrer">install kubectl</a> docs.</p> <p>I installed kubeadm with snap</p> <pre><code>kubeadm version kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;24&quot;, GitVersion:&quot;v1.24.2&quot;, GitCommit:&quot;f66044f4361b9f1f96f0053dd46cb7dce5e990a8&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-06-17T22:34:44Z&quot;, GoVersion:&quot;go1.18.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>How to upgrade server?</p>
Richard Rublev
<p>As you asked how to upgrade K8s: this document contains a step-by-step process to <a href="https://www.golinuxcloud.com/kubernetes-upgrade-version/" rel="nofollow noreferrer">update K8s</a>.</p> <p>When your cluster is running version 1.18.6, you can upgrade to 1.18.p where p &gt;= 7 and to 1.19.x (whatever the value of x), but not to 1.20.x. If you plan to upgrade from 1.18.x to 1.20.x directly, the kubeadm upgrade plan command will fail; to overcome that you would first have to upgrade from 1.18.x to 1.19.x, and then from 1.19.x to 1.20.x. After this, you need to upgrade the Kubernetes cluster one minor version at a time, from <a href="https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="nofollow noreferrer">version 1.19.x to 1.20.x, then 1.21, 1.22, and 1.23</a>.</p>
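On a kubeadm cluster, each hop of that chain looks roughly like the following (shown for 1.18 → 1.19 on a Debian/Ubuntu control-plane node; the patch version and apt package pins are examples, not the only valid values, and worker nodes additionally need draining):

```shell
# Repeat once per minor version: 1.18 -> 1.19 -> 1.20 -> ...
apt-get update
apt-get install -y --allow-change-held-packages kubeadm=1.19.16-00

kubeadm upgrade plan            # confirms the target version is reachable
kubeadm upgrade apply v1.19.16

# Then upgrade kubelet/kubectl on each node and restart the kubelet
apt-get install -y --allow-change-held-packages kubelet=1.19.16-00 kubectl=1.19.16-00
systemctl daemon-reload && systemctl restart kubelet
```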
Hemanth Kumar
<p>I am replacing a Kubernetes secret and I want to make sure I am catching all places in the cluster which use it.</p> <p>Is there a way to tell without reading all deployment YAMLs using K8s or helm?</p> <p>We have multiple services deployed on the same cluster and sharing secrets. Some using Helm, some don't.</p>
Tomer Amir
<p>You can use secrets in several different ways; a secret is not always mounted as a volume. So the most convenient way is to check the secret's namespace for all objects that could use the secret in their specs.</p> <p>For a manual check, here are two commands: the first checks for references to a certain secret name among k8s objects, the second helps to find the object that contains the secret reference.</p> <pre><code>kubectl get deployments,statefulsets,daemonsets,cronjobs,jobs,pods -n namespace-name -o yaml | grep secret_name kubectl get deployments,statefulsets,daemonsets,cronjobs,jobs,pods -n namespace-name -o yaml | grep -i -e &quot;^ name:&quot; -e &quot;^ kind&quot; -e secret_name </code></pre> <p>Annotations can be filtered out with <code>grep -v -e annotation -e last-applied</code>, or probably even easier <code>grep -v &quot;\&quot;kind&quot;</code>.</p>
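A slightly more targeted sketch than plain grep, listing each pod together with the secret names it references. It covers the three common reference points — volume mounts, `env.valueFrom`, and `envFrom` — but not every possible field (e.g. `imagePullSecrets`):

```shell
# Prints "<pod-name> <volume secrets> <env secrets> <envFrom secrets>" per line
kubectl get pods -n namespace-name -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.volumes[*].secret.secretName}{" "}{.spec.containers[*].env[*].valueFrom.secretKeyRef.name}{" "}{.spec.containers[*].envFrom[*].secretRef.name}{"\n"}{end}' \
  | grep secret_name
```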
Jakub Siemaszko
<p>I am experimenting with running Jenkins on a Kubernetes cluster. I have achieved running Jenkins on the cluster using the Helm chart. However, I'm unable to run any test cases, since my code base requires Python and MongoDB.</p> <p>In my JenkinsFile, I have tried the following</p> <p>1.</p> <pre><code>withPythonEnv('python3.9') { pysh 'pip3 install pytest' } </code></pre> <ol start="2"> <li></li> </ol> <pre><code>stage('Test') { sh 'python --version' } </code></pre> <p>But it says <code>java.io.IOException: error=2, No such file or directory</code>. It is not feasible to always run the Python install command and have it hardcoded into the JenkinsFile. After some research I found out that I have to tell Kubernetes to install Python while the pod is being provisioned, but there seems to be no PreStart hook/lifecycle for the pod; there is only PostStart and PreStop.</p> <p>I'm not sure how to install Python and MongoDB and use that as a template for the kube pods.</p> <p>This is the default YAML file that I used for the Helm chart - <a href="https://raw.githubusercontent.com/jenkinsci/helm-charts/main/charts/jenkins/values.yaml" rel="nofollow noreferrer">jenkins-values.yaml</a>. Also, I'm not sure if I need to use Helm.</p>
Vaibhav
<p>You should create a new container image with the packages installed. In this case, the Dockerfile could look something like this (the base image runs as the <code>jenkins</code> user, so switch to root for the install and back afterwards):</p> <pre><code>FROM jenkins/jenkins USER root RUN apt-get update &amp;&amp; apt-get install -y python3 python3-pip USER jenkins </code></pre> <p>Then build the container, push it to a container registry, and replace the &quot;Image: jenkins/jenkins&quot; in your helm chart with the name of the container image you built plus the container registry you uploaded it to. With this, your applications are installed on your container every time it runs.</p> <p>The second way, which works but isn't perfect, is to run environment commands, with something like what is described here: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/</a>. The issue with this method is that some deployments already use the startup commands, and by redefining the entrypoint, you can stop the starting command of the container from ever running, thus causing the container to fail. (This should work if added to the helm chart in the deployment section, as they should share roughly the same format.)</p> <p>Otherwise, there's a really improper way of installing programs in a running pod - use <code>kubectl exec -it deployment.apps/jenkins -- bash</code>, then run your installation commands in the pod itself.</p> <p>That being said, it's a poor idea to do this, because if the pod restarts, it will revert back to the original image without the required applications installed. If you build a new container image, your apps will remain installed each time the pod restarts. This should basically never be used, unless it is a temporary pod as a testing environment.</p>
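The build-and-push step described above, sketched as commands. The registry name is a placeholder, and the exact Helm value that selects the controller image varies between versions of the Jenkins chart — check `helm show values jenkins/jenkins` rather than trusting the keys used here:

```shell
docker build -t myregistry.example.com/jenkins-python:1.0 .
docker push myregistry.example.com/jenkins-python:1.0

# Point the chart at the custom image on the next upgrade
helm upgrade jenkins jenkins/jenkins \
  --set controller.image=myregistry.example.com/jenkins-python \
  --set controller.tag=1.0
```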
Paradoc
<p>We are using Spring Boot in Kubernetes in our project. Recently we decided to add graceful shutdown for our application in order to ensure seamless deployments and make sure no requests fail during pod terminations for whatever reason. Even though Spring Boot provides graceful shutdown, it seems there is still a probability that requests may fail due to the fact that Kubernetes starts removing a pod from endpoints and sends the SIGTERM signal to the pod at the same time in parallel. Quote from the Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>At the same time as the kubelet is starting graceful shutdown, the control plane removes that shutting-down Pod from Endpoints (and, if enabled, EndpointSlice) objects where these represent a Service with a configured selector.</p> </blockquote> <p>This is also described in more detail <a href="https://youtu.be/wBtUIkMgzU8?t=3063" rel="nofollow noreferrer">here</a>. The solution is provided there as well, which is</p> <blockquote> <p>you can add a sleep in a preStop hook of the pod spec and, of course, <em><strong>configure that sleep to be whatever makes sense for your use case</strong></em> :</p> </blockquote> <p><a href="https://i.stack.imgur.com/IjAcq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IjAcq.png" alt="enter image description here" /></a></p> <p>In the example provided this sleep is configured to be 10 seconds, but I'm wondering what is the reasonable value for that, so the pod's termination is not unnecessarily delayed?</p> <p>Thanks.</p>
ZeeG
<p>It seems like this might be up to your preferences, but it looks like a 5-10 second sleep is the recommended range:</p> <blockquote> <p>In “Kubernetes in Action”, Lukša recommends 5–10 seconds <a href="https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304" rel="nofollow noreferrer">https://blog.gruntwork.io/delaying-shutdown-to-wait-for-pod-deletion-propagation-445f779a8304</a></p> </blockquote>
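Putting the numbers together, the pod spec would look something like the sketch below (10s sleep as in the linked example; the image name is a placeholder). Note that `terminationGracePeriodSeconds` — 30s by default — has to cover the sleep plus Spring Boot's own graceful-shutdown window:

```yaml
spec:
  terminationGracePeriodSeconds: 40   # sleep + app shutdown budget
  containers:
    - name: app
      image: my-registry/my-spring-boot-app:1.0   # placeholder
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 10"]
```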
Jakub Siemaszko
<p>I am trying to create an AKS cluster in azure using the <code>terraform</code>. My requirements are as follows:</p> <ol> <li><p>Create a site-to-site VPN connection where the gateway in the subnet of range <code>172.30.0.0/16</code> - <strong>This is done</strong></p> </li> <li><p>Install Azure AKS cluster with Azure CNI and pod's should be in the range of VPN CIDR (<code>172.30.0.0/16</code>).</p> </li> </ol> <p>Here's my terraform code. I read that if you use <code>azure</code> as your <code>network_policy</code> and <code>network_plugin</code>, you can't set the <code>pod_cidr</code> - <a href="https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#network_plugin" rel="nofollow noreferrer">source</a></p> <p><strong>Then how can I do this so my PODs can reach the on-premise network through the site-to-site vpn?</strong></p> <pre><code> resource &quot;azurerm_kubernetes_cluster&quot; &quot;k8s_cluster&quot; { lifecycle { ignore_changes = [ default_node_pool[0].node_count ] prevent_destroy = false } name = var.cluster_name location = var.location resource_group_name = var.rg_name dns_prefix = var.dns_prefix kubernetes_version = var.kubernetes_version # node_resource_group = var.resource_group_name default_node_pool { name = var.default_node_pool.name node_count = var.default_node_pool.node_count max_count = var.default_node_pool.max_count min_count = var.default_node_pool.min_count vm_size = var.default_node_pool.vm_size os_disk_size_gb = var.default_node_pool.os_disk_size_gb # vnet_subnet_id = var.vnet_subnet_id max_pods = var.default_node_pool.max_pods type = var.default_node_pool.agent_pool_type enable_node_public_ip = var.default_node_pool.enable_node_public_ip enable_auto_scaling = var.default_node_pool.enable_auto_scaling tags = merge(var.common_tags) } identity { type = var.identity } network_profile { network_plugin = var.network_plugin #azure network_policy = var.network_policy #&quot;azure&quot; 
load_balancer_sku = var.load_balancer_sku #&quot;standard&quot; # pod_cidr = var.pod_cidr | When network_plugin is set to azure - the vnet_subnet_id field in the default_node_pool block must be set and pod_cidr must not be set. } tags = merge(var.common_tags) } </code></pre> <pre><code># AKS cluster related variables cluster_name = &quot;test-cluster&quot; dns_prefix = &quot;testjana&quot; kubernetes_version = &quot;1.22.15&quot; default_node_pool = { name = &quot;masternp&quot; # for system pods node_count = 1 vm_size = &quot;standard_e4bds_v5&quot; # 4 vcpu and 32 Gb of memory enable_auto_scaling = false enable_node_public_ip = false min_count = null max_count = null max_pods = 100 os_disk_size_gb = 80 agent_pool_type = &quot;VirtualMachineScaleSets&quot; } admin_username = &quot;jananathadmin&quot; ssh_public_key = &quot;public_key&quot; identity = &quot;SystemAssigned&quot; network_plugin = &quot;azure&quot; network_policy = &quot;azure&quot; load_balancer_sku = &quot;standard&quot; </code></pre>
Jananath Banuka
<p>By default, all pods in AKS can communicate with each other. When we want to restrict the traffic, network policies can be used to allow or deny traffic between pods.</p> <p>Here is the tutorial <a href="https://learn.microsoft.com/en-us/azure/vpn-gateway/tutorial-site-to-site-portal" rel="nofollow noreferrer">link</a>.</p> <p>I reproduced the same via Terraform, using the below code snippet to connect a cluster with Azure CNI and a vnet gateway which links our on-prem environment to Azure via a site-to-site VPN.</p> <p><strong>Step 1:</strong> <strong>The main tf file is as follows</strong></p> <pre><code>resource &quot;azurerm_resource_group&quot; &quot;example&quot; { name = &quot;*****-****&quot; location = &quot;East US&quot; } resource &quot;azurerm_role_assignment&quot; &quot;role_acrpull&quot; { scope = azurerm_container_registry.acr.id role_definition_name = &quot;AcrPull&quot; principal_id = azurerm_kubernetes_cluster.demo.kubelet_identity.0.object_id } resource &quot;azurerm_container_registry&quot; &quot;acr&quot; { name = &quot;acrswarna&quot; resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location sku = &quot;Standard&quot; admin_enabled = false } resource &quot;azurerm_virtual_network&quot; &quot;puvnet&quot; { name = &quot;Publics_VNET&quot; resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location address_space = [&quot;10.19.0.0/16&quot;] } resource &quot;azurerm_subnet&quot; &quot;example&quot; { name = &quot;GatewaySubnet&quot; resource_group_name = azurerm_resource_group.example.name virtual_network_name = azurerm_virtual_network.puvnet.name address_prefixes = [&quot;10.19.3.0/24&quot;] } resource &quot;azurerm_subnet&quot; &quot;osubnet&quot; { name = &quot;Outer_Subnet&quot; resource_group_name = azurerm_resource_group.example.name address_prefixes = [&quot;10.19.1.0/24&quot;] virtual_network_name = azurerm_virtual_network.puvnet.name } resource
&quot;azurerm_kubernetes_cluster&quot; &quot;demo&quot; { name = &quot;demo-aksnew&quot; location = azurerm_resource_group.example.location resource_group_name = azurerm_resource_group.example.name dns_prefix = &quot;demo-aks&quot; default_node_pool { name = &quot;default&quot; node_count = 2 vm_size = &quot;standard_e4bds_v5&quot; type = &quot;VirtualMachineScaleSets&quot; enable_auto_scaling = false min_count = null max_count = null max_pods = 100 //vnet_subnet_id = azurerm_subnet.osubnet.id } identity { type = &quot;SystemAssigned&quot; } network_profile { network_plugin = &quot;azure&quot; load_balancer_sku = &quot;standard&quot; network_policy = &quot;azure&quot; } tags = { Environment = &quot;Development&quot; } } resource &quot;azurerm_public_ip&quot; &quot;example&quot; { name = &quot;pips-firewall&quot; resource_group_name = azurerm_resource_group.example.name location = azurerm_resource_group.example.location allocation_method = &quot;Static&quot; sku = &quot;Standard&quot; } resource &quot;azurerm_virtual_network_gateway&quot; &quot;example&quot; { name = &quot;test&quot; location = azurerm_resource_group.example.location resource_group_name = azurerm_resource_group.example.name type = &quot;Vpn&quot; vpn_type = &quot;RouteBased&quot; active_active = false enable_bgp = false sku = &quot;VpnGw1&quot; ip_configuration { name = &quot;vnetGatewayConfig&quot; public_ip_address_id = azurerm_public_ip.example.id private_ip_address_allocation = &quot;Dynamic&quot; subnet_id = azurerm_subnet.example.id } vpn_client_configuration { address_space = [&quot;172.30.0.0/16&quot;] root_certificate { name = &quot;******-****-ID-Root-CA&quot; public_cert_data = &lt;&lt;EOF **Use certificate here** EOF } revoked_certificate { name = &quot;*****-Global-Root-CA&quot; thumbprint = &quot;****************&quot; } } } </code></pre> <p>NOTE: Update the root certificate configuration yourself in the above code.</p> <p><strong>The provider tf file is as follows</strong></p> <pre><code>terraform {
required_providers { azurerm = { source = &quot;hashicorp/azurerm&quot; version = &quot;=3.0.0&quot; } } } provider &quot;azurerm&quot; { features {} skip_provider_registration = true } </code></pre> <p>Upon running:</p> <pre><code> terraform plan terraform apply -auto-approve </code></pre> <p><img src="https://i.stack.imgur.com/2fqKs.png" alt="enter image description here" /></p> <p><img src="https://i.stack.imgur.com/42Cf1.png" alt="enter image description here" /></p> <p><img src="https://i.stack.imgur.com/tDSEE.png" alt="enter image description here" /></p> <p>Vnet and SubNet configurations <img src="https://i.stack.imgur.com/MIwEg.png" alt="enter image description here" /></p> <p>Virtual Network Gateway configuration as follows. <img src="https://i.stack.imgur.com/4KgP7.png" alt="enter image description here" /></p> <p>Deployed sample pods on cluster <img src="https://i.stack.imgur.com/gJSjP.png" alt="enter image description here" /></p>
Swarna Anipindi
<p>when i use docker as CRI:</p> <pre><code>{&quot;log&quot;:&quot;I0421 14:23:18.944348 1 node.go:172] Successfully retrieved node IP: 192.168.49.2\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:18.944635198Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:18.944724 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:18.944838628Z&quot;} {&quot;log&quot;:&quot;W0421 14:23:19.008388 1 server_others.go:578] Unknown proxy mode \&quot;\&quot;, assuming iptables proxy\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008544314Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.008581 1 server_others.go:185] Using iptables Proxier.\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008653777Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.008904 1 server.go:650] Version: v1.20.0\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.008963124Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009762 1 config.go:315] Starting service config controller\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.009986673Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009867 1 shared_informer.go:240] Waiting for caches to sync for service config\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.009999075Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009973 1 config.go:224] Starting endpoint slice config controller\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.010041688Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.009979 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config\n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.01004961Z&quot;} 
{&quot;log&quot;:&quot;I0421 14:23:19.110110 1 shared_informer.go:247] Caches are synced for endpoint slice config \n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.110273306Z&quot;} {&quot;log&quot;:&quot;I0421 14:23:19.110120 1 shared_informer.go:247] Caches are synced for service config \n&quot;,&quot;stream&quot;:&quot;stderr&quot;,&quot;time&quot;:&quot;2023-04-21T14:23:19.110525274Z&quot;} </code></pre> <p>And when I use containerd as the CRI:</p> <pre><code>2022-07-19T11:13:32.228206987+08:00 stderr F I0719 11:13:32.228176 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288 2022-07-19T11:13:32.228229469+08:00 stderr F I0719 11:13:32.228203 1 conntrack.go:52] Setting nf_conntrack_max to 524288 2022-07-19T11:13:32.228460095+08:00 stderr F I0719 11:13:32.228431 1 conntrack.go:83] Setting conntrack hashsize to 131072 2022-07-19T11:13:32.237601129+08:00 stderr F I0719 11:13:32.237568 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 2022-07-19T11:13:32.237638594+08:00 stderr F I0719 11:13:32.237613 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 300 2022-07-19T11:13:32.237787689+08:00 stderr F I0719 11:13:32.237756 1 config.go:315] Starting service config controller </code></pre> <p>Anybody know why the stream in the log file is always stderr?</p> <p>I expect that info messages are written to stdout and error messages are written to stderr.</p>
magiceses
<p>STDOUT is usually a command’s normal output, and STDERR is typically used to output error messages. By default, docker/container logs show the command’s STDOUT and STDERR. The information that is logged and the format of the log depend almost entirely on the container’s endpoint command. <em><strong>When you use Docker/containerd as the CRI, the logs are written in JSON with the stream shown as STDERR by default, and this needs to be parsed before moving to the logging agent.</strong></em></p> <p>Refer to this <a href="https://github.com/fluent/fluent-bit/issues/522" rel="nofollow noreferrer">GitHub issue</a> by stevenAcoffman for more information; there they used Fluent Bit as the log parser.</p> <p>You can also use <a href="https://sematext.com/docs/logagent/installation-docker/#kubernetes-with-containerd-and-ibm-cloud" rel="nofollow noreferrer">sematext</a> as a log agent, which supports the CRI-O log format as well, and install its <a href="https://sematext.com/docs/logagent/input-filter-containerd/" rel="nofollow noreferrer">plugin</a> to parse the logs.</p> <blockquote> <p>By default, Docker captures the standard output (and standard error) of all your containers, and writes them in files using the JSON format. The JSON format annotates each line with its origin (stdout or stderr) and its timestamp. Each log file contains information about only one container. <code>{&quot;log&quot;:&quot;Log line is here\n&quot;,&quot;stream&quot;:&quot;stdout&quot;,&quot;time&quot;:&quot;2019-01-01T11:11:11.111111111Z&quot;}</code></p> </blockquote> <p>Along with the above, you need to install the <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer">JSON logging file driver</a>.</p>
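The difference between the two formats can be seen by pulling the stream field out of sample lines from the question — a small sketch using only standard shell tools:

```shell
# A Docker (json-file) log line: the stream is a field inside the JSON payload.
docker_line='{"log":"I0421 14:23:18.944348 1 node.go:172] Successfully retrieved node IP\n","stream":"stderr","time":"2023-04-21T14:23:18.944635198Z"}'
# A containerd (CRI) log line: the stream is the second space-separated field.
cri_line='2022-07-19T11:13:32.228206987+08:00 stderr F I0719 11:13:32.228176 1 conntrack.go:100] Set sysctl'

docker_stream=$(printf '%s' "$docker_line" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p')
cri_stream=$(printf '%s' "$cri_line" | awk '{print $2}')

echo "docker=$docker_stream containerd=$cri_stream"   # docker=stderr containerd=stderr
```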
Hemanth Kumar
<p>I heard that Dockershim has been removed from Kubernetes 1.24. I have not used minikube for a while as I was using kind and k3s. But right now I see Minikube 1.30.1 says starting k8s 1.26 on kvm VM using docker 20.10.23. How is this possible?</p>
Alex K
<p>As per the <a href="https://kubernetes.io/blog/2022/02/17/dockershim-faq/" rel="nofollow noreferrer">official doc</a>, dockershim was deprecated, and you can use a CRI (Container Runtime Interface) implementation as the interface to the container runtime instead of dockershim. CRI-compatible runtimes like containerd and CRI-O are already being used and supported, hence they have become the recommended way to run containers with Kubernetes. In minikube's case, Docker Engine is wired up through cri-dockerd, the externally maintained CRI adapter for Docker, which is why minikube can still run Kubernetes 1.26 on top of Docker 20.10.</p> <p>As mentioned in the official docs, you can also use a tool that detects dockershim usage:</p> <blockquote> <pre><code>Is there any tooling that can help me find dockershim in use? Yes! The Detector for Docker Socket (DDS) is a kubectl plugin that you can install and then use to check your cluster. DDS can detect if active Kubernetes workloads are mounting the Docker Engine socket (docker.sock) as a volume. </code></pre> </blockquote> <p>Link for: <a href="https://github.com/aws-containers/kubectl-detector-for-docker-socket" rel="nofollow noreferrer">Detector for Docker Socket (DDS)</a></p>
Hemanth Kumar
<p>I have the following <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">Kubernets CronJob</a> definition:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: myCronJob spec: schedule: &quot;*/1 * * * *&quot; failedJobsHistoryLimit: 1 successfulJobsHistoryLimit: 1 jobTemplate: spec: backoffLimit: 2 parallelism: 1 template: metadata: annotations: linkerd.io/inject: disabled spec: containers: - name: myCronJob image: curlimages/curl:7.72.0 env: - name: CURRENT_TIME value: $(date +&quot;%Y-%m-%dT%H:%M:%S.%sZ&quot;) args: - /bin/sh - -ec - &quot;curl --request POST --header 'Content-type: application/json' -d '{\&quot;Label\&quot;:\&quot;myServiceBusLabel\&quot;,\&quot;Data\&quot;:\&quot;{\'TimeStamp\':\'$(echo $CURRENT_TIME)\'}\&quot;,\&quot;QueueName\&quot;:\&quot;myServiceBusQueue\&quot;}' https://mycronjobproxy.net/api/HttpTrigger?code=mySecretCode&quot; restartPolicy: OnFailure </code></pre> <p>Notice I pass a dynamic date to <code>curl</code> via an environment variable, as described <a href="https://stackoverflow.com/a/58192064/195964">here</a>.</p> <p>However, this produces an error at runtime (copied from K9s):</p> <pre><code>curl: (3) unmatched close brace/bracket in URL position 26: +&quot;%Y-%m-%dT%H:%M:%S.%sZ&quot;)}&quot;,&quot;QueueName&quot;:&quot;myServiceBusQueue&quot;} ^ </code></pre> <p>I suspect this is likely an issue with combining double- and single quotes and escape characters. The <code>curl</code> command runs fine locally on macOS, but not when deployed using the <code>curlimages/curl:7.72.0</code>. 
There seems to be some difference in behavior.</p> <p>On macOS, on my local dev machine, I can run the command like so:</p> <pre><code>curl --request POST --header &quot;Content-type: application/json&quot; -d &quot;{'Label':'myServiceBusLabel','Data':{'TimeStamp':'$(echo $CURRENT_TIME)'},'QueueName':'myServiceBusQueue'}&quot; https://mycronjobproxy.net/api/HttpTrigger?code=mySecretCode </code></pre> <p>Output:</p> <pre><code>Message was successfully sent using the following params: Label = myServiceBusLabel | Data = {&quot;TimeStamp&quot;: &quot;2023-05-15T15:44:45.1684158285Z&quot;} | QueueName = myServiceBusQueue% </code></pre> <p>But when I use that version in my K8s CronJob YAML file, my IDE (JetBrains Rider) says: &quot;Scalar value expected.&quot; It seems like the whole command must be enclosed in double quotes.</p> <p>What is the correct way to quote/escape this <code>curl</code> command?</p>
leifericf
<p>I have made the necessary changes to your <code>curl</code> command below; try it once and let me know:</p> <pre><code>- &quot;curl --request POST --header 'Content-type: application/json' -d \&quot;{\&quot;Label\&quot;:\&quot;myServiceBusLabel\&quot;,\&quot;Data\&quot;:\&quot;{\'TimeStamp\':\'$(echo $CURRENT_TIME)\'}\&quot;,\&quot;QueueName\&quot;:\&quot;myServiceBusQueue\&quot;}\&quot; https://mycronjobproxy.net/api/HttpTrigger?code=mySecretCode&quot; </code></pre> <p>Note that the quotes must be plain ASCII quotes, not the curly ones a word processor inserts. As checked in a YAML lint it parses fine; the screenshot is attached. Please check and try to replicate it.</p> <p><a href="https://i.stack.imgur.com/MGad6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MGad6.png" alt="working Yaml" /></a></p>
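<p>A less fragile pattern (a sketch, not from the original answer; the URL and field names come from the question, and the timestamp format is simplified) is to assemble the JSON payload into a shell variable first, so only one level of quoting is needed before <code>curl</code> sees it:</p>

```shell
# Build the timestamp, then the payload; the whole JSON string is
# double-quoted once, with the inner quotes escaped as \".
CURRENT_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
PAYLOAD="{\"Label\":\"myServiceBusLabel\",\"Data\":{\"TimeStamp\":\"$CURRENT_TIME\"},\"QueueName\":\"myServiceBusQueue\"}"
echo "$PAYLOAD"
# curl --request POST --header 'Content-type: application/json' \
#      -d "$PAYLOAD" 'https://mycronjobproxy.net/api/HttpTrigger?code=mySecretCode'
```

<p>In the CronJob YAML the same lines can be passed to <code>/bin/sh -ec</code> as a single literal block scalar (<code>|</code>), which sidesteps the nested-escaping problem entirely.</p>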
Hemanth Kumar
<p>I am trying to adapt the quickstart guide for Mongo Atlas Operator here <a href="https://www.mongodb.com/blog/post/introducing-atlas-operator-kubernetes" rel="nofollow noreferrer">Atlas Operator Quickstart</a> to use secure env variables set in TravisCI.</p> <p>I want to put the quickstart scripts into my deploy.sh, which is triggered from my travis.yaml file.</p> <p>My travis.yaml already sets one global variable like this:</p> <pre><code>env: global: - SHA=$(git rev-parse HEAD) </code></pre> <p>Which is consumed by the deploy.sh file like this:</p> <pre><code>docker build -t mydocker/k8s-client:latest -t mydocker/k8s-client:$SHA -f ./client/Dockerfile ./client </code></pre> <p>but I'm not sure how to pass vars set in the Environment variables bit in the travis Settings to deploy.sh</p> <p><a href="https://i.stack.imgur.com/C3Pyi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C3Pyi.png" alt="env vars" /></a></p> <p>This is the section of script I want to pass variables to:</p> <pre><code> kubectl create secret generic mongodb-atlas-operator-api-key \ --from-literal=&quot;orgId=$MY_ORG_ID&quot; \ --from-literal=&quot;publicApiKey=$MY_PUBLIC_API_KEY&quot; \ --from-literal=&quot;privateApiKey=$MY_PRIVATE_API_KEY&quot; \ -n mongodb-atlas-system </code></pre> <p>I'm assuming the --from-literal syntax will just put in the literal string &quot;orgId=$MY_ORG_ID&quot; for example, and I need to use pipe syntax - but can I do something along the lines of this?:</p> <pre><code>echo &quot;$MY_ORG_ID&quot; | kubectl create secret generic mongodb-atlas-operator-api-key --orgId-stdin </code></pre> <p>Or do I need to put something in my travis.yaml before_install script?</p>
Davtho1983
<p>Looks like the <code>echo</code> approach is fine, I've found a similar use-case to yours, have a look <a href="https://drew-buckman.medium.com/use-travis-ci-to-automate-the-deployment-of-a-python-app-to-ibm-code-engine-4b534b503055" rel="nofollow noreferrer">here</a>.</p>
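<p>For what it's worth, variables defined in the Travis repository settings are exported into the build environment, so <code>deploy.sh</code> can reference them directly; double quotes make the shell expand the value before <code>kubectl</code> ever sees it (a minimal sketch, with a demo value standing in for the real setting):</p>

```shell
# In Travis, MY_ORG_ID is already exported by the CI environment;
# it is set here only to make the expansion visible.
MY_ORG_ID="demo-org"
ARG="orgId=$MY_ORG_ID"
echo "$ARG"   # the expanded string is what kubectl receives
# kubectl create secret generic mongodb-atlas-operator-api-key \
#   --from-literal="$ARG" ... -n mongodb-atlas-system
```

<p>So <code>--from-literal=&quot;orgId=$MY_ORG_ID&quot;</code> does not store the literal dollar string; the shell substitutes the value first, and no <code>echo | kubectl</code> pipe is needed.</p>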
Jakub Siemaszko
<p>I am new to Docker and Kubernetes, and trying to deploy my ASP.Net Core 6.0 web application on Kubernetes with Docker image. I can see the service running with <code>type: NodePort</code> as in the last line of the screenshot <a href="https://i.stack.imgur.com/PsgTM.png" rel="nofollow noreferrer">1</a>, but I cannot access this port on my browser at all.</p> <p><a href="https://i.stack.imgur.com/PsgTM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PsgTM.png" alt="enter image description here" /></a></p> <p>I can also see the Docker container created by Kubernetes Pod running on Docker Desktop Windows application as in screenshot <a href="https://i.stack.imgur.com/cje91.png" rel="nofollow noreferrer">2</a>, but I don't know how to access my deployed application from the browser. Any suggestion or solution would be appreciated.</p> <p><a href="https://i.stack.imgur.com/cje91.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cje91.png" alt="enter image description here" /></a></p>
edwp
<p>It seems you need to expose the service so that it allows external traffic. To expose the deployment, use: <code>kubectl expose deployment &lt;deployment&gt; --type=&quot;LoadBalancer&quot; --port=8080</code>. This will create an external IP.</p> <p>Check the created external IP with the <code>kubectl get services</code> command. If it is not visible yet, wait a few minutes for the service to be exposed and check again; the external IP will then appear.</p> <p>Now access the service using <code>http://&lt;EXTERNAL_IP&gt;:8080</code> in the browser.</p> <p>For more information, refer to this <a href="https://codelabs.developers.google.com/codelabs/cloud-kubernetes-aspnetcore#0" rel="nofollow noreferrer">lab</a> on how to deploy an ASP.NET Core app on Kubernetes.</p>
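<p>Alternatively, since this is Docker Desktop, a NodePort service can typically be reached at <code>localhost:&lt;nodePort&gt;</code> once the port is pinned explicitly (a sketch; the names and port number are assumptions, not from the question):</p>

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web           # must match the pod labels of your deployment
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080  # then browse http://localhost:30080 on Docker Desktop
```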
Hemanth Kumar
<p>I have a mailhog pod and a mailer service running in Kubernetes. I am on an M1 machine, mailhog/mailhog was not running properly, so I changed it to jcalonso/mailhog, could this be a problem? The mailer service is unable to dial a TCP connection to the mailhog pod. The pod itself is running, and I am able to view the HTTP site with an ingress to port 8025. Here is my mailer service's yaml file:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mail-service spec: replicas: 1 selector: matchLabels: app: mail-service template: metadata: labels: app: mail-service spec: containers: - name: mail-service image: &quot;papaya147/mail-service:latest&quot; resources: requests: memory: &quot;64Mi&quot; cpu: &quot;250m&quot; limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; env: - name: MAIL_DOMAIN value: &quot;&quot; - name: MAIL_HOST value: &quot;localhost&quot; - name: MAIL_PORT value: &quot;1025&quot; - name: MAIL_ENCRYPTION value: &quot;none&quot; - name: MAIL_USERNAME value: &quot;&quot; - name: MAIL_PASSWORD value: &quot;&quot; - name: FROM_NAME value: &quot;John Smith&quot; - name: FROM_ADDRESS value: &quot;[email protected]&quot; ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: mail-service spec: selector: app: mail-service ports: - protocol: TCP name: mail-port port: 80 targetPort: 80 </code></pre> <p>And here is the mailhog deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: mailhog spec: selector: matchLabels: app: mailhog template: metadata: labels: app: mailhog spec: containers: - name: mailhog image: jcalonso/mailhog resources: requests: memory: &quot;64Mi&quot; cpu: &quot;250m&quot; limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 1025 - containerPort: 8025 --- apiVersion: v1 kind: Service metadata: name: mailhog spec: selector: app: mailhog ports: - protocol: TCP name: mailhog-smtp-port port: 1025 targetPort: 1025 - protocol: TCP name: 
mailhog-web-port port: 8025 targetPort: 8025 </code></pre> <p>While testing with Docker Swarm, these environment variables worked. Below is a screenshot of the pods logs: <a href="https://i.stack.imgur.com/Wnu7S.png" rel="nofollow noreferrer">image link.</a></p>
Papaya
<p>You are using <code>localhost</code> to access the <code>mailhog</code> service from your <code>mail-service</code> pod. You should be using a fully qualified domain name (FQDN) of the <code>mailhog</code> service in your mail-server's <code>MAIL_HOST</code> env variable.</p> <p>The FQDN of any service is by default in the format: <code>&lt;service-name&gt;.&lt;namespace&gt;.svc.cluster.local</code>. In this case, the FQDN of the mailhog service would be <code>mailhog.default.svc.cluster.local</code>, or simply <code>mailhog</code> since both the pods are in the same namespace (default).</p> <p>Ref: <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-aaaa-records</a></p>
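<p>Concretely, the fix is a change to the <code>MAIL_HOST</code> env var in the <code>mail-service</code> deployment from the question:</p>

```yaml
env:
  - name: MAIL_HOST
    value: "mailhog"   # or the full FQDN: mailhog.default.svc.cluster.local
  - name: MAIL_PORT
    value: "1025"      # the SMTP port exposed by the mailhog service
```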
Vamsi Kiran
<h1>Background</h1> <p>I'm installing Elastic Search into my cluster with a helm chart with the following command:</p> <pre class="lang-bash prettyprint-override"><code>helm -n elasticsearch upgrade --install -f values_elasticsearch.yaml elasticsearch elastic/elasticsearch </code></pre> <p>This allows me to override the values which is nice, but I'd also like to add an <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">Istio virtual service</a>, which I believe would require I add a template to the helm chart.</p> <h2>Options</h2> <p>I've considered the following three options, but I'm not sure what is best practice.</p> <ol> <li>Download the Elastic-maintained helm chart and add a template for the additional yamls I need. This could create an issue when they upgrade their chart: I'm going to have to keep merging in their changes.</li> <li>Add the template to a separate helm chart. I don't love this solution because I like to have one helm chart for a single namespace and application.</li> <li>Create a helm subchart. I don't know much about these or if this is the right scenario to use them in.</li> </ol> <p>So I'm wondering if there is a better way to do this or which of my options is best.</p>
MikeSchem
<p>A Helm subchart is the ideal way:</p> <ul> <li>Create a wrapper helm chart and add elasticsearch as a subchart (a dependency)</li> <li>Add the required Istio yaml files inside your chart's template folder</li> <li>First run <code>helm dep update</code>, then run the install/upgrade command</li> </ul>
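<p>A minimal sketch of the wrapper chart's <code>Chart.yaml</code> (the chart name, version, and pinned upstream version here are assumptions to illustrate the layout; the repository URL is Elastic's public chart repo):</p>

```yaml
apiVersion: v2
name: elasticsearch-stack
version: 0.1.0
dependencies:
  - name: elasticsearch
    version: "8.5.1"                      # pin the upstream chart version
    repository: "https://helm.elastic.co"
```

<p>Your existing overrides from <code>values_elasticsearch.yaml</code> then move under an <code>elasticsearch:</code> key in the wrapper chart's <code>values.yaml</code>, and the Istio <code>VirtualService</code> template lives in the wrapper's <code>templates/</code> folder, untouched by upstream upgrades.</p>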
Phani Kumar
<p>Crontab to run a job every minute from 11pm to 7:30am</p> <p>I have this so far which is every minute from 11pm to 7:00am the problem is the half hour.</p> <pre><code>* 23,0-7 * * * </code></pre> <p>You can play around with it here <a href="https://crontab.guru/#*_23,0-7_*_*_*" rel="nofollow noreferrer">crontab_guru</a> Any ideas?</p>
Dunski
<p>@Dunski: I have checked many variants; a single expression such as <code>*,0-30 23,0-7 * * *</code> can only stop at 07:59, not at 07:30 am.</p> <p>As @jordanm suggested, the only way is to run the job from two entries:</p> <p>11 pm to 6:59 am: <code>* 23,0-6 * * *</code> (&quot;At every minute past hour 23 and every hour from 0 through 6.&quot;) and then</p> <p>7 am to 7:30 am: <code>0-30 7 * * *</code> (&quot;At every minute from 0 through 30 past hour 7.&quot;).</p>
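<p>Both entries can live in the same crontab and call the same command; hour 7 is left out of the first entry so the runs never overlap (the script path is a placeholder):</p>

```
# every minute from 23:00 to 06:59
* 23,0-6 * * * /path/to/job.sh
# every minute from 07:00 to 07:30
0-30 7 * * *   /path/to/job.sh
```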
Hemanth Kumar
<p>I have my application deployed in OpenShift. For file transfer we're using SFTP and have configured the SFTP private key via a secret, but on making the API call via Swagger I get the response &quot;invalid private key&quot;. Any help on how I can include this private key, which spans multiple lines, in the secret YAML file?</p> <p>Below is the error I'm getting:</p> <pre><code>------stack trace------- java.lang.IllegalStateException: failed to create SFTP Session at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:404) Caused by: com.jcraft.jsch.JSchException: invalid privatekey: [B@50ae9b59 at com.jcraft.jsch.KeyPair.load(KeyPair.java:747) 2022-10-19 13:33:43,123 - [threadPoolTaskExecutor-2] ERROR - transactionId: - Encountered an error executing step Download 0145A files in job Download Job java.util.concurrent.CompletionException: org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is java.lang.IllegalStateException: failed to create SFTP Session at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source) Caused by: org.springframework.messaging.MessagingException: Failed to execute on session; nested exception is java.lang.IllegalStateException: failed to create SFTP Session at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:461) Caused by: java.lang.IllegalStateException: failed to create SFTP Session at org.springframework.integration.sftp.session.DefaultSftpSessionFactory.getSession(DefaultSftpSessionFactory.java:404) Caused by: com.jcraft.jsch.JSchException: invalid privatekey: [B@7204aa68 </code></pre> <p>Below is the secret file that I used:</p> <pre><code>secret-test.yaml apiVersion: xx kind: Secret metadata: name: xxxxx namespace: xxxxxxxx type: Opaque stringData: key_name: &gt; PuTTY-User-Key-File-2: ssh-rsa\r\ Encryption: none\r\ Comment: rsa-key-20210504\r\ Public-Lines: 12\r\ 
AAAAB3NzaC1yc2EAAAABJQAAAgEAhi7HxCYBA3gvK0UbFenUlQTGUsDfvCXbEg/Y\r\ As3jvPl6hIjHp2xAOyOQ5P6A8zx9prjk06Q5q44lKzZXgGzJS8ZxpsMWsPA/+x1M\r\ . . . 4s5A+20CflMMEwK/G6Kny7ZduVRDmULzbUjaTPyw8rHYI9Do/YIIskDlwbdy3alg\r\ 3/PYjrPEUq62yXZEvt7XOcSesrrVLLDMsOK3LJvQqZCrVFnRgTSoxDhGFNwb8De8\r\ jbdW1j/G+vPegA7yjI7r2QZx7gI23CX0XZkXud3LzhZn02RmdboxErrRMKrp/cgX\r\ zdWd2DM=\r\ Private-Lines: 28\r\ AAACACCjmGAk631ibFaiG1hbeOX6PhQhE9PR21droz7zz5yrYv2kuvFfhT7RTMIU\r\ ..... EwlRTPzhe070NNze7yNMp4zsTAG2I98PEXZYbl7oyUXkzJE/AmQqwgOomoWx8IEL\r\ U6E=\r\ Private-MAC: 87d58cb0e3e60ef943ee9396fe9\r </code></pre> <p>Things i tried:</p> <ul> <li>included |- , &gt;-, only |,only &gt;</li> <li>tried enclosing in double quotes with backslash as escape character</li> </ul> <p>something like below</p> <pre><code> &quot;PuTTY-User-Key-File-2: ssh-rsa\ Encryption: none\ Comment: rsa-key-20210504...&quot; still got the same error as above </code></pre>
Sushmitha
<p>I tried with the secret type <code>kubernetes.io/ssh-auth</code> instead of <code>Opaque</code> and it worked! Thanks for the suggestions provided.</p>
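<p>For reference, a sketch of what such a secret could look like: with type <code>kubernetes.io/ssh-auth</code>, Kubernetes requires the key to be stored under the field name <code>ssh-privatekey</code>, and a literal block scalar (<code>|</code>) preserves the key's line breaks exactly (the name and key body below are placeholders):</p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sftp-private-key
type: kubernetes.io/ssh-auth
stringData:
  ssh-privatekey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...key lines pasted exactly as in the key file...
    -----END OPENSSH PRIVATE KEY-----
```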
Sushmitha
<p>I have a small NodeJS script that I want to run inside a container inside a kubernetes cluster as a CronJob. I'm having a bit of a hard time figuring out how to do that, given most examples are simple &quot;run this Bash command&quot; type deals.</p> <p>package.json:</p> <pre><code>{ ... &quot;scripts&quot;: { &quot;start&quot;: &quot;node bin/path/to/index.js&quot;, &quot;compile&quot;: &quot;tsc&quot; } } </code></pre> <p><code>npm run compile &amp;&amp; npm run start</code> works on the command-line. Moving on to the Docker container setup...</p> <p>Dockerfile:</p> <pre><code>FROM node:18 WORKDIR /working/dir/ ... RUN npm run compile CMD [ &quot;npm&quot;, &quot;run&quot;, &quot;start&quot; ] </code></pre> <p>When I build and then docker run this container on the command-line, the script runs successfully. This gives me confidence that most things above are correct and it must be a problem with my CronJob...</p> <p>my-cron.yaml:</p> <pre><code>apiVersion: batch/v1 kind: CronJob metadata: name: cron-foo spec: schedule: &quot;* * * * *&quot; jobTemplate: spec: template: spec: containers: - name: job-foo image: gcr.io/... 
imagePullPolicy: IfNotPresent restartPolicy: OnFailure </code></pre> <p>When I <code>kubectl apply -f my-cron.yaml</code> sure enough I get some pods that run, one per-minute, however they all error out:</p> <pre><code>% kubectl logs cron-foo-27805019-j8gbp &gt; [email protected] start &gt; node bin/path/to/index.js node:internal/modules/cjs/loader:998 throw err; ^ Error: Cannot find module '/working/dir/bin/path/to/index.js' at Module._resolveFilename (node:internal/modules/cjs/loader:995:15) at Module._load (node:internal/modules/cjs/loader:841:27) at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12) at node:internal/main/run_main_module:23:47 { code: 'MODULE_NOT_FOUND', requireStack: [] } Node.js v18.11.0 </code></pre> <p>The fact that it's trying to run the correct command means the correct Docker container is being pulled successfully, but I don't know why the script is not being found...</p> <p>Any help would be appreciated. Most CronJob examples I've seen have a <code>command:</code> list in the template spec...</p>
codedread
<p>The error you show about the path not being found should also appear when you <code>docker run ...</code> - but it didn't!</p> <p>So, I assume it is related to the <code>imagePullPolicy</code>. Presumably something was fixed, checked locally, and then re-pushed to the given registry for your Kubernetes workloads to use. If it was re-pushed with the <strong>same</strong> tag, then don't forget to tell Kubernetes to query the registry and download the new digest by changing the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer"><code>imagePullPolicy</code></a> to <code>Always</code>.</p>
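<p>In the CronJob spec from the question that is a one-line change:</p>

```yaml
containers:
  - name: job-foo
    image: gcr.io/...
    imagePullPolicy: Always   # re-query the registry for the tag's current digest on every run
```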
Ryan
<p>I manage a spring applications on K8s.</p> <p>Pods take about 2~3 minutes for the application to run, and the probe waiting time is set at 4 minutes. <a href="https://i.stack.imgur.com/wyZlC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wyZlC.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/PVCTD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PVCTD.png" alt="enter image description here" /></a></p> <p>After re-deploy an application, the CPU Spike symptom occurs while the Pod is being initialized. So, when Rolling Update is performed, all Pods are scaled up due to the CPU spike symptom. <a href="https://i.stack.imgur.com/K8AWp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K8AWp.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/bjPmh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bjPmh.png" alt="enter image description here" /></a></p> <p>I don't wanna collect a cpu while initialize application. Is there any way to delay collect cpu info?</p> <p><strong>K8s Version</strong> : 1.26.3</p> <p><strong>Deployment Info</strong></p> <pre><code>... livenessProbe: failureThreshold: 3 httpGet: path: /probex port: 9006 scheme: HTTP initialDelaySeconds: 240 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 1 name: sample-container ports: - containerPort: 9006 name: http protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /probex port: 9006 scheme: HTTP initialDelaySeconds: 240 periodSeconds: 60 successThreshold: 1 timeoutSeconds: 1 resources: limits: memory: 8Gi requests: cpu: 500m memory: 8Gi ... </code></pre> <p><strong>Hpa Info</strong></p> <pre><code>... spec: maxReplicas: 5 metrics: - resource: name: memory target: averageUtilization: 70 type: Utilization type: Resource - resource: name: cpu target: averageUtilization: 70 type: Utilization type: Resource minReplicas: 1 ... </code></pre>
Daniel Lee
<p>You can read about the <code>stabilizationWindowSeconds</code> and <code>periodSeconds</code> parameters in the HPA <code>behavior</code> configuration and tune them for your use case.</p>
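<p>For example, a <code>behavior</code> block on the <code>autoscaling/v2</code> HPA can make scale-ups ignore short-lived spikes like the one during pod startup (the numbers below are assumptions to tune against your 2-3 minute startup time):</p>

```yaml
spec:
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 300   # don't act on recommendations until they persist for 5 min
      policies:
        - type: Pods
          value: 1
          periodSeconds: 120            # add at most 1 pod every 2 minutes
```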
Satyam Bhatia
<p>The version of Kubernetes in use is v1.26.3.</p> <p>I have cloned the Kubernetes source code to peruse the scheduler logic.</p> <p>Upon inspection of the framework.go file, I have observed the existence of a frameworkImpl struct. I am curious as to how I can ascertain which plugins have been registered.</p> <pre><code> I am curious as to how I can ascertain which plugins have been registered // NewFramework initializes plugins given the configuration and the registry. func NewFramework(r Registry, profile *config.KubeSchedulerProfile, stopCh &lt;-chan struct{}, opts ...Option) (framework.Framework, error) { options := defaultFrameworkOptions(stopCh) for _, opt := range opts { opt(&amp;options) } f := &amp;frameworkImpl{ registry: r, snapshotSharedLister: options.snapshotSharedLister, scorePluginWeight: make(map[string]int), waitingPods: newWaitingPodsMap(), clientSet: options.clientSet, kubeConfig: options.kubeConfig, eventRecorder: options.eventRecorder, informerFactory: options.informerFactory, metricsRecorder: options.metricsRecorder, extenders: options.extenders, PodNominator: options.podNominator, parallelizer: options.parallelizer, } if profile == nil { return f, nil } f.profileName = profile.SchedulerName f.percentageOfNodesToScore = profile.PercentageOfNodesToScore if profile.Plugins == nil { return f, nil } // get needed plugins from config pg := f.pluginsNeeded(profile.Plugins) pluginConfig := make(map[string]runtime.Object, len(profile.PluginConfig)) for i := range profile.PluginConfig { name := profile.PluginConfig[i].Name if _, ok := pluginConfig[name]; ok { return nil, fmt.Errorf(&quot;repeated config for plugin %s&quot;, name) } pluginConfig[name] = profile.PluginConfig[i].Args } outputProfile := config.KubeSchedulerProfile{ SchedulerName: f.profileName, PercentageOfNodesToScore: f.percentageOfNodesToScore, Plugins: profile.Plugins, PluginConfig: make([]config.PluginConfig, 0, len(pg)), } pluginsMap := make(map[string]framework.Plugin) for name, 
factory := range r { // initialize only needed plugins. if !pg.Has(name) { continue } args := pluginConfig[name] if args != nil { outputProfile.PluginConfig = append(outputProfile.PluginConfig, config.PluginConfig{ Name: name, Args: args, }) } p, err := factory(args, f) if err != nil { return nil, fmt.Errorf(&quot;initializing plugin %q: %w&quot;, name, err) } pluginsMap[name] = p // Update ClusterEventMap in place. fillEventToPluginMap(p, options.clusterEventMap) } // initialize plugins per individual extension points for _, e := range f.getExtensionPoints(profile.Plugins) { if err := updatePluginList(e.slicePtr, *e.plugins, pluginsMap); err != nil { return nil, err } } // initialize multiPoint plugins to their expanded extension points if len(profile.Plugins.MultiPoint.Enabled) &gt; 0 { if err := f.expandMultiPointPlugins(profile, pluginsMap); err != nil { return nil, err } } if len(f.queueSortPlugins) != 1 { return nil, fmt.Errorf(&quot;only one queue sort plugin required for profile with scheduler name %q, but got %d&quot;, profile.SchedulerName, len(f.queueSortPlugins)) } if len(f.bindPlugins) == 0 { return nil, fmt.Errorf(&quot;at least one bind plugin is needed for profile with scheduler name %q&quot;, profile.SchedulerName) } if err := getScoreWeights(f, pluginsMap, append(profile.Plugins.Score.Enabled, profile.Plugins.MultiPoint.Enabled...)); err != nil { return nil, err } // Verifying the score weights again since Plugin.Name() could return a different // value from the one used in the configuration. 
for _, scorePlugin := range f.scorePlugins { if f.scorePluginWeight[scorePlugin.Name()] == 0 { return nil, fmt.Errorf(&quot;score plugin %q is not configured with weight&quot;, scorePlugin.Name()) } } if options.captureProfile != nil { if len(outputProfile.PluginConfig) != 0 { sort.Slice(outputProfile.PluginConfig, func(i, j int) bool { return outputProfile.PluginConfig[i].Name &lt; outputProfile.PluginConfig[j].Name }) } else { outputProfile.PluginConfig = nil } options.captureProfile(outputProfile) } return f, nil } </code></pre> <p>I am curious as to how I can ascertain which plugins have been registered</p>
fuyou001
<p>Actually, I'm trying to find out what plugins are used by default too.</p> <p>The Kubernetes version I'm using is 1.20.6.</p> <p>I found it pretty hard to get the answer from the source code.</p> <p>But you can export the configuration used by the scheduler instance by adding an argument to kube-scheduler.yaml:</p> <ul> <li>--write-config-to=/path/to/hostpath/config/file</li> </ul> <p>Attention:</p> <ul> <li><code>--v</code> should be &gt;= 2</li> <li>if write-config-to succeeds, the scheduler will exit 0, so remove this argument after you export the config file</li> </ul>
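<p>For example, in the kube-scheduler static pod manifest (the manifest path and dump path are assumptions for a kubeadm-style setup; the dump path must sit on a mounted hostPath volume to be visible on the node, and the flag may not exist in newer releases):</p>

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
    - command:
        - kube-scheduler
        - --v=2
        - --write-config-to=/etc/kubernetes/scheduler-config-dump.yaml
        # ...keep the existing flags...
```

<p>After the scheduler writes the file and exits, remove the flag again so it restarts normally; the dumped profile lists the enabled plugins per extension point.</p>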
xixiwhisper
<p>I am able to capture the snapshots of resource-definition using below commands <code>linstor rd sp pvc-00 AutoSnapshot/RunEvery 15</code> which captures a snapshot of given resource-definition(pvc-00) after every 15 minutes..</p> <pre><code> ResourceName ┊ SnapshotName pvc-006 ┊ autoSnap00071 </code></pre> <p>the snapshot is by default getting captured with the name autoSnap00071 , is there a way where i can manually provide a name to snapshots?</p> <p>Thanks in advance</p>
AKSHAY KADAM
<p>As @ghernadi mentioned in this <a href="https://github.com/LINBIT/linstor-server/issues/44#issuecomment-451837782" rel="nofollow noreferrer">comment</a>:</p> <p>Currently Linstor does not support any kind of renaming action. If you really have to do this, you have two options:</p> <ul> <li>NOT RECOMMENDED: you do it manually by editing the database (NOT<br /> recommended, as you have to be very careful to update all references to the changed name)</li> <li>a bit &quot;more&quot; recommended: delete your current database (or perform &quot;lostNode&quot; commands for the affected nodes) and recreate using migration everything with the new storage pool names.</li> </ul> <p>Note: we do have internal discussions if and how we could provide renaming actions. That means this situation might be easier to resolve in (far) future</p> <p>This is an open issue; you can follow this <a href="https://github.com/LINBIT/linstor-server/issues/44#issuecomment-451837782" rel="nofollow noreferrer">GitHub</a> link for feature enhancements. You can also comment under that issue or raise a new feature request.</p> <p>Refer to the official <a href="https://linbit.com/drbd-user-guide/linstor-guide-1_0-en/" rel="nofollow noreferrer">Linstor documentation</a> for more information.</p>
Hemanth Kumar
<p>I am running the following command to start a k8s pod</p> <p><code>kubectl run api --image=corina1998/api --env=&quot;PORT=3000&quot; </code></p> <p>And I get the following error in the pods' logs of my api service. This is the output of kubectl logs api command:</p> <pre><code>&gt; [email protected] start-docker /usr/src/app &gt; node src/start.js internal/fs/utils.js:332 throw err; ^ Error: EISDIR: illegal operation on a directory, read at Object.readSync (fs.js:617:3) at tryReadSync (fs.js:382:20) at Object.readFileSync (fs.js:419:19) at /usr/src/app/node_modules/docker-secret/dist/index.js:16:18 at Array.forEach (&lt;anonymous&gt;) at getSecrets (/usr/src/app/node_modules/docker-secret/dist/index.js:12:15) at Object.&lt;anonymous&gt; (/usr/src/app/node_modules/docker-secret/dist/index.js:30:19) at Module._compile (internal/modules/cjs/loader.js:1085:14) at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10) at Module.load (internal/modules/cjs/loader.js:950:32) { errno: -21, syscall: 'read', code: 'EISDIR' } npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! [email protected] start-docker: `node src/start.js` npm ERR! Exit status 1 npm ERR! npm ERR! Failed at the [email protected] start-docker script. npm ERR! This is probably not a problem with npm. There is likely additional logging output above. </code></pre> <p>What could be the problem and how could I debug?</p>
Corina M
<p>I solved this weird problem by removing <code>package-lock.json</code> and rerunning <code>npm install</code> to generate a new <code>package-lock.json</code> file; then I rebuilt the image and now it works. I had previously used a combination of <code>yarn install</code> and <code>npm install</code>, which affected my <code>package-lock.json</code> file.</p>
Corina M
<p>I've created a Kubernetes cluster in Azure using the following Terraform</p> <pre><code># Locals block for hardcoded names locals { backend_address_pool_name = &quot;appgateway-beap&quot; frontend_port_name = &quot;appgateway-feport&quot; frontend_ip_configuration_name = &quot;appgateway-feip&quot; http_setting_name = &quot;appgateway-be-htst&quot; listener_name = &quot;appgateway-httplstn&quot; request_routing_rule_name = &quot;appgateway-rqrt&quot; app_gateway_subnet_name = &quot;appgateway-subnet&quot; } data &quot;azurerm_subnet&quot; &quot;aks-subnet&quot; { name = &quot;aks-subnet&quot; virtual_network_name = &quot;np-dat-spoke-vnet&quot; resource_group_name = &quot;ipz12-dat-np-connect-rg&quot; } data &quot;azurerm_subnet&quot; &quot;appgateway-subnet&quot; { name = &quot;appgateway-subnet&quot; virtual_network_name = &quot;np-dat-spoke-vnet&quot; resource_group_name = &quot;ipz12-dat-np-connect-rg&quot; } # Create Resource Group for Kubernetes Cluster module &quot;resource_group_kubernetes_cluster&quot; { source = &quot;./modules/resource_group&quot; count = var.enable_kubernetes == true ? 
1 : 0 #name_override = &quot;rg-aks-spoke-dev-westus3-001&quot; app_or_service_name = &quot;aks&quot; # var.app_or_service_name subscription_type = var.subscription_type # &quot;spoke&quot; environment = var.environment # &quot;dev&quot; location = var.location # &quot;westus3&quot; instance_number = var.instance_number # &quot;001&quot; tags = var.tags } resource &quot;azurerm_user_assigned_identity&quot; &quot;identity_uami&quot; { location = var.location name = &quot;appgw-uami&quot; resource_group_name = module.resource_group_kubernetes_cluster[0].name } # Application Gateway Public Ip resource &quot;azurerm_public_ip&quot; &quot;test&quot; { name = &quot;publicIp1&quot; location = var.location resource_group_name = module.resource_group_kubernetes_cluster[0].name allocation_method = &quot;Static&quot; sku = &quot;Standard&quot; } resource &quot;azurerm_application_gateway&quot; &quot;network&quot; { name = var.app_gateway_name resource_group_name = module.resource_group_kubernetes_cluster[0].name location = var.location sku { name = var.app_gateway_sku tier = &quot;Standard_v2&quot; capacity = 2 } identity { type = &quot;UserAssigned&quot; identity_ids = [ azurerm_user_assigned_identity.identity_uami.id ] } gateway_ip_configuration { name = &quot;appGatewayIpConfig&quot; subnet_id = data.azurerm_subnet.appgateway-subnet.id } frontend_port { name = local.frontend_port_name port = 80 } frontend_port { name = &quot;httpsPort&quot; port = 443 } frontend_ip_configuration { name = local.frontend_ip_configuration_name public_ip_address_id = azurerm_public_ip.test.id } backend_address_pool { name = local.backend_address_pool_name } backend_http_settings { name = local.http_setting_name cookie_based_affinity = &quot;Disabled&quot; port = 80 protocol = &quot;Http&quot; request_timeout = 1 } http_listener { name = local.listener_name frontend_ip_configuration_name = local.frontend_ip_configuration_name frontend_port_name = local.frontend_port_name protocol = 
&quot;Http&quot; } request_routing_rule { name = local.request_routing_rule_name rule_type = &quot;Basic&quot; http_listener_name = local.listener_name backend_address_pool_name = local.backend_address_pool_name backend_http_settings_name = local.http_setting_name priority = 100 } tags = var.tags depends_on = [azurerm_public_ip.test] lifecycle { ignore_changes = [ backend_address_pool, backend_http_settings, request_routing_rule, http_listener, probe, tags, frontend_port ] } } # Create the Azure Kubernetes Service (AKS) Cluster resource &quot;azurerm_kubernetes_cluster&quot; &quot;kubernetes_cluster&quot; { count = var.enable_kubernetes == true ? 1 : 0 name = &quot;aks-prjx-${var.subscription_type}-${var.environment}-${var.location}-${var.instance_number}&quot; location = var.location resource_group_name = module.resource_group_kubernetes_cluster[0].name # &quot;rg-aks-spoke-dev-westus3-001&quot; dns_prefix = &quot;dns-aks-prjx-${var.subscription_type}-${var.environment}-${var.location}-${var.instance_number}&quot; #&quot;dns-prjxcluster&quot; private_cluster_enabled = false local_account_disabled = true default_node_pool { name = &quot;npprjx${var.subscription_type}&quot; #&quot;prjxsyspool&quot; # NOTE: &quot;name must start with a lowercase letter, have max length of 12, and only have characters a-z0-9.&quot; vm_size = &quot;Standard_B8ms&quot; vnet_subnet_id = data.azurerm_subnet.aks-subnet.id # zones = [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;] enable_auto_scaling = true max_count = 3 min_count = 1 # node_count = 3 os_disk_size_gb = 50 type = &quot;VirtualMachineScaleSets&quot; enable_node_public_ip = false enable_host_encryption = false node_labels = { &quot;node_pool_type&quot; = &quot;npprjx${var.subscription_type}&quot; &quot;node_pool_os&quot; = &quot;linux&quot; &quot;environment&quot; = &quot;${var.environment}&quot; &quot;app&quot; = &quot;prjx_${var.subscription_type}_app&quot; } tags = var.tags } ingress_application_gateway { gateway_id = 
azurerm_application_gateway.network.id } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = true admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = true #false } network_profile { network_plugin = &quot;azure&quot; network_policy = &quot;azure&quot; outbound_type = &quot;userDefinedRouting&quot; } identity { type = &quot;SystemAssigned&quot; } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = &quot;20m&quot; delete = &quot;20m&quot; } depends_on = [ azurerm_application_gateway.network ] } </code></pre> <p>and provided the necessary permissions</p> <pre><code># Get the AKS Agent Pool SystemAssigned Identity data &quot;azurerm_user_assigned_identity&quot; &quot;aks-identity&quot; { name = &quot;${azurerm_kubernetes_cluster.kubernetes_cluster[0].name}-agentpool&quot; resource_group_name = &quot;MC_${module.resource_group_kubernetes_cluster[0].name}_aks-prjx-spoke-dev-eastus-001_eastus&quot; } # Get the AKS SystemAssigned Identity data &quot;azuread_service_principal&quot; &quot;aks-sp&quot; { display_name = azurerm_kubernetes_cluster.kubernetes_cluster[0].name } # Provide ACR Pull permission to AKS SystemAssigned Identity resource &quot;azurerm_role_assignment&quot; &quot;acrpull_role&quot; { scope = module.container_registry[0].id role_definition_name = &quot;AcrPull&quot; principal_id = data.azurerm_user_assigned_identity.aks-identity.principal_id skip_service_principal_aad_check = true depends_on = [ data.azurerm_user_assigned_identity.aks-identity ] } resource &quot;azurerm_role_assignment&quot; &quot;aks_id_network_contributor_subnet&quot; { scope = data.azurerm_subnet.aks-subnet.id role_definition_name = &quot;Network Contributor&quot; principal_id = data.azurerm_user_assigned_identity.aks-identity.principal_id depends_on = 
[data.azurerm_user_assigned_identity.aks-identity] } resource &quot;azurerm_role_assignment&quot; &quot;akssp_network_contributor_subnet&quot; { scope = data.azurerm_subnet.aks-subnet.id role_definition_name = &quot;Network Contributor&quot; principal_id = data.azuread_service_principal.aks-sp.object_id depends_on = [data.azuread_service_principal.aks-sp] } resource &quot;azurerm_role_assignment&quot; &quot;aks_id_contributor_agw&quot; { scope = data.azurerm_subnet.appgateway-subnet.id role_definition_name = &quot;Network Contributor&quot; principal_id = data.azurerm_user_assigned_identity.aks-identity.principal_id depends_on = [data.azurerm_user_assigned_identity.aks-identity] } resource &quot;azurerm_role_assignment&quot; &quot;akssp_contributor_agw&quot; { scope = data.azurerm_subnet.appgateway-subnet.id role_definition_name = &quot;Network Contributor&quot; principal_id = data.azuread_service_principal.aks-sp.object_id depends_on = [data.azuread_service_principal.aks-sp] } resource &quot;azurerm_role_assignment&quot; &quot;aks_ingressid_contributor_on_agw&quot; { scope = azurerm_application_gateway.network.id role_definition_name = &quot;Contributor&quot; principal_id = azurerm_kubernetes_cluster.kubernetes_cluster[0].ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id depends_on = [azurerm_application_gateway.network,azurerm_kubernetes_cluster.kubernetes_cluster] skip_service_principal_aad_check = true } resource &quot;azurerm_role_assignment&quot; &quot;aks_ingressid_contributor_on_uami&quot; { scope = azurerm_user_assigned_identity.identity_uami.id role_definition_name = &quot;Contributor&quot; principal_id = azurerm_kubernetes_cluster.kubernetes_cluster[0].ingress_application_gateway[0].ingress_application_gateway_identity[0].object_id depends_on = [azurerm_application_gateway.network,azurerm_kubernetes_cluster.kubernetes_cluster] skip_service_principal_aad_check = true } resource &quot;azurerm_role_assignment&quot; 
&quot;uami_contributor_on_agw&quot; { scope = azurerm_application_gateway.network.id role_definition_name = &quot;Contributor&quot; principal_id = azurerm_user_assigned_identity.identity_uami.principal_id depends_on = [azurerm_application_gateway.network,azurerm_user_assigned_identity.identity_uami] skip_service_principal_aad_check = true } </code></pre> <p>and deployed the below mentioned application</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: aks-helloworld spec: replicas: 1 selector: matchLabels: app: aks-helloworld-two template: metadata: labels: app: aks-helloworld-two spec: containers: - name: aks-helloworld-two image: mcr.microsoft.com/azuredocs/aks-helloworld:v1 ports: - containerPort: 80 env: - name: TITLE value: &quot;AKS Ingress Demo&quot; --- apiVersion: v1 kind: Service metadata: name: aks-helloworld spec: type: LoadBalancer ports: - port: 80 selector: app: aks-helloworld-two </code></pre> <p><a href="https://i.stack.imgur.com/4tU3n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4tU3n.png" alt="enter image description here" /></a></p> <p>External IP got assigned</p> <p><a href="https://i.stack.imgur.com/YOOZQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YOOZQ.png" alt="enter image description here" /></a></p> <p>however I am not able to access the External IP</p> <p><a href="https://i.stack.imgur.com/bjuDM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bjuDM.png" alt="enter image description here" /></a></p> <p><strong>Note:</strong> I have not deployed any Ingress controller separately like mentioned in the <a 
href="https://NAMESPACE=ingress-basic%20%20helm%20repo%20add%20ingress-nginx%20https://kubernetes.github.io/ingress-nginx%20helm%20repo%20update%20%20helm%20install%20ingress-nginx%20ingress-nginx/ingress-nginx%20%5C%20%20%20--create-namespace%20%5C%20%20%20--namespace%20$NAMESPACE%20%5C%20%20%20--set%20controller.service.annotations.%22service%5C.beta%5C.kubernetes%5C.io/azure-load-balancer-health-probe-request-path%22=/healthz" rel="nofollow noreferrer">Microsoft Article</a> as I am not sure this is required</p>
One Developer
<p><strong>I tried to reproduce the same in my environment to create a Kubernetes Service cluster with Application Gateway:</strong></p> <p>Follow the <a href="https://stackoverflow.com/questions/75154385/aks-loadbalancer-external-ip-stuck-on-pending/75159961#75159961">Stack link</a> to create a <strong>Kubernetes Service cluster</strong> with an <strong>Ingress Application Gateway</strong>.</p> <p>If you are unable to access your application using the external load balancer IP after deployment in <strong>Azure Kubernetes Service (AKS)</strong>, verify the settings below in the <strong>AKS cluster</strong>.</p> <p>1. Check the status of the load balancer using the command below.</p> <pre><code>kubectl get service &lt;your service name&gt; </code></pre> <p>Make sure that the <strong>External-IP</strong> field is not set to the <strong>Pending</strong> state.</p> <p><img src="https://i.stack.imgur.com/x3a4A.png" alt="enter image description here" /></p> <ol start="2"> <li>Verify the security group associated with the load balancer. 
Make sure that the security group allows traffic on the desired port.</li> </ol> <p>Follow the steps below to check the <strong>NSG security</strong> rules in the <strong>AKS</strong> cluster.</p> <p><strong>Go to Azure Portal &gt; Kubernetes services &gt; Select your Kubernetes service &gt; Properties &gt; Select your resource group under Infrastructure resource group &gt; Overview &gt; Select your NSG group.</strong></p> <p><img src="https://i.stack.imgur.com/u661X.png" alt="enter image description here" /></p> <p>I disabled the inbound <strong>HTTP</strong> rule in the <strong>Network Security Group</strong> for testing and got the same error.</p> <p><img src="https://i.stack.imgur.com/vkI9h.png" alt="enter image description here" /></p> <p>Application status once <strong>port</strong> 80 is disabled in the <strong>NSG</strong>.</p> <p><img src="https://i.stack.imgur.com/GELUH.png" alt="enter image description here" /></p> <ol start="3"> <li>Check the routing rules on your virtual network. Make sure that traffic is being forwarded from the load balancer.</li> </ol> <p><img src="https://i.stack.imgur.com/4QjNX.png" alt="enter image description here" /></p>
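<p>The checks above can also be scripted from a terminal; a minimal sketch, assuming the service name <code>aks-helloworld</code> from the question (the resource group and NSG names are placeholders to replace with your own):</p>

```shell
# 1. Confirm the service got an external IP and has healthy endpoints behind it
kubectl get service aks-helloworld -o wide
kubectl get endpoints aks-helloworld   # empty ENDPOINTS points to a selector/pod issue, not an NSG issue

# 2. List the inbound rules of the NSG in the node resource group
az network nsg rule list \
  --resource-group MC_myResourceGroup_myAKSCluster_westus3 \
  --nsg-name aks-agentpool-00000000-nsg \
  --query "[].{Name:name, Port:destinationPortRange, Access:access}" \
  --output table
```

<p>These commands only read state, so they are safe to run repeatedly while narrowing down where traffic is dropped.</p>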
Venkat V
<p>We are integrating Eclipse Ditto into a digital twin platform, but we have encountered a problem while testing and we don't really know how to fix it.</p> <p>We asked a question related to this one some time ago and it worked. Here is the link to that question: <a href="https://stackoverflow.com/questions/76044974/eclipse-ditto-does-not-send-all-things-events-over-target-connection">Eclipse Ditto does not send all things events over target connection</a></p> <p>Unfortunately it started failing again, but we don't think the problem is the same as before.</p> <p>We are in the same scenario: the goal is for 593 twins (Ditto Things) to receive the result of a simulation. The idea is to be able to do several simulation runs simultaneously, with each simulation run sending 593 messages to a Kafka topic. For example, for 6 runs we will have 3558 messages in the topic.</p> <p>We updated all the fields and values that were given to us, deleted the JavaScript mapping, and tested with the maximum number of simulations, 360. It worked with 360 simulations that sent a total of 213480 messages. No messages were dropped in any of the tests that we carried out. Perfect!</p> <p>So we decided to run some tests over the whole platform to measure latency. The workflow of the data is the following:</p> <p><strong>Simulation --&gt; Kafka --&gt; Ditto --&gt; MQTT (Mosquitto) --&gt; Database</strong></p> <p>We made a script that sent 1 simulation, waited for the data to be stored in the database and then retrieved timestamps. When all 593 messages arrived, the script sent 2 simulations, waited for all 1186 messages to arrive in the db and then sent a run with 3 simulations. The script should stop when it reached 360 simultaneous simulations.</p> <p>We found that Ditto was not capable of processing the data from 200 simulations, even though it had previously supported 360. 
We tried giving Ditto and its components more resources (don't worry, we still have free resources), but nothing changed. It even got worse.</p> <p>We decided to reinstall every component with the configuration that worked previously, but now we found some problems:</p> <ul> <li>Sometimes some messages remain in Kafka and Ditto doesn't read them.</li> <li>Sometimes all data is read from Kafka but no messages are sent to MQTT.</li> <li>Sometimes it reads some messages from Kafka but not all, and then Ditto sends the read data to MQTT multiple times.</li> </ul> <p>The funny thing is that <strong>all those unread/unsent messages are sometimes sent to the MQTT broker after 1 or 2 hours</strong>, even though we delete all messages from the Kafka topic. We thought that Ditto stores some data in a cache, but we don't know how to clear it or stop it.</p> <p>Furthermore, despite all these problems, we have 5 twins receiving data every 15 minutes and sending it over MQTT via other connections. These twins are working properly at all times.</p> <p>On the other hand, we are a little bit confused about resource management because we are using Kubernetes. We don't know exactly the amount of resources that Ditto needs for a specific number of connections, things, etc., or even if we need to give arguments to the JVM. 
Sometimes connections pods are restarted due to an <em>AskTimeoutException</em> error.</p> <p>Here are the connections we have established, their logs and metrics, along with the Helm's values.yaml.</p> <ul> <li><p>Before executions:</p> <ul> <li>Source connection status : <a href="https://pastebin.com/xgtqFZab" rel="nofollow noreferrer">https://pastebin.com/xgtqFZab</a></li> <li>Target connection status : <a href="https://pastebin.com/YMJE3xs2" rel="nofollow noreferrer">https://pastebin.com/YMJE3xs2</a></li> </ul> </li> <li><p>After executing 1 simulation (593 messages):</p> <ul> <li><p>Source connection status : <a href="https://pastebin.com/jaxB7LQ0" rel="nofollow noreferrer">https://pastebin.com/jaxB7LQ0</a></p> </li> <li><p>Target connection status : <a href="https://pastebin.com/RZ4p0Mq9" rel="nofollow noreferrer">https://pastebin.com/RZ4p0Mq9</a></p> </li> <li><p>Source connection metrics : <a href="https://pastebin.com/mGKPDr8V" rel="nofollow noreferrer">https://pastebin.com/mGKPDr8V</a></p> </li> <li><p>Target connection metrics : <a href="https://pastebin.com/kwTZHmiK" rel="nofollow noreferrer">https://pastebin.com/kwTZHmiK</a></p> </li> <li><p>Source connection logs : <a href="https://pastebin.com/dfaDyUS5" rel="nofollow noreferrer">https://pastebin.com/dfaDyUS5</a></p> </li> <li><p>Target connection logs : <a href="https://pastebin.com/TxRVHfjq" rel="nofollow noreferrer">https://pastebin.com/TxRVHfjq</a></p> </li> </ul> </li> </ul> <p>When executing just one simulation at the begining of the morning it works correctly, but when executing simulations after that, it start failing.</p> <ul> <li>After executing 11 simulations (6.523 messages) <ul> <li>Source connection status : <a href="https://pastebin.com/G9mYpmnT" rel="nofollow noreferrer">https://pastebin.com/G9mYpmnT</a></li> <li>Target connection status : <a href="https://pastebin.com/0ij6pDYn" rel="nofollow noreferrer">https://pastebin.com/0ij6pDYn</a></li> <li>Source connection metrics : <a 
href="https://pastebin.com/QjTDwBmL" rel="nofollow noreferrer">https://pastebin.com/QjTDwBmL</a></li> <li>Target connection metrics : <a href="https://pastebin.com/P5MVFTJu" rel="nofollow noreferrer">https://pastebin.com/P5MVFTJu</a></li> <li>Source connection logs : <a href="https://pastebin.com/Kpft7Tme" rel="nofollow noreferrer">https://pastebin.com/Kpft7Tme</a></li> <li>Target connection logs : <a href="https://pastebin.com/wMe4DYnA" rel="nofollow noreferrer">https://pastebin.com/wMe4DYnA</a></li> </ul> </li> </ul> <p><strong>Source</strong> connection:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;connection-for-pivot-simulation-with-idSimulationRun&quot;, &quot;connectionType&quot;: &quot;kafka&quot;, &quot;connectionStatus&quot;: &quot;open&quot;, &quot;uri&quot;: &quot;tcp://KAFKAIP&quot;, &quot;sources&quot;: [ { &quot;addresses&quot;: [ &quot;riego&quot; ], &quot;consumerCount&quot;: 1, &quot;qos&quot;: 1, &quot;authorizationContext&quot;: [ &quot;nginx:ditto&quot; ], &quot;headerMapping&quot;: { &quot;correlation-id&quot;: &quot;{{header:correlation-id}}&quot;, &quot;namespace&quot;: &quot;{{ entity:namespace }}&quot;, &quot;content-type&quot;: &quot;{{header:content-type}}&quot;, &quot;connection&quot;: &quot;{{ connection:id }}&quot;, &quot;id&quot;: &quot;{{ entity:id }}&quot;, &quot;reply-to&quot;: &quot;{{header:reply-to}}&quot; }, &quot;replyTarget&quot;: { &quot;address&quot;: &quot;{{header:reply-to}}&quot;, &quot;headerMapping&quot;: { &quot;content-type&quot;: &quot;{{header:content-type}}&quot;, &quot;correlation-id&quot;: &quot;{{header:correlation-id}}&quot; }, &quot;expectedResponseTypes&quot;: [ &quot;response&quot;, &quot;error&quot; ], &quot;enabled&quot;: true } } ], &quot;targets&quot;: [], &quot;clientCount&quot;: 5, &quot;failoverEnabled&quot;: true, &quot;validateCertificates&quot;: true, &quot;processorPoolSize&quot;: 1, &quot;specificConfig&quot;: { &quot;saslMechanism&quot;: &quot;plain&quot;, 
&quot;bootstrapServers&quot;: &quot;KAFKAIP&quot; }, &quot;tags&quot;: [] } </code></pre> <p><strong>Target</strong> connection:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;mqtt-connection-for-telegraf-pivot&quot;, &quot;connectionType&quot;: &quot;mqtt-5&quot;, &quot;connectionStatus&quot;: &quot;open&quot;, &quot;uri&quot;: &quot;tcp://MQTTIP&quot;, &quot;sources&quot;: [], &quot;targets&quot;: [ { &quot;address&quot;: &quot;opentwins/{{ topic:channel }}/{{ topic:criterion }}/{{ thing:namespace }}/{{ thing:name }}&quot;, &quot;topics&quot;: [ &quot;_/_/things/twin/events?namespaces=pivot&amp;extraFields=thingId,attributes/_parents,features/idSimulationRun/properties/value&quot;, &quot;_/_/things/live/messages&quot;, &quot;_/_/things/live/commands&quot; ], &quot;qos&quot;: 1, &quot;authorizationContext&quot;: [ &quot;nginx:ditto&quot; ], &quot;headerMapping&quot;: {} } ], &quot;clientCount&quot;: 5, &quot;failoverEnabled&quot;: true, &quot;validateCertificates&quot;: true, &quot;processorPoolSize&quot;: 1, &quot;tags&quot;: [] } </code></pre> <p><strong>Values:</strong></p> <pre class="lang-yaml prettyprint-override"><code> swaggerui: enabled: false mongodb: enabled: false global: prometheus: enabled: true dbconfig: connectivity: uri: mongodb://dt-mongodb:27017/connectivity things: uri: mongodb://dt-mongodb:27017/things searchDB: uri: mongodb://dt-mongodb:27017/search policies: uri: mongodb://dt-mongodb:27017/policies connectivity: replicaCount: 5 extraEnv: - name: MQTT_CONSUMER_THROTTLING_ENABLED value: &quot;false&quot; - name: MQTT_CONSUMER_THROTTLING_LIMIT value: &quot;100000&quot; - name: KAFKA_CONSUMER_THROTTLING_ENABLED value: &quot;false&quot; - name: KAFKA_CONSUMER_THROTTLING_LIMIT value: &quot;100000&quot; - name: KAFKA_SESSION_TIMEOUT_MS value: &quot;60000&quot; - name: CONNECTIVITY_MQTT_MAX_QUEUE_SIZE value: &quot;100000&quot; - name: CONNECTIVITY_KAFKA_MAX_QUEUE_SIZE value: &quot;100000&quot; - name: 
CONNECTIVITY_SIGNAL_ENRICHMENT_BUFFER_SIZE value: &quot;300000&quot; - name: CONNECTIVITY_MESSAGE_MAPPING_MAX_POOL_SIZE value: &quot;10&quot; resources: requests: cpu: 2000m limits: memory: 3Gi gateway: resources: requests: cpu: 1000m limits: memory: 768Mi nginx: replicaCount: 2 service: type: NodePort nodePort: 30525 resources: requests: cpu: 500m limits: cpu: 1000m memory: 768Mi policies: resources: requests: cpu: 1000m limits: memory: 768Mi things: replicaCount: 1 resources: requests: cpu: 1000m limits: memory: 8192Mi thingsSearch: resources: requests: cpu: 1000m limits: memory: 768Mi </code></pre>
Julia Robles
<p>The behaviour described in the OP was a product of multiple errors in Ditto 3.1.2:</p> <ul> <li>AskTimeoutException: caused by some badly designed NetworkPolicies. For more information, see <a href="https://github.com/eclipse-ditto/ditto/issues/1649" rel="nofollow noreferrer">this issue</a></li> <li>Kafka redeliveries: caused by connectivity sending messages again and again on error. See this <a href="https://github.com/eclipse-ditto/ditto/pull/1657" rel="nofollow noreferrer">PR</a></li> <li>High memory usage: due to Akka's nature, disabling throttling is a terrible idea. Connectivity will keep feeding the system with messages until it eventually brings the system down. Throttling isn't ideal either, but it is better than having the cluster crash.</li> </ul> <p>We believe version 3.3.0 has solved most of our issues, but further testing is needed.</p>
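<p>Following the last point, a hedged sketch of how the throttling settings from the question's <code>values.yaml</code> could be turned back on instead of disabled (the limit value is illustrative, not a tuned recommendation):</p>

```yaml
connectivity:
  extraEnv:
    # Keep backpressure enabled so connectivity does not flood the cluster
    - name: KAFKA_CONSUMER_THROTTLING_ENABLED
      value: "true"
    # Illustrative limit; tune to what your cluster can actually absorb
    - name: KAFKA_CONSUMER_THROTTLING_LIMIT
      value: "1000"
    - name: MQTT_CONSUMER_THROTTLING_ENABLED
      value: "true"
```

<p>The trade-off is slower intake during bursts in exchange for the cluster staying up, which matches the reasoning above.</p>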
Altair Bueno
<p>I want to deploy a MariaDB Galera instance onto a local Minikube cluster with 3 nodes via Helm. I used the following command for that:</p> <pre><code>helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test </code></pre> <p>The problem is, if I do that I get the following error in the log:</p> <pre><code>mariadb 10:27:41.60 mariadb 10:27:41.60 Welcome to the Bitnami mariadb-galera container mariadb 10:27:41.60 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb-galera mariadb 10:27:41.60 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb-galera/issues mariadb 10:27:41.61 mariadb 10:27:41.61 INFO ==&gt; ** Starting MariaDB setup ** mariadb 10:27:41.64 INFO ==&gt; Validating settings in MYSQL_*/MARIADB_* env vars mariadb 10:27:41.67 INFO ==&gt; Initializing mariadb database mkdir: cannot create directory '/bitnami/mariadb/data': Permission denied </code></pre> <p>The site of the image lists the possibility to use an extra init container to fix that (<a href="https://artifacthub.io/packages/helm/bitnami/mariadb-galera#extra-init-containers" rel="nofollow noreferrer">Link</a>).</p> <p>So I came up with the following configuration:</p> <p>mariadb-galera-init-config.yaml</p> <pre><code>extraInitContainers: - name: initcontainer image: bitnami/minideb command: [&quot;chown -R 1001:1001 /bitnami/mariadb/&quot;] </code></pre> <p>The problem is that when I run the command with this configuration:</p> <pre><code>helm install my-release bitnami/mariadb-galera --set rootUser.password=test --set db.name=test -f mariadb-galera-init-config.yaml </code></pre> <p>I get the following error on the Minikube dashboard:</p> <pre><code>Error: failed to start container &quot;initcontainer&quot;: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: &quot;chown -R 1001:1001 /bitnami/mariadb/&quot;: stat chown 
-R 1001:1001 /bitnami/mariadb/: no such file or directory: unknown </code></pre> <p>I don't know how to fix this configuration file, or whether there is a better way to get this working...</p>
Relative_Programming
<p>In case anyone has issues with this, may I suggest running an initContainer first.</p> <pre><code>initContainers:
  - name: mariadb-create-directory-structure
    image: busybox
    command:
      [
        &quot;sh&quot;,
        &quot;-c&quot;,
        &quot;mkdir -p /bitnami/mariadb/data &amp;&amp; chown -R 1001:1001 /bitnami&quot;,
      ]
    volumeMounts:
      - name: data
        mountPath: /bitnami
</code></pre>
ventsislav_rs
<p>I am trying to run a Windows container on Azure Kubernetes Service. For that I need to increase the <strong>container</strong> disk size, which is currently 20 GB. As per the Microsoft <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage" rel="nofollow noreferrer">official document</a>, we can increase the disk size using storage-opt.</p> <p>Could anyone tell me how to provide this option to the container? I am not able to find it.</p>
Anuj Nautiyal
<blockquote> <p>Increase pod disk-size in kubernetes.</p> </blockquote> <p>Thanks to <code>Rick Rackow</code> for suggesting the same point.</p> <p>You can use <code>Persistent Volumes</code> to manage storage in <code>Azure Kubernetes Service</code> and potentially increase disk space for your containers.</p> <p>You can follow the steps below.</p> <ol> <li>Create a <strong>Storage Class</strong> using the <code>yaml</code> below.</li> </ol> <pre class="lang-yaml prettyprint-override"><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
</code></pre> <p>I created a <code>Storage Class</code> called <code>azuredisk-premium-retain</code> that uses <code>Azure Managed Disks</code> with the <code>Premium_LRS</code> storage type.</p> <pre><code>kubectl apply -f storage.yaml
</code></pre> <ol start="2"> <li><strong>Create a Persistent Volume Claim</strong>: Create a PVC that requests storage from the defined Storage Class.</li> </ol> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk-premium-retain
  resources:
    requests:
      storage: 50Gi # Specify the required storage size as per your requirement
</code></pre> <p>Create the <code>PVC</code> using the command below.</p> <pre><code>kubectl apply -f pvc.yaml
</code></pre> <ol start="3"> <li>Create a <code>Pod</code> and mount the <code>PVC</code>.</li> </ol> <pre class="lang-yaml prettyprint-override"><code>kind: Pod
apiVersion: v1
metadata:
  name: newpod
spec:
  containers:
    - name: newpod
      image: nginx:latest
      volumeMounts:
        - mountPath: &quot;/mnt/azure&quot;
          name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azure-managed-disk-pvc
</code></pre> 
<p>Create the <code>Pod</code> using the command below.</p> <pre><code>kubectl apply -f pod.yaml
</code></pre> <p><img src="https://i.imgur.com/b56b24b.png" alt="enter image description here" /></p> <p>I created a <code>Pod</code> called <code>newpod</code> with a container that mounts the PVC at <code>/mnt/azure</code>. The volume is defined with the name <code>volume</code>, and it references the <code>azure-managed-disk-pvc</code> PVC created earlier with a size of <code>50GB</code>.</p> <p>If needed, you can increase the size of the PVC by updating the <code>storage</code> field in its YAML definition and then reapplying the configuration.</p> <p>Reference : <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/container-storage" rel="nofollow noreferrer">Container Storage Overview</a></p>
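<p>Because the storage class above sets <code>allowVolumeExpansion: true</code>, the claim can also be grown in place without editing the file; a sketch (the new size is an example):</p>

```shell
# Request a larger size on the existing claim (example: 50Gi -> 100Gi)
kubectl patch pvc azure-managed-disk-pvc \
  --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# Watch the CAPACITY column update once the underlying disk is resized
kubectl get pvc azure-managed-disk-pvc
```

<p>Note that Azure Disk volumes only grow, never shrink, so pick the new size deliberately.</p>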
Venkat V
<p>So, dumb me has deleted a very important StatefulSet (I was working in the wrong environment) and I need to recover it. I have the .yaml file and the description (what you get when you click Edit in OpenLens). This StatefulSet is used for the database and I cannot lose the data. The PVCs and PVs are still there and I have not done anything for fear of losing data. As you can probably tell, I am not very familiar with Kubernetes and need help restoring my StatefulSet without losing data in the process.</p> <p>As a side note, I tried just <code>kubectl apply -f &lt;file&gt;</code> in our dev environment and data gets lost.</p>
Stieg
<p>To restore the StatefulSet without losing data, first check the status of the PersistentVolumeClaims (PVCs) and PersistentVolumes (PVs) associated with the StatefulSet. You can do this by running the <code>kubectl get pvc</code> and <code>kubectl get pv</code> commands. Once you have verified that the PVCs and PVs are intact, you can recreate the StatefulSet from the same .yaml file with the <code>kubectl apply -f</code> command. If you want to ensure that the StatefulSet is restored exactly as it was before it was deleted, you can use the <code>kubectl replace -f</code> command instead. This will replace the existing StatefulSet with the one defined in the .yaml file, without changing any of the data on the associated PVCs and PVs.</p> <p>To ensure that your data is not lost in the process, it is recommended that you create a backup of the StatefulSet before performing any of the above commands.</p>
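<p>The sequence above might look like this, assuming the saved manifest is <code>statefulset.yaml</code> (the namespace and StatefulSet names are placeholders for your own):</p>

```shell
# Verify the claims and volumes survived the StatefulSet deletion
kubectl get pvc -n my-namespace
kubectl get pv

# Recreate the StatefulSet from the saved manifest
kubectl apply -f statefulset.yaml -n my-namespace

# Keep a backup of the live object for next time
kubectl get statefulset my-database -n my-namespace -o yaml > statefulset-backup.yaml
```

<p>The existing PVCs are re-attached because the recreated StatefulSet's <code>volumeClaimTemplates</code> generate the same claim names as before, so the pods mount the old volumes rather than provisioning new ones.</p>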
Hemanth Kumar
<p>I want to monitor the disk usage% of a specific filesystem in a pod (in grafana). I was wondering if there’s a Prometheus query I can use to do so.</p>
Jay
<p>To monitor PVC, disk usage %, you can use something like this</p> <pre class="lang-js prettyprint-override"><code>100 - (kubelet_volume_stats_available_bytes / kubelet_volume_stats_capacity_bytes) * 100 </code></pre> <p>To monitor file system, disk usage %, you can use something like this</p> <pre class="lang-js prettyprint-override"><code>100 - (node_filesystem_avail_bytes / node_filesystem_size_bytes * 100) </code></pre> <p>Hope this helps.</p>
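<p>To scope either expression to a single PVC or a single mount point, add label matchers (the namespace, claim, and mountpoint values below are placeholders):</p>

<pre class="lang-js prettyprint-override"><code>100 - (kubelet_volume_stats_available_bytes{namespace=&quot;my-ns&quot;, persistentvolumeclaim=&quot;my-pvc&quot;}
     / kubelet_volume_stats_capacity_bytes{namespace=&quot;my-ns&quot;, persistentvolumeclaim=&quot;my-pvc&quot;}) * 100

100 - (node_filesystem_avail_bytes{mountpoint=&quot;/data&quot;}
     / node_filesystem_size_bytes{mountpoint=&quot;/data&quot;} * 100)
</code></pre>

<p>The same matchers work in a Grafana panel query, and Grafana variables can be substituted for the label values to make the dashboard reusable.</p>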
iamlucas
<p>I can't find any link or way to change it. Please let me know if this is possible, if yes please share the detailed steps.</p>
Susheel Bhatt
<p>At the time of writing, AKS allows setting the network plugin only at the time of cluster creation. So if you have an AKS cluster provisioned with <code>kubenet</code> network plugin and you want to switch to <code>Azure CNI</code>, then currently the only way to achieve that is by recreating the AKS cluster with network plugin set to Azure CNI.</p> <p>Please check <a href="https://learn.microsoft.com/en-us/azure/aks/aks-migration" rel="nofollow noreferrer">this article</a> for migration considerations and guidelines.</p>
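<p>For the recreation, a minimal sketch of the relevant flag (the resource names and subnet ID are placeholders; all other sizing and identity options are omitted):</p>

```shell
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>
```

<p>With Azure CNI, every pod gets an IP from the supplied subnet, so make sure the subnet is sized for your maximum pod count before creating the cluster.</p>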
Srijit_Bose-MSFT
<p>Because I have tried creating persistence volume but not getting any value in there as I am creating volume in my minikube vm hosted in virtualbox.</p> <p>yaml file for my deployment:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: python-flask-app-deployment spec: replicas: 1 selector: matchLabels: app: python-flask-app-dep template: metadata: labels: app: python-flask-app-dep spec: volumes: - name: mypod-pvc persistentVolumeClaim: claimName: mypvc containers: - name: python-app-container image: jackazjimmy/flask-test-demo-eks:v1 volumeMounts: - mountPath: /app/instance/ name: mypod-pvc ports: - containerPort: 31816 # Exposed port in your Dockerfile resources: limits: cpu: 500m requests: cpu: 200m --- apiVersion: v1 kind: Service metadata: name: python-app-service spec: selector: app: python-flask-app-dep ports: - protocol: TCP port: 8080 # External port targetPort: 31816 # Container port type: NodePort </code></pre> <p>Pvc yaml file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: storageClassName: &quot;&quot; accessModes: - ReadWriteOnce resources: requests: storage: 10Gi </code></pre> <p>Pv yaml file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: PersistentVolume metadata: name: mypv spec: storageClassName: &quot;&quot; capacity: storage: 10Gi accessModes: - ReadWriteOnce hostPath: path: &quot;/home/mydata&quot; </code></pre> <p>After applying or creating all the required resources, whenever I connect through minikube ssh not able to get the files stores in local volume. Any suggestion would be helpful:)</p>
jackazjimmy
<p>K8s supports PVs and PVCs for almost all apps runnable on K8s. As per your files, you left the storage class empty, so it took the default storage class. There is a high chance you don't have any storage class, or it might not have 10GB available. First, create a storage class where you want to store the volumes. It might be local, NFS outside of the cluster, or a 3rd-party solution like MinIO or Rook. You also define this storage class's size (e.g., 10GB, 100GB). Then you create a PV with this storage class, with a unique name related to the app where you want to use it. Finally, you bind the PVC (PersistentVolumeClaim) to the PV. Now your application will store data into the PVC, which is bound to the PV, and that PV stores data in the storage class. This is an example of a storage class yaml file.</p> <pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
</code></pre> <p>I hope this helps.</p>
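<p>To illustrate the chain described above, a hedged sketch of a PV and a matching PVC that both reference the <code>standard</code> storage class (the names and the hostPath are placeholders; in the question's Minikube setup the path would need to exist inside the Minikube VM, not on the host):</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-app-pv              # unique name related to the app
spec:
  storageClassName: standard   # must match the storage class above
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/my-app         # local path; could instead be an NFS share, etc.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-pvc
spec:
  storageClassName: standard   # same class, so the claim can bind to the PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

<p>The pod then mounts <code>my-app-pvc</code> via a <code>persistentVolumeClaim</code> volume, exactly as in the deployment from the question.</p>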
tauqeerahmad24
<p>Recently some of the microservice AMQP applications are disconnected based on the solace event logs. However, the microservice AMQP applications did not detect any &quot;CONNECTION_CLOSE&quot; event. And, the applications did not trigger DISCONNECT action.</p> <p>Is there any documentation of the reasons and the explanation for the causes of them? How to troubleshoot to find more information?</p> <ul> <li>K8s : Using Microk8s (3 Nodes)</li> <li>Microservices Application : Using NodeJS (AMQP-PROMISE)</li> <li>Solace : Using Docker-Compose (Version 9.12.1.17) - Outside the K8s Cluster</li> </ul> <pre><code>2022-05-21T01:00:10.139+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food-in put-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount Unbind to Flow Id (78), ForwardingMode(StoreAndForward), final statistics - flow(255, 0, 0, 0, 0, 0, 0, 0, 2997, 0), isActive(No), Reason(Client disconnected) 2022-05-21T01:00:10.141+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food- input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(104, 2955, 2951, 2997, 3055, 5952, 6915, 118733, 3991887, 1496290, 3998802, 1615023, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.105.161:60776, CLOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:00:10.613+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/26253499ce1fbce5/a2c7ad9b-540d-4116-85cf-e8dfe8d43d71 Client (4) #amqp/client/26253499ce1fbce5/a2c7ad9b-540d-4116-85cf -e8dfe8d43d71 
username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(2, 2, 0, 0, 2, 2, 223, 401, 0, 0, 223, 401, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.105.161:38101, CLOSD, 0, 0, 0) zip(0, 0, 0, 0, 0. 00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:00:13.141+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food- input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount Pub flow session flow name 8b608e84651042bbaa485cdea5fd00ef (7), transacted session id -1, publisher id 6, last message id 4573, window size 247, final statistics - flow (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2951) 2022-05-21T01:01:10.158+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb/food-input-adaptor deleted, final statistics - sp ool(145051, 145023, 145051, 0, 0, 0, 1202584, 2997) bind(1, 1, 0, 0, 0) 2022-05-21T01:31:46.979+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_OPENED: - - SEMP session 192.168.7.1-48 internal authentication opened for user localAccount (admin) 2022-05-21T01:35:13.360+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_OPENED: - - CLI session pts/0 [10572] internal authentication opened for user appuser (appuser) </code></pre> <p>After a while</p> <pre><code>2022-05-21T01:51:36.127+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor created 2022-05-21T01:51:36.130+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: 
default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/ food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(85), ForwardingMod e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed 2022-05-21T01:51:39.692+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/food-in put-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Unbind to Flow Id (85), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected) 2022-05-21T01:51:39.693+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/food- input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63510, C LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:51:42.693+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) 
#amqp/client/4b2080ad03c99e16/food- input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Pub flow session flow name 9ad69d02d736443489115e34529e6e68 (22), transacted session id -1, publisher id 21, last message id 1544, window size 247, final statistics - fl ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 2022-05-21T01:51:42.818+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_CLOSED: - - CLI session pts/0 [10572] closed for user appuser (appuser) 2022-05-21T01:52:39.712+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor deleted, final statistics - sp ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0) 2022-05-21T01:53:54.892+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/eb13bf695e4cdc3a Client (86) #amqp/client/eb13bf695e4cdc3a username testAccount OriginalClientUsername(testAccount) WebSessionId (N/A) connected to 172.22.0.2:5672 from 192.168.7.1:63572 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-) 2022-05-21T01:53:54.919+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/f pl-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount changed name from #amqp/client/eb13bf695e4cdc3a 2022-05-21T01:53:54.964+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) 
#amqp/client/eb13bf695e4cdc3a/food-i nput-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Pub flow session flow name f18e140a02a9453ab6eb03425ee7d3f9 (23), transacted session id -1, publisher id 22, last message id 1291, window size 247 2022-05-21T01:53:54.979+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor created 2022-05-21T01:53:54.983+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/ food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(86), ForwardingMod e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed 2022-05-21T01:53:56.502+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food-in put-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Unbind to Flow Id (86), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected) 2022-05-21T01:53:56.503+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food- 
input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63572, C LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:53:59.503+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food- input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Pub flow session flow name f18e140a02a9453ab6eb03425ee7d3f9 (23), transacted session id -1, publisher id 22, last message id 1291, window size 247, final statistics - fl ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 2022-05-21T01:54:56.521+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor deleted, final statistics - sp ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0) 2022-05-21T01:55:15.625+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/5d9a55a247c9c1bf Client (68) #amqp/client/5d9a55a247c9c1bf username testAccount OriginalClientUsername(testAccount) WebSessionId (N/A) connected to 172.22.0.2:5672 from 192.168.7.1:63584 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-) 2022-05-21T01:55:15.649+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default 
#amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/f pl-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount changed name from #amqp/client/5d9a55a247c9c1bf 2022-05-21T01:55:15.696+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-i nput-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Pub flow session flow name e216e8c8310d499083f1684609ef573f (24), transacted session id -1, publisher id 23, last message id 1542, window size 247 2022-05-21T01:55:15.711+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor created 2022-05-21T01:55:15.713+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/ food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(87), ForwardingMod e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed 2022-05-21T01:55:16.569+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-in 
put-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Unbind to Flow Id (87), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected) 2022-05-21T01:55:16.570+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food- input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63584, C LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:55:19.570+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food- input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Pub flow session flow name e216e8c8310d499083f1684609ef573f (24), transacted session id -1, publisher id 23, last message id 1542, window size 247, final statistics - fl ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 2022-05-21T01:56:16.589+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor deleted, final statistics - sp ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0) 2022-05-21T01:56:27.372+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/b45f6c87b0e76616 Client (67) #amqp/client/b45f6c87b0e76616 username testAccount OriginalClientUsername(testAccount) WebSessionId (N/A)
connected to 172.22.0.2:5672 from 192.168.7.1:63599 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-) 2022-05-21T01:56:27.397+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/f pl-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount changed name from #amqp/client/b45f6c87b0e76616 2022-05-21T01:56:27.441+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-i nput-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Pub flow session flow name 04c9bfb86bac4f3dba9dba89b4724cde (25), transacted session id -1, publisher id 24, last message id 1290, window size 247 2022-05-21T01:56:27.454+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor created 2022-05-21T01:56:27.456+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/ food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), 
GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(88), ForwardingMod e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed 2022-05-21T01:56:32.434+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-in put-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Unbind to Flow Id (88), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected) 2022-05-21T01:56:32.435+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food- input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount WebSessionId (N/A) reason(Client Disconnect Received) final statistics - dp(7, 4, 0, 0, 7, 4, 493, 693, 0, 0, 493, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168. 
8.1:63599, ESTAB, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher() 2022-05-21T01:56:35.436+00:00 &lt;local3.info&gt; 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food- input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Pub flow session flow name 04c9bfb86bac4f3dba9dba89b4724cde (25), transacted session id -1, publisher id 24, last message id 1290, window size 247, final statistics - fl ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) 2022-05-21T01:57:32.453+00:00 &lt;local3.notice&gt; 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor deleted, final statistics - sp ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0) </code></pre>
Jack
<p>See the discussion on this in the Solace Developer Community at <a href="https://solace.community/discussion/1342/reason-for-solace-client-client-disconnect" rel="nofollow noreferrer">https://solace.community/discussion/1342/reason-for-solace-client-client-disconnect</a></p>
sflow
<p>We are using an AKS cluster (1.19.11) and the user node pools running our application pods are under-utilized (only about 30% consumption). So we are thinking of reducing the node count of the node pools to optimize cost.</p> <p>We would like to know what should be considered while planning the node count decrease:</p> <ul> <li><p>Assume that node utilization can be estimated and calculated from the pods' resource requests alone, with no need to consider limits, since the autoscaler is enabled.</p> </li> <li><p>Also, is it possible to modify the cluster autoscaler profile property &quot;scaleDownUtilizationThreshold&quot;: &quot;0.5&quot; to a higher percentage, and is it recommended to increase it to 70%?</p> </li> </ul>
Vowneee
<p>The assumption,</p> <blockquote> <p>node utilization can be estimated and calculated using the pods' requests values, and there is no need to consider limits as the autoscaler is enabled</p> </blockquote> <p>will hold good as long as you don't care about which process/container gets evicted in case of node resource starvation (if controlled by a Deployment, ReplicaSet or StatefulSet, the workloads will be resurrected on a new node that is scaled out by the autoscaler).</p> <p>However, in most cases you would have some kind of priority for your workloads, and you would want to set thresholds (limits) accordingly so that you don't have to deal with the kernel evicting important processes (maybe not the one that caused the starvation, but the one using the most resources right at the time the evaluation happened).</p> <blockquote> <p>Also, is it possible to modify the cluster autoscaler profile property &quot;scaleDownUtilizationThreshold&quot;: &quot;0.5&quot; to a higher percentage, and is it recommended to increase it to 70%?</p> </blockquote> <p>Yes, the value of the Cluster Autoscaler Profile property <code>scale-down-utilization-threshold</code> can be updated using the command:</p> <pre><code>az aks update \ --resource-group myResourceGroup \ --name myAKSCluster \ --cluster-autoscaler-profile scale-down-utilization-threshold=&lt;percentage value&gt; </code></pre> <p>[<a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler#set-the-cluster-autoscaler-profile-on-an-existing-aks-cluster" rel="nofollow noreferrer">Reference</a>]</p> <p>AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the <strong>allocatable</strong> resources in AKS.
[<a href="https://learn.microsoft.com/en-us/azure/aks/concepts-clusters-workloads" rel="nofollow noreferrer">Reference</a>]</p> <p>Now <code>scale-down-utilization-threshold</code> is the node utilization level, defined as sum of requested resources divided by <strong>allocatable</strong> capacity, below which a node can be considered for scale down.</p> <p>So, ultimately there can be no best practice shared on this as it is the user's use case, architectural design and requirements that dictate what should be the <code>scale-down-utilization-threshold</code> for the cluster auto scaler.</p>
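<p>The utilization math described above can be sketched in a few lines. This is purely illustrative (it is not an AKS or autoscaler API; the function name and the resource numbers are made up), but it shows what "sum of requested resources divided by allocatable capacity, compared to the threshold" means in practice:</p>

```python
# Illustrative sketch of the cluster autoscaler's scale-down check:
# a node is a scale-down candidate when
#     sum(pod resource requests) / node allocatable < threshold.
# Numbers are millicores (m); all names/values here are made up.

def is_scale_down_candidate(pod_requests_mcpu, allocatable_mcpu, threshold=0.5):
    """Return True if the node's requested CPU is below the threshold."""
    utilization = sum(pod_requests_mcpu) / allocatable_mcpu
    return utilization < threshold

# A node with 3860m allocatable CPU, running pods requesting 200m + 500m + 250m
# sits at ~0.25 utilization, so with the default 0.5 threshold it is a candidate:
print(is_scale_down_candidate([200, 500, 250], 3860))       # True  (~0.25 < 0.5)
# Raising the bar for scale-down means lowering the threshold, not raising it:
print(is_scale_down_candidate([200, 500, 250], 3860, 0.2))  # False (~0.25 >= 0.2)
```

<p>Note the direction: a <em>higher</em> threshold (e.g. 0.7) makes more nodes eligible for scale-down, which is why the right value depends entirely on your workload priorities.</p>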
Srijit_Bose-MSFT
<p>I'm running sonarqube server on a kubernetes cluster. I want to install <a href="https://github.com/uartois/sonar-golang" rel="nofollow noreferrer">sonar-golang</a> plugin to my server to use its quality profile.</p> <p>However the installation guide only has it for non k8 sonarqube installations. Is there a way to install these plugins for <a href="https://SonarSource.github.io/helm-chart-sonarqube" rel="nofollow noreferrer">sonarqube on kubernetes</a>?</p>
Red Bottle
<p>First, exec into the SonarQube pod and look for the SonarQube plugins path, e.g. $sonarqube/extensions/plugins. Once you know the exact path, exit the pod and run this command in your terminal:</p> <pre><code>kubectl cp path-to-your-plugin/plugin.xyz &lt;pod-name&gt;:&lt;plugins-path&gt;/plugin.xyz -c &lt;container-name&gt; </code></pre> <p>If you don't know the container name, you can check the SonarQube pods with <code>kubectl get pods -n yournamespace</code>; the result should show a ready state like 2/2 or 3/3 running. By using the describe command <code>kubectl describe pod &lt;pod-name&gt; -n namespace</code> you can find the correct name of the SonarQube container to use in the command above. Once you upload the file successfully, restart the SonarQube deployment; the rest of the steps can be followed as in a normal installation.</p>
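<p>Alternatively — and this is a sketch that depends on your chart version, so check your chart's <code>values.yaml</code> before relying on it — the SonarQube Helm chart exposes a <code>plugins.install</code> value that downloads plugin JARs at startup, which avoids the manual <code>kubectl cp</code> step entirely (the release URL below is a placeholder):</p>

```yaml
# values.yaml fragment for the SonarQube Helm chart (key names may differ
# between chart versions; verify against your chart's values.yaml).
plugins:
  install:
    # Placeholder URL - point this at the actual sonar-golang release JAR.
    - "https://example.com/releases/sonar-golang-plugin.jar"
```

<p>Then apply it with <code>helm upgrade --install sonarqube sonarqube/sonarqube -f values.yaml</code>, and the chart's init container fetches the plugin before SonarQube starts.</p>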
tauqeerahmad24
<p>I want to copy a folder and its subdirectories to a MinIO server which is deployed using MinIO Helm.</p> <p>Is there any way to copy a folder to a MinIO bucket deployed on the cloud using Helm, as a post-deployment step after the Helm service is installed?</p>
Harsh Gajjar
<p>You can copy a folder and its subdirectories to a MinIO server which is deployed using MinIO Helm. Use the MinIO Helm Chart to deploy the MinIO server on a cloud platform, and then use the <code>mc</code> command line tool to copy the folder and its subdirectories to the MinIO bucket. For example, assuming you have configured an <code>mc</code> alias named ‘myminio’ for your server, you can use the following command to copy a folder named ‘myfolder’ and its subdirectories to the ‘mybucket’ MinIO bucket:</p> <pre><code>mc cp --recursive myfolder myminio/mybucket </code></pre> <p>You can also use the <code>mc mirror</code> command to mirror the contents of the folder to the MinIO bucket (mirroring is recursive by default). For example:</p> <pre><code>mc mirror myfolder myminio/mybucket </code></pre> <p>Refer to this <a href="https://min.io/docs/minio/linux/reference/minio-mc/mc-cp.html" rel="nofollow noreferrer">MinIO official doc</a> for more information on copying a folder to MinIO.</p>
Hemanth Kumar
<p>I'm trying to reach an Azure Blob Container from a Kubernetes pod, without success. When I tried with a FileShare it worked as expected (when I change to a Blob Container it fails). My Kubernetes settings:</p> <pre><code>spec: volumes: - name: test-data-proccessing azureFile: secretName: azure-secret-file-share shareName: fileshare-test readOnly: false # In the container spec part template: - name: some-deployment container: volumeMounts: mountPath: /data name: test-data-proccessing </code></pre> <p>Did you face a similar issue?</p>
shock_in_sneakers
<p>You can use <a href="https://github.com/kubernetes-sigs/blob-csi-driver" rel="nofollow noreferrer">Azure Blob Storage CSI driver for Kubernetes</a>. <a href="https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/deploy/example/e2e_usage.md" rel="nofollow noreferrer">Here</a> is a basic usage guide.</p>
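<p>Roughly, the static-provisioning route from the linked usage guide looks like the sketch below — every name here (secret, container, volume handle) is a placeholder to replace with your own, and the driver must already be installed in the cluster. The key difference from the question's manifest is the <code>csi</code> block with the <code>blob.csi.azure.com</code> driver instead of <code>azureFile</code>:</p>

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-blob                       # placeholder
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: blob.csi.azure.com        # Blob CSI driver, not azureFile
    volumeHandle: unique-blob-volume-id
    volumeAttributes:
      containerName: mycontainer      # the Blob container to mount
    nodeStageSecretRef:
      name: azure-storage-secret      # holds the storage account name/key
      namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-blob
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-blob
  storageClassName: ""
```

<p>The pod then mounts <code>pvc-blob</code> via a normal <code>persistentVolumeClaim</code> volume; see the linked e2e usage guide for the exact secret format and dynamic-provisioning variants.</p>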
Srijit_Bose-MSFT
<p>We have a deployment with different pods which are installed using helm install. In the next release, one of the pods needs to be removed as it is obsolete. How can the pod be stopped during helm upgrade? Thanks</p>
Chandu
<p>When you upgrade a Helm deployment, it proceeds one pod at a time. First, one pod is destroyed and a new pod is created with the new changes; if the upgraded deployment is successful and the new pod is running, it proceeds with the 2nd pod, and so on until the desired number of Pods is reached. If you still see an old pod running along with a newly upgraded pod, it means something went wrong and the upgrade was not successful. Run <code>kubectl get pods -n namespace</code> and check the ready state (1/1 or 2/2) to see whether any container is not running or is missing. If all the pods have the desired containers and everything is okay, then look at the logs with <code>kubectl logs -f pod --tail=10</code> and see if there is any error. If all is okay, simply delete the old pod; if it is stuck in a Terminating state, delete it by force with <code>kubectl delete pod &lt;pod-name&gt; --force --grace-period=0</code>. This command will delete the old pod and keep the new pods.</p>
tauqeerahmad24
<p>We are reviewing Managed Anthos Service Mesh(istio) in GCP, their is no straight forward setup for Lightstep, so we are trying to push traces from envoy to otel collector process and export it to lightstep, the otel deployment config is as below</p> <pre><code>--- apiVersion: v1 kind: ConfigMap metadata: name: otel-collector-conf labels: app: opentelemetry component: otel-collector-conf data: otel-collector-config: | receivers: zipkin: endpoint: processors: batch: memory_limiter: # 80% of maximum memory up to 2G limit_mib: 400 # 25% of limit up to 2G spike_limit_mib: 100 check_interval: 5s extensions: zpages: {} memory_ballast: # Memory Ballast size should be max 1/3 to 1/2 of memory. size_mib: 165 exporters: logging: loglevel: debug otlp: endpoint: 10.x.x.19:8184 insecure: true headers: &quot;lightstep-access-token&quot;: &quot;xxx&quot; service: extensions: [zpages, memory_ballast] pipelines: traces: receivers: [zipkin] processors: [memory_limiter, batch] exporters: [otlp] --- apiVersion: v1 kind: Service metadata: name: otel-collector labels: app: opentelemetry component: otel-collector spec: ports: - name: otlp-grpc # Default endpoint for OpenTelemetry gRPC receiver. port: 4317 protocol: TCP targetPort: 4317 - name: otlp-http # Default endpoint for OpenTelemetry HTTP receiver. port: 4318 protocol: TCP targetPort: 4318 - name: metrics # Default endpoint for querying metrics. port: 8888 - name: zipkin # Default endpoint for OpenTelemetry HTTP receiver. 
port: 9411 protocol: TCP targetPort: 9411 selector: component: otel-collector --- apiVersion: apps/v1 kind: Deployment metadata: name: otel-collector labels: app: opentelemetry component: otel-collector spec: selector: matchLabels: app: opentelemetry component: otel-collector minReadySeconds: 5 progressDeadlineSeconds: 120 replicas: 1 #TODO - adjust this to your own requirements template: metadata: labels: app: opentelemetry component: otel-collector spec: containers: - command: - &quot;/otelcol&quot; - &quot;--config=/conf/otel-collector-config.yaml&quot; image: otel/opentelemetry-collector:latest name: otel-collector resources: limits: cpu: 1 memory: 2Gi requests: cpu: 200m memory: 400Mi ports: - containerPort: 55679 # Default endpoint for ZPages. - containerPort: 4317 # Default endpoint for OpenTelemetry receiver. - containerPort: 14250 # Default endpoint for Jaeger gRPC receiver. - containerPort: 14268 # Default endpoint for Jaeger HTTP receiver. - containerPort: 9411 # Default endpoint for Zipkin receiver. - containerPort: 8888 # Default endpoint for querying metrics. volumeMounts: - name: otel-collector-config-vol mountPath: /conf volumes: - configMap: name: otel-collector-conf items: - key: otel-collector-config path: otel-collector-config.yaml name: otel-collector-config-vol </code></pre> <p>Exposing the otel collector service on 9411 and configuring Anthos Mesh to send traces to the service and export it to Ligthstep, the otel pod is all up, but i dont see any traces on lightstep. Infact I'm not certain if the input from envoy is coming into otel, as the logs for otel is empty.</p> <pre><code>apiVersion: v1 data: mesh: |- extensionProviders: - name: jaeger zipkin: service: zipkin.istio-system.svc.cluster.local port: 9411 - name: otel zipkin: service: otel-collector.otel.svc.cluster.local port: 9411 </code></pre> <p>Also deployed a jaegar all in one deployment and sending traces to it, which works fine and i can view traces on the jaegar UI. 
I'm not certain about the otel part. Kindly assist.</p>
Sanjay M. P.
<p>Take a look at this link, <a href="https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider</a></p> <p>I think you should have your config as follows; otel doesn't listen on port 9411 by default:</p> <pre><code>apiVersion: v1 data: mesh: |- extensionProviders: - name: jaeger zipkin: service: zipkin.istio-system.svc.cluster.local port: 9411 - name: otel opencensus: service: otel-collector.otel.svc.cluster.local port: 55678 </code></pre> <p>I tried this out on my cluster today and it works. However, you can only have one tracing tool configured in the Telemetry resource, so I'm only able to use Jaeger or Otel. That config looks like:</p> <pre><code>apiVersion: telemetry.istio.io/v1alpha1 kind: Telemetry metadata: name: mesh-default namespace: istio-system spec: tracing: - providers: - name: otel randomSamplingPercentage: 100</code></pre>
user19208942
<p>I know that deployment uses replicaset underneath it, has revision control, creates another replicaset during rolling upgrade/downgrade.</p> <p>I want to know what is the scenario in which only replicaset can be used and deployment can't be used.</p>
Sushil Kumar Sah
<p>A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. It checks how many Pods it needs to maintain and, based on that, creates or deletes Pods as needed to reach the desired number. ReplicaSets can be used independently: with a ReplicaSet you define the number of replicas you want to run for a particular service, and that many replicas will be kept running.</p> <p>A Deployment, on the other hand, is an advancement over ReplicaSets. When you use Deployments you don't have to worry about managing the ReplicaSets that they create; Deployments own and manage their ReplicaSets. As such, it is recommended to use Deployments when you want ReplicaSets. A ReplicaSet only looks after creating and deleting Pods, whereas a Deployment is recommended for application services: with a Deployment you can also do rolling upgrades or rollbacks, for example updating images from v1 to v2.</p> <p>Refer to this <a href="https://stackoverflow.com/questions/69448131/kubernetes-whats-the-difference-between-deployment-and-replica-set">SO1</a>, <a href="https://stackoverflow.com/questions/55437390/k8s-why-we-need-replicaset-when-we-have-deployments">SO2</a> and the official documentation of <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSets</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a>.</p>
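<p>To make the "used independently" point concrete, here is a minimal sketch of a bare ReplicaSet manifest (name and image are placeholders). This object alone keeps a stable set of 3 Pods, but gives you no revision history or rolling-update machinery:</p>

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs          # placeholder name
spec:
  replicas: 3                # keep exactly 3 Pods running at all times
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend        # must match the selector above
    spec:
      containers:
      - name: frontend
        image: nginx:1.25    # placeholder image
```

<p>Changing the image in this manifest does not roll the existing Pods over; a Deployment wrapping the same template is what adds that rollout/rollback behavior.</p>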
Hemanth Kumar
<p>Consider I have 10 containers in my Pod.</p> <p>I have added a <code>startupProbe</code> for 3 of the containers.</p> <p>Suppose I delete my Pod before the probes have completed successfully (which means those containers are not in the READY state).</p> <p>Deleting the Pod should send a SIGTERM signal to all the containers to bring them down gracefully.</p> <p>Will SIGTERM be sent to the 3 containers whose probes have not yet completed?</p>
sethu ram
<p>The blog <strong><a href="https://www.airplane.dev/blog/sigterm-signal-15-linux-graceful-termination-exit-code-143" rel="nofollow noreferrer">Troubleshooting SIGTERM: graceful termination of Linux containers</a></strong> written by <strong>James Walker</strong> explains <strong>SIGTERM</strong> and how to leverage it in your application so that it can terminate without corrupting any data.</p> <blockquote> <p><strong>SIGTERM</strong> is a Linux signal that Unix-based operating systems issue when they want to terminate a running process. In normal circumstances, your application should respond to a <strong>SIGTERM</strong> by running cleanup procedures that facilitate graceful termination. If processes aren’t ready to terminate, they may elect to ignore or block the signal.</p> </blockquote> <p><em>So, when a pod is deleted, the <strong>SIGTERM</strong> signal is sent to all the containers irrespective of their state, and the pod will wait for the graceful termination period (which is 30s by default). Once the grace period ends, a <strong>SIGKILL</strong> signal is sent, which forcefully terminates all the processes. Since your containers have a <strong>startupProbe</strong> enabled, they will be in the pending state when the <strong>SIGTERM</strong> signal is generated and can continue their processes until the graceful termination period ends; then they will be terminated by the <strong>SIGKILL</strong> signal. (<strong>reference:</strong> <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#:%7E:text=When%20a%20Pod%20is%20being%20deleted%2C%20it%20is%20shown%20as%20Terminating%20by%20some%20kubectl%20commands.%20This%20Terminating%20status%20is%20not%20one%20of%20the%20Pod%20phases.%20A%20Pod%20is%20granted%20a%20term%20to%20terminate%20gracefully%2C%20which%20defaults%20to%2030%20seconds." rel="nofollow noreferrer">kubernetes docs</a>)</em></p>
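<p>To make the graceful-termination side concrete, here is a minimal, illustrative Python sketch of how a container process would handle SIGTERM. It registers a handler and then sends SIGTERM to itself to simulate the kubelet; in a real container the handler would flush buffers and close connections before exiting:</p>

```python
import os
import signal
import time

cleanup_done = []

def handle_sigterm(signum, frame):
    # Stand-in for real cleanup work: flush buffers, close connections, etc.
    cleanup_done.append(True)

# Register the handler so SIGTERM triggers graceful cleanup
# instead of the default immediate termination.
signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate the kubelet delivering SIGTERM to this process.
os.kill(os.getpid(), signal.SIGTERM)
time.sleep(0.1)  # give the handler a chance to run

print(cleanup_done)  # [True]
```

<p>If the process is still alive when the grace period (30s by default) expires, the kubelet follows up with SIGKILL, which cannot be caught or handled.</p>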
Kranthiveer Dontineni
<p>I've got this issue in Google Cloud when I tried to create an Ingress resource.</p> <p>Usually when an Ingress is created, other GCP resources are automatically created too (resources such as target-https-proxies, URL maps, forwarding rules, etc.). Nothing was created this time, and the Ingress error is here:</p> <pre><code>k describe ingress my-service-internal
Name:             my-service-internal
Namespace:        my-namespace
Address:
Default backend:  default-http-backend:80 (10.0.0.5:8080)
Rules:
  Host                    Path  Backends
  ----                    ----  --------
  my-service.example.com
                          /*    my-service-internal:80 (10.251.1.108:8080,10.251.1.134:8080)
Annotations:              ingress.gcp.kubernetes.io/pre-shared-cert: my-certificate-202107140101
                          kubernetes.io/ingress.allow-http: false
                          kubernetes.io/ingress.class: gce-internal
Events:
  Type     Reason  Age                 From                     Message
  ----     ------  ----                ----                     -------
  Normal   Sync    97s (x2 over 97s)   loadbalancer-controller  Scheduled for sync
  Warning  Sync    16s (x14 over 73s)  loadbalancer-controller  Error syncing to GCP: error running backend syncing routine: cloud armor security policies not supported for regional backend service k8s1-263259a6-my-namespace-my-service-in-8-151d5ee9
</code></pre> <p>Any advice on what to check first, or any guesses what the issue could be?</p>
laimison
<p>To elaborate a bit more on Dawid's answer:</p> <p>The reason for this is that Cloud Armor policies are only usable by <strong>external HTTP(S)</strong> load balancers, as explained in <a href="https://cloud.google.com/armor/docs/security-policy-overview?hl=en#requirements" rel="nofollow noreferrer">Google's documentation.</a></p> <p>So if you are configuring an <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balance-ingress?hl=en#requirements" rel="nofollow noreferrer">internal Ingress on GKE</a>, this creates an <strong>internal HTTP(S)</strong> load balancer, which is not compatible with Cloud Armor security policies.</p>
alrashid villanueva
<p>I have a GKE cluster.</p> <p>I used <code>kubectl apply</code> to apply the following YAML from my local machine:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: flask-app-svc
  namespace: myapp
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    component: flask-app
</code></pre> <p>Got applied. All Good. ✅</p> <hr /> <p>Then I used <code>kubectl get service</code> to get back the YAML from the cluster. It returned this:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{&quot;ingress&quot;:true}'
    cloud.google.com/neg-status: '{&quot;network_endpoint_groups&quot;:{&quot;5000&quot;:&quot;k8s1-5fe0c3c1-myapp-flask-app-svc-5000-837dba94&quot;},&quot;zones&quot;:[&quot;asia-southeast1-a&quot;]}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;Service&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;flask-app-svc&quot;,&quot;namespace&quot;:&quot;myapp&quot;},&quot;spec&quot;:{&quot;ports&quot;:[{&quot;port&quot;:5000,&quot;targetPort&quot;:5000}],&quot;selector&quot;:{&quot;component&quot;:&quot;flask-app&quot;}}}
  creationTimestamp: &quot;2021-10-29T14:40:49Z&quot;
  name: flask-app-svc
  namespace: myapp
  resourceVersion: &quot;242820340&quot;
  uid: ad80f634-5aab-4147-8f71-11ccc44fd867
spec:
  clusterIP: 10.22.52.180
  clusterIPs:
  - 10.22.52.180
  ports:
  - port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    component: flask-app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
</code></pre> <hr /> <h3>1. What Kubernetes &quot;concept&quot; is at play here?</h3> <h3>2. Why are the 2 YAMLs SO DIFFERENT from each other?</h3> <h3>3. What is happening under the hood?</h3> <h3>4. Is this specific to GKE, or would any k8s cluster behave this way?</h3> <h3>5. Where can I find some info/articles to learn more about this concept?</h3> <hr /> <p>Thank you in advance.</p> <p>I've been trying to wrap my head around this for a while. Appreciate any help you can advise and suggest here.</p>
Rakib
<p>The Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and others. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact. The API server takes the definitions provided by the user and fills in all the detailed defaults needed to create the objects required. In <a href="https://cloud.google.com/kubernetes-engine/docs/reference/rest" rel="nofollow noreferrer">this document</a> you can find an overview of the GKE API server engine.</p> <p>You can find an <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#create-deployment-v1-apps" rel="nofollow noreferrer">example in this document about a create operation</a>. There you can switch between the request input and the response generated by the API server to see the complete definition of the objects required, their parameters, metadata and all the related configuration parameters that appear in the &quot;extended&quot; version of the original YAML file. In the same document you can find additional information about this topic.</p>
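<p>To make the defaulting concrete, here is a side-by-side sketch using the Service from the question — the second block only annotates which fields the API server filled in; the specific values (cluster IP, etc.) are the ones from your own output, assigned by your cluster, not something you write yourself:</p>

```yaml
# What you submitted:
spec:
  ports:
  - port: 5000
    targetPort: 5000
  selector:
    component: flask-app

# What the API server stored after defaulting (abridged):
spec:
  type: ClusterIP            # defaulted: no type was given
  clusterIP: 10.22.52.180    # allocated by the API server
  sessionAffinity: None      # defaulted
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP            # defaulted
  selector:
    component: flask-app
```

<p>The same happens on any conformant cluster, not just GKE; only the GKE-specific annotations (such as <code>cloud.google.com/neg</code>) come from GKE's own controllers.</p>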
Jesus Huesca
<p>I have created a pod with the below pod definition, which uses the official mongo Docker image. The expected result here is that the mongo image creates the user and password from the env variables <code>MONGO_INITDB_ROOT_USERNAME</code> and <code>MONGO_INITDB_ROOT_PASSWORD</code> and then uses the <code>/etc/mongo/mongod.conf</code> provided to it from a volume. Instead what happens is: on first connection, I am unable to connect, with an error saying the user does not exist.</p> <p>The error disappears if I remove the <code>command</code> section. Any idea how to resolve this issue?</p> <p>The equivalent docker command works well, but in Kubernetes auth does not work if I provide a custom configuration file.</p> <pre><code>docker run -d -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=mongoadmin -e MONGO_INITDB_ROOT_PASSWORD=secret --name some-mongo -v /etc/mongo:/etc/mongo -v /etc/ssl/keyfile:/data/db/keyfile mongo:4.2.23 --config /etc/mongo/mongod.conf
</code></pre> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mongodb
  labels:
    db: mongodb
spec:
  containers:
  - name: mongodb
    image: mongo:4.2.23
    command:
    - mongod
    - &quot;--config&quot;
    - &quot;/etc/mongo/mongod.conf&quot;
    env:
    - name: MONGO_INITDB_ROOT_USERNAME
      valueFrom:
        secretKeyRef:
          name: mongosecret
          key: user
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongosecret
          key: password
    volumeMounts:
    - name: mongodb-keyfile
      mountPath: /etc/ssl
    - name: mongodb-config
      mountPath: /etc/mongo
      readOnly: true
  volumes:
  - name: mongodb-keyfile
    secret:
      secretName: mongodb-keyfile
      defaultMode: 0600
  - name: mongodb-config
    configMap:
      name: mongodb-config
</code></pre>
Gowtham
<p>As per this <a href="https://stackoverflow.com/questions/62018646/unable-to-authenticate-mongodb-deployed-in-kubernetes">SO</a>: as you said, it works after removing the <code>command</code> section from the manifest. The reason is that in Kubernetes <code>command</code> overrides the image's entrypoint, so the official image's init script — the part that consumes <code>MONGO_INITDB_ROOT_USERNAME</code> and <code>MONGO_INITDB_ROOT_PASSWORD</code>, creates the root user, and enables <code>--auth</code> by itself — never runs. That is why the user does not exist on first connection; you don't need to enable auth explicitly when those variables are set.</p> <p>Refer to <a href="https://stackoverflow.com/questions/34559557/how-to-enable-authentication-on-mongodb-through-docker">SO1</a> and <a href="https://stackoverflow.com/questions/51815216/authentication-mongo-deployed-on-kubernetes">SO2</a> for more information. You can also pass the username and password as <a href="https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets" rel="nofollow noreferrer">secrets</a>.</p>
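<p>A common way to keep the image's initialization while still passing a custom config file is to use <code>args</code> instead of <code>command</code>, so the image's own entrypoint still runs and receives the extra arguments — a sketch based on the manifest in the question (untested against your cluster, adjust paths as needed):</p>

```yaml
containers:
- name: mongodb
  image: mongo:4.2.23
  # No `command:` here - the image's entrypoint runs, creates the root
  # user from MONGO_INITDB_* and then starts mongod with these args.
  args:
  - "--config"
  - "/etc/mongo/mongod.conf"
```

<p>This mirrors the working <code>docker run ... mongo:4.2.23 --config /etc/mongo/mongod.conf</code> invocation, where the trailing arguments are appended to the entrypoint rather than replacing it.</p>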
Hemanth Kumar
<p>I am working on a Go-based application and using ephemeral-storage requests and limits in my resource YAML. I have started with requests of 64Mi and limits of 128Mi, but is there a way to calculate the ephemeral storage requirements for a pod? Also, if my image size is 15MiB, do I need to add that to my resource requests as well?</p>
Parth Soni
<p><strong>For greater visibility to the community I am posting this as a solution:</strong></p> <p>As far as I know, there is no way of calculating the ephemeral storage required for a pod; it depends on how much storage you are actually going to use. Based on that usage we can <a href="https://learnk8s.io/setting-cpu-memory-limits-requests" rel="nofollow noreferrer">set the right requests and limits in Kubernetes</a>. This document also covers how <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral storage can be managed</a>.</p> <p>This <a href="https://stackoverflow.com/questions/59045293/how-to-determine-kubernetes-pod-ephemeral-storage-request-and-limit">SO</a> also might be helpful to your question.</p>
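<p>For reference, ephemeral-storage is requested alongside cpu/memory; the numbers below are just the ones mentioned in the question, to be tuned after observing real usage:</p>

```yaml
resources:
  requests:
    ephemeral-storage: "64Mi"
  limits:
    ephemeral-storage: "128Mi"
```

<p>As for the image size: the read-only image layers live on the node and are generally not charged against the pod's ephemeral-storage request; what counts is the container's writable layer, its logs, and <code>emptyDir</code> volumes.</p>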
Hemanth Kumar
<p>I did kubeadm init on one machine. I followed all the instructions on networking etc. and ended up with this:</p> <p><code>kubectl get nodes</code>:</p> <pre class="lang-bash prettyprint-override"><code>NAME              STATUS   ROLES           AGE    VERSION
slchvdvcybld001   Ready    control-plane   140m   v1.24.2
slchvdvcydtb001   Ready    &lt;none&gt;          136m   v1.24.2
slchvdvcytst001   Ready    &lt;none&gt;          137m   v1.24.2
</code></pre> <p>As you can see, no nodes are marked as master, worker or similar.</p> <p>I don't have some special setup; all I did is install it and run init.</p> <p>There are no errors in the log files. The dashboard is GREEN and everything is in green.</p> <p>These are the versions of kubectl and so on:</p> <pre class="lang-bash prettyprint-override"><code>Client Version: v1.24.2
Kustomize Version: v4.5.4
Server Version: v1.24.2
</code></pre>
pregmatch
<p>Labelling of the master node is deprecated. That's why, when using <code>kubectl get nodes</code>, it shows the role as &quot;control-plane&quot; instead of &quot;control-plane,master&quot;.</p> <p>More details are in the following Kubeadm KEP: <a href="http://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md" rel="noreferrer">http://git.k8s.io/enhancements/keps/sig-cluster-lifecycle/kubeadm/2067-rename-master-label-taint/README.md</a></p>
Nataraj Medayhal
<p>I read official and non-official documentation and googled it many times, but still don't understand.</p> <p>How does the Kubernetes API version correspond to the Kubernetes version? So, if we have Kubernetes version 1.22, which API versions does it have? If we upgrade the Kubernetes version, will the API versions be upgraded as well, or does Kubernetes upgrade the API versions independently of the server version?</p>
user15824359
<p>Each Kubernetes version serves its own set of Kubernetes <a href="https://kubernetes.io/docs/reference/using-api/_print/" rel="nofollow noreferrer">API versions</a> (see <a href="https://kubernetes.io/docs/reference/using-api/#api-versioning" rel="nofollow noreferrer">API Versioning</a>). If we update the Kubernetes version, then we also need to migrate our manifests to the new API versions; only then will we avoid breakages in the service — or else we can roll back to the previous Kubernetes version.</p> <p>As for Kubernetes 1.22 and which APIs it has: in v1.22 some APIs were deprecated and removed. Refer to this doc for the <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="nofollow noreferrer">API changes in v1.22</a>.</p>
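<p>As a concrete example of such a migration: Ingress was one of the APIs removed in v1.22, so the same object has to move from the old group/version to the current one when the cluster is upgraded:</p>

```yaml
# Served until v1.21, removed in v1.22:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress

# Required from v1.22 onwards (same object, current API version):
apiVersion: networking.k8s.io/v1
kind: Ingress
```

<p>Running <code>kubectl api-versions</code> against a cluster lists exactly which API versions that server version serves.</p>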
Hemanth Kumar
<p>I'm setting up Docker Engine on my local machine using Minikube. There are two tutorials I've considered, with slight differences between them. I'd love to understand the difference. Can anyone clarify whether these commands would have any different outcome?</p> <p>From this <a href="https://dhwaneetbhatt.com/blog/run-docker-without-docker-desktop-on-macos" rel="nofollow noreferrer">blog post</a>, which I found first:</p> <pre><code># Install hyperkit and minikube
brew install hyperkit
brew install minikube

# Install Docker CLI
brew install docker
brew install docker-compose

# Start minikube
minikube start

# Tell Docker CLI to talk to minikube's VM
eval $(minikube docker-env)

# Save IP to a hostname
echo &quot;`minikube ip` docker.local&quot; | sudo tee -a /etc/hosts &gt; /dev/null

# Test
docker run hello-world
</code></pre> <p>Or from this <a href="https://minikube.sigs.k8s.io/docs/tutorials/docker_desktop_replacement/" rel="nofollow noreferrer">tutorial</a> (on the minikube website, which I'm inclined to believe is authoritative):</p> <pre><code># Install the Docker CLI
brew install docker

# Start minikube with a VM driver and `docker` container runtime if not already running.
minikube start --container-runtime=docker --vm=true

# Use the `minikube docker-env` command to point your terminal's Docker CLI to the Docker instance inside minikube.
eval $(minikube -p &lt;profile&gt; docker-env)
</code></pre> <p>Context: I'm on macOS, Ventura 13.0 (22A380)</p> <p>Note: This is a more general question related to the specific one <a href="https://stackoverflow.com/questions/75416727/command-eval-minikube-docker-env-vs-using-eval-minikube-p-minikube-dock">here</a>.</p>
whoopscheckmate
<p>As elaborated by <code>Bijendra</code>, both tutorials are the same with very minimal differences. The <code>echo &quot;minikube ip docker.local&quot; | sudo tee -a /etc/hosts &gt; /dev/null</code> command fetches the IP address and makes an entry in your /etc/hosts file; by doing this you can reach your machine using the hostname instead of typing the IP address every time.</p> <p>Since you say that you are a beginner, I hope the links below might help you. Happy learning.</p> <p>[1]<a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">https://www.kubernetes.io/docs/tutorials/hello-minikube</a></p> <p>[2]<a href="https://devopscube.com/kubernetes-minikube-tutorial/" rel="nofollow noreferrer">https://www.devopscube.com/kubernetes-minikube-tutorial</a></p> <p>[3]<a href="https://www.youtube.com/watch?v=E2pP1MOfo3g" rel="nofollow noreferrer">https://www.youtube.com/watch?v=E2pP1MOfo3g</a></p>
Kranthiveer Dontineni
<p>The build is throwing an error: <code>did not find the expected key on line &quot;advanced&quot; field</code>. Below is the YAML file snippet that I have written:</p> <pre><code>apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ .Values.service.name }}-sqs
spec:
  scaleTargetRef:
    name: {{ .Values.service.name }}
  minReplicaCount: {{ .Values.hpa.minReplicaCount }}
  maxReplicaCount: {{ .Values.hpa.maxReplicaCount }}
  fallback:
    failureThreshold: 3
    replicas: {{ .Values.hpa.minReplicaCount }}
  advanced:
    behavior:
      scaleDown:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleDown.stabilizationWindowSeconds}}
        policies:
{{ toYaml .Values.hpa.scaleDown.policies | indent 6 }}
      scaleUp:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleUp.stabilizationWindowSeconds}}
        policies:
{{ toYaml .Values.hpa.scaleUp.policies | indent 6 }}
        selectPolicy: {{ .Values.hpa.selectPolicyForScaleUp}}
</code></pre>
Abhishek Khaiwale
<p>There are some formatting and indentation issues in the YAML file you have provided, as said by <code>Rafał Leszko</code>. You can use <strong><a href="https://helm.sh/docs/helm/helm_lint/" rel="nofollow noreferrer">helm lint</a></strong> to validate your charts before deploying them. The Helm linter validates your chart syntax and points out all the errors, warnings and info messages that would keep your chart from working. (Source: Helm docs)</p>
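<p>As for the indentation itself: a frequent cause of "did not find expected key" in templates like this is <code>toYaml ... | indent</code> rendering the list at the wrong depth relative to its key. One common pattern is <code>nindent</code>, which starts the rendered block on a fresh line — a hedged sketch, assuming <code>advanced:</code> sits at two spaces as in a typical ScaledObject spec (adjust the number to your actual nesting):</p>

```yaml
      scaleDown:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleDown.stabilizationWindowSeconds }}
        policies:
          {{- toYaml .Values.hpa.scaleDown.policies | nindent 10 }}
```

<p>Rendering with <code>helm template</code> and inspecting the output is the quickest way to confirm the list items line up under <code>policies:</code>.</p>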
Kranthiveer Dontineni
<p>we are testing out the Ambassador Edge Stack and started with a brand new GKE private cluster in autopilot mode.</p> <p>We installed from scratch following the quick start tour to get a feeling of it and ended up with the following error</p> <pre><code>Error from server: error when creating &quot;mapping-test.yaml&quot;: conversion webhook for getambassador.io/v3alpha1, Kind=Mapping failed: Post &quot;https://emissary-apiext.emissary-system.svc:443/webhooks/crd-convert?timeout=30s&quot;: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) </code></pre> <p>We did a few rounds of DNS testing and deployed a few different test pods in different namespaces to validate that kube-dns is working properly, everything looks good at that end. Also the resolv.conf looks good.</p> <p>Ambassador is using the hostname <code>emissary-apiext.emissary-system.svc:443</code> (without the cluster.local) which should resolve fine. Doing a lookup with the FQN (with cluster.local) works fine btw.</p> <p>Any clues?</p> <p>Thanks a lot and take care.</p>
Sebastian
<p>I think I found the solution; posting it here in case someone comes across this later on.</p> <p>So I followed <a href="https://www.getambassador.io/docs/edge-stack/latest/tutorials/getting-started/" rel="noreferrer">this</a> to deploy Ambassador Edge Stack in an Autopilot private cluster. I was getting the same error when I was trying to deploy the Mapping object (step 2.2).</p> <p>The issue is that the control plane (API server) is trying to call <code>emissary-apiext.emissary-system.svc:443</code>, but the pods behind it are listening on port 8443 (I figured that out by describing the Service).</p> <p>So I added a firewall rule to allow the GKE control plane to talk to the nodes on port 8443.</p> <p>The firewall rule in question is called <strong>gke-gke-ap-xxxxx-master</strong>. The xxxxx is called the cluster hash and is different for each cluster. To make sure you are editing the proper rule, double-check that the source IP range matches the &quot;Control plane address range&quot; from the cluster details page, and that it's the rule with a name ending in master.</p> <p>Just edit that rule and add 8443 to the tcp ports. It should work.</p>
boredabdel
<p>I have some simple deployments, pods, services and an nginx ingress in Kubernetes. I want to use the ingress to route to the services (cluster-ip).</p> <p>However, there are 2 services for 2 pods with the same path (i.e. /abc/def). After I applied the ingress.yaml file, I got an error message saying &quot;nginx: [emerg] duplicate location &quot;/abc/def/&quot; in /tmp/nginx/nginx-cfg728288520:2893&quot;.</p> <p>May I know how to make the ingress accept the same path with different services and different ports?</p> <p>Here is the ingress.yaml file:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: &quot;true&quot;
  name: ingress-nginx-default
  namespace: default
spec:
  rules:
  - host:
    http:
      paths:
      - path: /abc/def/
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 8090
      - path: /abc/def
        backend:
          service:
            name: service2
            port:
              number: 8068
        pathType: Prefix
  ingressClassName: nginx
</code></pre>
maantarng
<p>&quot;nginx: [emerg] duplicate location &quot;/abc/def/&quot; in /tmp/nginx/nginx-cfg728288520:2893&quot; — this error indicates the same host with two identical paths, which is a duplicate location.</p> <p>You can use a simple fanout based on path, or name-based virtual hosting. For the latter you need two hosts mentioned in the Ingress.</p> <p>Based on your example you'd most likely want to have something like <code>foo.bar.com and bar.foo.com</code>. Here's an example adapted from the Kubernetes docs (updated to the <code>networking.k8s.io/v1</code> syntax and the ports from your manifest):</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /abc/def
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 8090
  - host: bar.foo.com
    http:
      paths:
      - path: /abc/def
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8068
</code></pre>
Hemanth Kumar
<p><a href="https://i.stack.imgur.com/czJgf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/czJgf.png" alt="enter image description here" /></a></p> <p>Is there any documentation that describes this &quot;m&quot; and &quot;k&quot;. In the k8s documentation, I could see for &quot;Mi&quot;, &quot;Gi&quot; etc, but not this. Any help would be greatly appreciated.</p>
Arun Prakash Nagendran
<p>After some research I got this: those are Kubernetes-style quantities. Instead of displaying things as a decimal, it displays them with SI suffixes. For instance, 1.5 becomes 1500m. When Kubernetes displays a quantity, it tends to use milli-units if there would be a decimal point, and plain units otherwise. So, if it had to display 500 on the dot, it would just display 500, but for 500.5 it would display 500500m. So <code>m</code> means divide by 1000, and <code>k</code> means times 1000. There's no documentation really except <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">this</a>. Search for &quot;Kubernetes-style quantities&quot; if you want to see more examples.</p> <p>1m/750m = (1/1000)/(750/1000) = 0.001/.75 = .1%/75%</p> <p>Refer to this <a href="https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/api/resource/suffix.go#L108" rel="nofollow noreferrer">link</a> also.</p>
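<p>A tiny sketch of how those suffixes scale (the function name is made up for illustration; Kubernetes itself implements this in the apimachinery <code>resource.Quantity</code> type linked above, which also handles the binary suffixes like Mi and Gi):</p>

```python
def parse_quantity(q: str) -> float:
    """Convert a Kubernetes-style decimal quantity string to a plain number."""
    # m = milli (divide by 1000); k/M/G = SI multiples.
    suffixes = {"m": 1e-3, "k": 1e3, "M": 1e6, "G": 1e9}
    if q and q[-1] in suffixes:
        return float(q[:-1]) * suffixes[q[-1]]
    return float(q)

print(parse_quantity("1500m"))  # → 1.5
print(parse_quantity("500"))    # → 500.0
print(parse_quantity("2k"))     # → 2000.0
```

<p>So the 1m in the screenshot simply means 0.001 of whatever unit the metric is in.</p>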
Hemanth Kumar
<p>I am using Spring Boot microservices and Kubernetes for our application development and deployments. We are using StatefulSets for deployments. We need to generate unique identifiers that can be used in the services across pods deployed on nodes across clusters.</p> <p>I am using the Twitter snowflake format for the generation of unique IDs with K8s StatefulSets. Format: 64-bit Java long: 1 bit reserved + 41-bit timestamp + 10-bit nodeId + 12-bit counter.</p> <p>It's a simple problem to solve if we are talking about, say, 5 replicas deployed on the same cluster (across different nodes). I could easily generate a unique ordinal that will serve as the nodeId in the format above. But the moment I deploy the pods on multiple clusters, the nodeIds get duplicated, since similar ordinals are generated across the clusters.</p> <p>Example: 5 replicas in a single cluster create pods like: service-0, service-1, service-2, service-3, service-4.</p> <p>I could get the unique pod numbers (0, 1, 2, 3, ...) to serve as the nodeId in my unique-ID generator, and hence derive a unique identifier. But if I deploy the application (via StatefulSet) to 2 clusters, it will generate similar services in the second cluster too:</p> <p>service-0, service-1, service-2, service-3, service-4</p> <p>Now if my GLB routes my calls randomly across the 10 pods in the 2 clusters, the probability of generating a duplicate unique ID (for calls made during the same millisecond) is pretty high, and hence my solution might not work.</p> <p>Need inputs on how to solve the problem. Any help will be highly appreciated.</p>
whizKid
<p>Hi <code>WhizKid</code>, I can give a workaround for this: try to include your cluster name in the naming convention (e.g. cluster1-service-0) instead of going with just the nodeId. Since we have now generated unique node identities per cluster, even if the pods lie on the same nodes or the same servers it won't be a problem. Hope this helps you.</p>
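<p>One way to fold that cluster identity into the snowflake layout from the question is to carve the 10 node bits into a cluster part and an ordinal part — e.g. 3 bits for a cluster number and 7 bits for the StatefulSet ordinal. The bit split below is purely illustrative; pick sizes that match how many clusters and replicas you actually expect:</p>

```python
import time
from typing import Optional

EPOCH_MS = 1_288_834_974_657  # Twitter's snowflake epoch; any fixed epoch works

def snowflake(cluster_id: int, ordinal: int, sequence: int,
              now_ms: Optional[int] = None) -> int:
    """41-bit timestamp | 3-bit cluster | 7-bit ordinal | 12-bit counter."""
    assert 0 <= cluster_id < 8 and 0 <= ordinal < 128 and 0 <= sequence < 4096
    now = now_ms if now_ms is not None else int(time.time() * 1000)
    return ((now - EPOCH_MS) << 22) | (cluster_id << 19) | (ordinal << 12) | sequence

# Same millisecond, same StatefulSet ordinal, different clusters -> distinct IDs.
a = snowflake(cluster_id=0, ordinal=3, sequence=7, now_ms=1_700_000_000_000)
b = snowflake(cluster_id=1, ordinal=3, sequence=7, now_ms=1_700_000_000_000)
print(a != b)  # → True
```

<p>The cluster number could come from an env var set per deployment (one value per cluster), and the ordinal from the pod name, so no coordination between clusters is needed at runtime.</p>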
Kranthiveer Dontineni
<p>I have a Deployment object with Container A and Container B in OpenShift. The requirement here is that whenever container B exits, container A should follow, or otherwise the entire pod should be replaced.</p> <p>Is there any way to achieve this?</p> <p>This is my YAML.</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-redis-bundle
  labels:
    app: app-redis-bundle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-redis-bundle
  template:
    metadata:
      labels:
        app: app-redis-bundle
    spec:
      containers:
      - name: cache-store-2
        image: redis
        resources:
          limits:
            cpu: 500m
            memory: 1500Mi
          requests:
            cpu: 250m
            memory: 1Gi
        ports:
        - containerPort: 6379
        livenessProbe:
          tcpSocket:
            port: 6379
      - name: app-server-2
        image: 'app:latest'
        resources:
          limits:
            cpu: '1'
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
        ports:
        - containerPort: 8443
        livenessProbe:
          tcpSocket:
            port: 8443
      imagePullSecrets:
      - name: mysecret
</code></pre>
Mayank Singh Rathore
<p>Thanks for your comments.</p> <p>The issue is resolved now.</p> <p>Allow me to elaborate and explain the solution.</p> <h2>Problem Statement</h2> <p>I needed to deploy a pod in OpenShift with 2 containers A and B, in such a manner that whenever container A terminates, container B automatically kills itself as well.</p> <p>Initially, my approach was to run a termination script in container B from container A. However, this approach didn't help.</p> <h2>Solution</h2> <p>Using a liveness probe in B that opens a TCP socket connection to A on its running port did the magic, and now B terminates as soon as A exits.</p> <p>Job done.</p> <p>Below is the working YAML.</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: live
spec:
  restartPolicy: Never
  containers:
  - name: nginx
    image: nginx:latest
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600;
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
  - name: liveness
    image: registry.k8s.io/busybox
    ports:
    - containerPort: 80
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
</code></pre>
Mayank Singh Rathore
<p>I have installed Docker Desktop and Kubernetes on a <strong>Windows</strong> machine. <br /> When I run the <code>kubectl get nodes</code> command, I get the following output:</p> <pre><code>NAME             STATUS   ROLES           AGE    VERSION
docker-desktop   Ready    control-plane   2d1h   v1.24.0
</code></pre> <p>So my cluster/control-plane is running properly.</p> <p>I have a second Windows machine on the same network (in fact it's a VM) and I'm trying to add this second machine to the existing cluster.<br /> From what I've seen the control-plane node has to have kubeadm installed, but it seems it's only <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#:%7E:text=A%20compatible%20Linux%20host.%20The%20Kubernetes%20project%20provides%20generic%20instructions%20for%20Linux%20distributions%20based%20on%20Debian%20and%20Red%20Hat%2C%20and%20those%20distributions%20without%20a%20package%20manager." rel="nofollow noreferrer">available for Linux</a>.</p> <p>Is there another tool for Windows-based clusters, or is it not possible to do this?</p>
HapaxLegomenon
<p>Below are details about Docker Desktop from the Docker documentation.</p> <p><a href="https://docs.docker.com/desktop/kubernetes/" rel="nofollow noreferrer">Docker Desktop</a> includes a standalone Kubernetes server and client, as well as Docker CLI integration that runs on your machine. The Kubernetes server runs locally within your Docker instance, is not configurable, and is a single-node cluster.</p> <p>You can refer to the <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/" rel="nofollow noreferrer">kubernetes documentation</a> and create a Kubernetes cluster covering all your Windows machines.</p>
Nataraj Medayhal
<p>I have serious problems with the configuration of Ingress on a Google Kubernetes Engine cluster for an application which expects traffic over TLS. I have configured a FrontendConfig, a BackendConfig and defined the proper annotations in the Service and Ingress YAML structures.</p> <p>The Google Cloud Console reports that the backend is <strong>healthy</strong>, but if I connect to the given address, it returns 502, and a <code>failed_to_connect_to_backend</code> error appears in the Ingress logs.</p> <p>These are my configurations:</p> <p><strong>FrontendConfig.yaml</strong>:</p> <pre><code>apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontendconfig
  namespace: my-namespace
spec:
  redirectToHttps:
    enabled: false
  sslPolicy: my-ssl-policy
</code></pre> <p><strong>BackendConfig.yaml</strong>:</p> <pre><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  sessionAffinity:
    affinityType: &quot;CLIENT_IP&quot;
  logging:
    enable: true
    sampleRate: 1.0
  healthCheck:
    checkIntervalSec: 60
    timeoutSec: 5
    healthyThreshold: 3
    unhealthyThreshold: 5
    type: HTTP
    requestPath: /health
    # The containerPort of the application in Deployment.yaml (also for liveness and readiness probes)
    port: 8001
</code></pre> <p><strong>Ingress.yaml</strong>:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    # If the class annotation is not specified it defaults to &quot;gce&quot;.
    kubernetes.io/ingress.class: &quot;gce&quot;
    # Frontend Configuration Name
    networking.gke.io/v1beta1.FrontendConfig: &quot;my-frontendconfig&quot;
    # Static IP Address Rule Name (gcloud compute addresses create epa2-ingress --global)
    kubernetes.io/ingress.global-static-ip-name: &quot;my-static-ip&quot;
spec:
  tls:
  - secretName: my-secret
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
</code></pre> <p><strong>Service.yaml</strong>:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    # Specify the type of traffic accepted
    cloud.google.com/app-protocols: '{&quot;service-port&quot;:&quot;HTTPS&quot;}'
    # Specify the BackendConfig to be used for the exposed ports
    cloud.google.com/backend-config: '{&quot;default&quot;: &quot;my-backendconfig&quot;}'
    # Enables the Cloud Native Load Balancer
    cloud.google.com/neg: '{&quot;ingress&quot;: true}'
spec:
  type: ClusterIP
  selector:
    app: my-application
  ports:
  - protocol: TCP
    name: service-port
    port: 443
    targetPort: app-port # this port expects TLS traffic, no http plain connections
</code></pre> <p>The Deployment.yaml is omitted for brevity, but it defines liveness and readiness probes on another port, the one defined in the BackendConfig.yaml.</p> <p>The interesting thing is: if I also expose this healthcheck port through the <code>Service.yaml</code> (mapped to port 80), point the default backend to port 80 and simply define a rule with a path <code>/*</code> leading to port 443, everything seems to work just fine. But I don't want to expose the healthcheck port outside my cluster, since I also have some diagnostics information there.</p> <p><strong>Question</strong>: How can I be sure that if I connect to the Ingress point with <code>https://MY_INGRESS_IP/</code>, the traffic is routed exactly as it is to the HTTPS port of the service/application, without getting the 502 error? Where do I fail to configure the Ingress?</p>
madduci
<p>There are a few elements to your question; I'll try to answer them here.</p> <p><code>I don't want to expose the healthcheck port outside my cluster</code>: The healthcheck endpoint is technically not exposed outside the cluster; it's exposed inside the Google backbone so that the Google load balancers (configured via Ingress) can reach it. You can verify that by doing a curl against https://INGRESS_IP/healthz — this will not work.</p> <p><code>The traffic is routed exactly as it is to the HTTPS port of the service/application</code>: The reason why 443 in your Service definition doesn't work but 80 does is that when you expose the Service on port 443, the load balancer will fail to connect to a backend without a proper certificate; your backend must also be configured to present a certificate to the load balancer to encrypt traffic. The <code>secretName</code> configured at the Ingress is the certificate used by clients to connect to the load balancer. The Google HTTP(S) load balancer terminates the SSL connection and initiates a new connection to the backend using whatever port you specify in the Ingress. If that port is 443 but the backend is not configured with SSL certificates, that connection will fail.</p> <p>Overall you don't need to encrypt traffic between the load balancers and the backends; it's doable but not needed, as Google <a href="https://cloud.google.com/load-balancing/docs/ssl-certificates/encryption-to-the-backends" rel="nofollow noreferrer">encrypts</a> that traffic at the network level anyway.</p>
boredabdel
<p>I am trying to use OPA Gatekeeper to modify certain Kubernetes deployments. In this example I want to change the display name of service accounts, regardless of what the user provided. So far I was following the documentation here: <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/mutation/" rel="nofollow noreferrer">https://open-policy-agent.github.io/gatekeeper/website/docs/mutation/</a></p> <p>I have created the following yaml file:</p> <pre><code>apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: Assign
metadata:
  name: change-sa-name
spec:
  applyTo:
  - groups: [&quot;&quot;]
    kinds: [&quot;IAMServiceAccount&quot;]
    versions: [&quot;v1beta1&quot;]
  location: &quot;spec.displayName&quot;
  parameters:
    assign:
      value: &quot;New Name&quot;
</code></pre> <p>and used the following to deploy a service account:</p> <pre><code>apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  labels:
    label-one: &quot;value-one&quot;
  name: iamserviceaccount-sample
spec:
  displayName: Example Service Account
</code></pre> <p>However, upon deploying it the display name still shows up as Example Service Account and not New Name. What exactly am I doing wrong, or what should I be looking at?</p>
Beembo
<p>As per the <a href="https://cloud.google.com/config-connector/docs/reference/resource-docs/iam/iamserviceaccount#schema" rel="nofollow noreferrer">official docs</a>, you set the display name you want directly in the <code>displayName</code> field. Try the yaml below; if you get errors, paste them here.</p> <pre><code>apiVersion: iam.cnrm.cloud.google.com/v1beta1 kind: IAMServiceAccount metadata: labels: label-one: &quot;value-one&quot; name: iamserviceaccount-sample spec: displayName: &lt;Give the display name that you are looking for&gt; </code></pre>
Hemanth Kumar
<p>I'm trying to running my project (that supposed to) deploy on GKE in my local environment with minikube (Docker-Engine) for sake of development and testing, however when I type <code>helm install</code> it tells me I need to install CRDs like <code>BackendConifg</code>, <code>PodMonitoring</code>, <code>ManagedCertificate</code>... etc. first in order to install charts with those kind</p> <p>Is there any place to get those CRD yaml file or any way for minikube to import those CRDs? Thanks!</p> <p>Possible related question (unanswered): <a href="https://stackoverflow.com/questions/73134236/where-can-i-find-gke-crds">Where can I find GKE CRDs?</a></p>
Compeador
<p>GKE CRDs cannot be deployed outside of GKE, especially the ones you mentioned (BackendConfig, ManagedCertificate, ...), because they only make sense when used with GKE.</p>
boredabdel
<p>I was going through the documentation for creating backups for GKE cluster. My cluster version in <strong>1.21.14-gke.700</strong> and in the docs it is written</p> <blockquote> <p>Caution: Backup for GKE requires full privileges to read and write every object in the cluster. The Backup for GKE agent that runs in GKE cluster versions prior to 1.24 runs as a workload in the GKE user cluster. Users or workloads with root access to the underlying node on which the Backup for GKE pod is scheduled, such as through pod hostpath mounts or SSH, can gain these root-in-cluster privileges. To avoid this potential node to cluster escalation, we highly recommend that you run Backup for GKE in GKE clusters running version 1.24.4-gke.800 or higher, where the agent runs on an inaccessible host in the GKE control plane.</p> </blockquote> <p>Can anyone help me to explain the meaning of node to cluster escalation term in detail. Also, what's the harm of running backups for GKE cluster prior to 1.24.</p>
Shobit Jain
<p>If you use Backup for GKE on GKE versions before 1.24, the backup agent runs as a privileged workload on your cluster nodes. A privileged pod is a container that has root access on the node, so anyone with root access to that node can use the agent's cluster-wide privileges; that's what is meant by &quot;node to cluster escalation&quot;.</p> <p>After 1.24 the agent runs in a dedicated namespace with a dedicated Service Account which has limited access.</p>
boredabdel
<p>I get an error when I try to run <code>kubectl apply</code> or <code>kubectl edit</code>; <code>kubectl get pods</code> works normally.</p> <p>&quot;error validating data: the server has asked for the client to provide credentials; if you choose to ignore these errors, turn validation off with <code>--validate=false</code>&quot;</p> <p>I checked my kubeconfig and all the pods in kube-system; everything is working normally.</p>
David
<p>For me this issue appeared with the latest helm version <code>3.12.0</code> running against a cluster with Kubernetes <code>1.24.10</code> and using kubectl version <code>1.27.1</code>.</p> <pre><code> Error: unable to build kubernetes objects from release manifest: error validating &quot;&quot;: error validating data: the server has asked for the client to provide credentials </code></pre> <p>I first thought it was an issue with the kubectl version, so I downgraded to earlier versions as mentioned in the answer from David, but that did not help.</p> <p>The issue was caused by running <code>brew upgrade</code> on macOS; I reverted to the older helm version <code>3.11.3</code> by running:</p> <pre><code># unlink the current version brew unlink helm # download and install the version v3.11.3 curl -O https://raw.githubusercontent.com/Homebrew/homebrew-core/1e9569bdecc40e762028cb441b8bfb7f595d3755/Formula/helm.rb &amp;&amp; brew install ./helm.rb </code></pre> <p>This solved the problem for me.</p>
anessi
<p>I installed JupyterHub with Helm on an EKS cluster, although the EKS service role can be correctly assumed by the hub pod (whose name starts with &quot;hub-&quot;), the user pods (starting with &quot;jupyter-USERNAME&quot;) seem can't assume the role. Because of this, when a user uses boto3 in her notebook, she is asked for her IAM user credentials, which is not ideal.</p> <p>All other pods in that namespace can assume the EKS role automatically except for the JupyterHub user pods. May I have your advice on this please? Thanks everyone for your time and consideration.</p>
user13451354
<p>You probably need to configure the <code>service_account</code> for your KubeSpawner: <a href="https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html" rel="nofollow noreferrer">https://jupyterhub-kubespawner.readthedocs.io/en/latest/spawner.html</a></p>
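<p>If the hub was installed with the Zero to JupyterHub Helm chart, the spawner's service account can be set through the chart values. A sketch, assuming a service account named <code>jupyter-user-sa</code> (a hypothetical name) that you have already set up for IAM Roles for Service Accounts:</p>

```yaml
# values.yaml fragment (hypothetical names)
singleuser:
  serviceAccountName: jupyter-user-sa   # SA the jupyter-USERNAME pods run as
```

<p>On EKS the service account itself would carry an <code>eks.amazonaws.com/role-arn</code> annotation pointing at the role the notebooks should assume.</p>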
Lydian
<p>We can scale the nodes in a Kubernetes cluster based on CPU or Memory, can we autoscale the nodes based on the pod pressure (number of pods per node)? In many of my cases, the CPU and Memory is not the most demanded resources, but number of pods per node is usually under pressure.</p>
xsqian
<p>In Kubernetes there is no built-in autoscaling mechanism based on the number of pods per node. Instead you can use the <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a>. Cluster Autoscaler doesn't depend on load metrics; instead, it's based on scheduling simulation and declared Pod requests. It can create new nodes when there is demand for more pods and scale down nodes when there are no more pods to schedule on them, effectively balancing the number of pods per node. For more information on cluster autoscaling refer to this <a href="https://github.com/kubernetes/autoscaler" rel="nofollow noreferrer">Gitlink</a> and this <a href="https://learnk8s.io/kubernetes-autoscaling-strategies" rel="nofollow noreferrer">blog</a> by Daniele Polencic.</p> <p>If you are looking for this in GKE then refer to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler" rel="nofollow noreferrer">official doc- GKE</a> and <a href="https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-gke#cluster_autoscaler" rel="nofollow noreferrer">Cluster Autoscaler</a>.</p> <p>If you need this feature implemented in the future, you can also raise a feature request <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">here</a>.</p> <p><strong>EDIT :</strong></p> <p>@xsqian : Based on your comments, you are using AWS <code>m5.2xlarge</code> instances. 
The maximum number of pods per EKS instance type is listed in this <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="nofollow noreferrer">eni-max-pods git link</a>.</p> <p>The <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="nofollow noreferrer">formula</a> for the maximum number of pods per instance is as follows:</p> <blockquote> <p>N * (M-1) + 2</p> <p>Where:</p> <ul> <li>N is the number of Elastic Network Interfaces (ENI) of the instance type</li> <li>M is the number of IP addresses of a single ENI</li> </ul> <p>Values for <code>N</code> and <code>M</code> for each instance type are in this <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerEN" rel="nofollow noreferrer">Elastic network interface doc</a></p> </blockquote> <p>Based on this, as you are using <code>m5.2xlarge</code>, the per-node pod limit is 58, so with two nodes the limit is 116. You can't schedule more pods beyond the 116; if you try to add more, they will stay in a Pending state.</p> <p>However, as of August 2021 it's possible to increase the max pods on a node using the latest AWS VPC CNI plugin, as described in <a href="https://aws.amazon.com/blogs/containers/amazon-vpc-cni-increases-pods-per-node-limits/" rel="nofollow noreferrer">amazon-vpc-cni-increases-pods-per-node-limits</a>; also refer to this <a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html" rel="nofollow noreferrer">EKS user guide</a>.</p>
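<p>The formula can be sketched in a couple of lines of Python; the figures for <code>m5.2xlarge</code> (4 ENIs with 15 IPv4 addresses each) come from the AWS table linked above:</p>

```python
def max_pods(enis: int, ips_per_eni: int) -> int:
    """ENI-based pod limit: N * (M - 1) + 2.

    The primary IP of each ENI is reserved for the node, hence M - 1;
    the + 2 accounts for the host-networked aws-node and kube-proxy pods.
    """
    return enis * (ips_per_eni - 1) + 2

# m5.2xlarge: N = 4 ENIs, M = 15 IPv4 addresses per ENI
print(max_pods(4, 15))  # → 58
```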
Hemanth Kumar
<p>In Kubernetes I have deployed RabbitMQ pod, service and ingress. I have access to RabbitMQ UI and I added a user that has access to virtual host &quot;/&quot; and a password.</p> <p>I tried to create a Python pod that connects to RabbitMQ and create a queue but I have the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/dump_notify/producer.py&quot;, line 40, in &lt;module&gt; main() File &quot;/dump_notify/producer.py&quot;, line 24, in main connection = pika.BlockingConnection(parameters) File &quot;/opt/app-root/lib64/python3.8/site-packages/pika/adapters/blocking_connection.py&quot;, line 360, in __init__ self._impl = self._create_connection(parameters, _impl_class) File &quot;/opt/app-root/lib64/python3.8/site-packages/pika/adapters/blocking_connection.py&quot;, line 451, in _create_connection raise self._reap_last_connection_workflow_error(error) pika.exceptions.AMQPConnectionError </code></pre> <p>My Python code is this:</p> <pre><code>import pika from watchdog.observers import Observer from watchdog.events import FileSystemEventHandler class FileEventHandler(FileSystemEventHandler): def __init__(self, channel): self.channel = channel def on_created(self, event): if not event.is_directory: self.channel.basic_publish(exchange='', routing_key='myqueue', body='New file has been added to SFTP : %s' % event.src_path) def main(): credentials = pika.PlainCredentials('Lbeodeobf', 'test') print('Logged') parameters = pika.ConnectionParameters('X', 5672, '/', credentials) print(parameters) connection = pika.BlockingConnection(parameters) print(&quot;Connection...&quot;) channel = connection.channel() channel.queue_declare(queue='myqueue') event_handler = FileEventHandler(channel) observer = Observer() observer.schedule(event_handler, path='/mnt/', recursive=False) observer.start() observer.join() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Where can I be wrong ?</p>
jos97
<p>The <code>pika.exceptions.AMQPConnectionError</code> you are getting is not related to the Python code itself; it means the client failed to establish a connection to the RabbitMQ broker. Make sure a broker is actually running and reachable at the host and port you pass to <code>ConnectionParameters</code>. For a local setup, install <a href="https://www.rabbitmq.com/download.html" rel="nofollow noreferrer">RabbitMQ</a> on your PC or server; on a Linux server, run <code>sudo systemctl start rabbitmq-server</code> and <code>sudo systemctl enable rabbitmq-server</code> to start and enable the server, then run the Python code again, which may resolve your issue.</p>
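<p>Before digging into pika itself, it can help to confirm the broker's TCP port is reachable from inside the pod at all. A small sketch (the host name <code>rabbitmq</code> is an assumption; use your actual Service name):</p>

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refused connections and timeouts
        return False

# from inside a pod: the RabbitMQ Service DNS name and the AMQP port
print(can_connect("rabbitmq", 5672))
```

<p>If this prints <code>False</code>, the problem is networking (Service name, port, or a NetworkPolicy) rather than credentials.</p>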
Hemanth Kumar
<p>I encountered as below when executing sudo kubeadm init Help me~~</p> <pre><code>$ sudo kubeadm init [init] Using Kubernetes version: v1.24.1 [preflight] Running pre-flight checks [WARNING SystemVerification]: missing optional cgroups: blkio error execution phase preflight: [preflight] Some fatal errors occurred: [ERROR CRI]: container runtime is not running: output: E0605 10:35:34.973561 12491 remote_runtime.go:925] &quot;Status from runtime service failed&quot; err=&quot;rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; time=&quot;2022-06-05T10:35:34+09:00&quot; level=fatal msg=&quot;getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService&quot; , error: exit status 1 [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...` To see the stack trace of this error execute with --v=5 or higher </code></pre>
Jaehyun Lee
<p>Workaround found here: <a href="https://github.com/containerd/containerd/issues/4581" rel="noreferrer">https://github.com/containerd/containerd/issues/4581</a> (removing the default containerd config re-enables the CRI plugin, which that config ships disabled):</p> <pre><code>rm /etc/containerd/config.toml systemctl restart containerd kubeadm init </code></pre>
benj
<p>I am new to kubernetes, so please bear with me. I have created a azure kubernetes private cluster, i have deployed the pods for a basic webapplication &amp; CLusterIP service , I have enabled App gateway ingress controller for the aks and deployed the ingress service that looks like below, in the ingress controller the backend is shown healthy, meaning it is able to reach the pod and get 200 ok response. However when i try to access my application by using the public IP of the ingress controller i get a 404 not found from the Application gateway. My aks cluster and ingress are in same Vnet &amp; I have verified that the route table of the aks cluster subnet has been added to ingress subnet.</p> <p>I am not sure if there is any special configuration needed for using AGIC with private AKS. Does anyone have any idea about this? Thank you!</p> <pre><code> </code></pre> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: apigw-ingress annotations: kubernetes.io/ingress.class: azure/application-gateway spec: rules: - host: backendpocwebapp.&lt;location&gt;.cloudapp.azure.com http: paths: - path: &quot;/&quot; pathType: Prefix backend: service: name: nextjspocapp port: number: 80 </code></pre> <pre><code> </code></pre> <p>Below is my clusterIp service exposing port 80:</p> <pre><code> </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: name: nextjspocapp annotations: service.beta.kubernetes.io/azure-dns-label-name: backendpocwebapp labels: run: nextjspocapp spec: ports: - port: 80 protocol: TCP targetPort: 3000 selector: app: nextjspocapp </code></pre> <pre><code> </code></pre> <p>Below is the deployment:</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: nextjspocapp labels: app: nextjspocapp tier: poc spec: revisionHistoryLimit: 5 replicas: 2 selector: matchLabels: tier: poc template: metadata: name: nextjspocapp labels: app: nextjspocapp tier: poc spec: containers: - name: nextjspocapp image: &lt;imagename&gt;:tag 
ports: - containerPort: 3000 </code></pre> <p>Added ingress controller and ingress service to aks , expected to access the pods using ingress public IP</p>
inspired_sup
<p>Not an expert at all in AKS, but from the App Gateway perspective, it looks like the HTTP requests sent by your probes are the correct port and hostname, while your regular traffic isn't sending the correct hostname. Usually 404's are experienced when the site is alive, listening, but the requested hostname doesn't match any bindings. Your regular traffic might even be using the IP as a hostname.</p> <p>The hostname can either be specified in the backend HTTP settings by overriding the hostname received by App Gateway clients, or by making sure the hostname you are hitting the App Gateway with matches what the backend is expecting.</p> <p>This page has all the annotations but the anchor is for the backend hostname: <a href="https://azure.github.io/application-gateway-kubernetes-ingress/annotations/#backend-hostname" rel="nofollow noreferrer">https://azure.github.io/application-gateway-kubernetes-ingress/annotations/#backend-hostname</a></p>
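<p>A sketch of that annotation on the Ingress, assuming the backend expects a hostname such as <code>backendpocwebapp.internal</code> (a hypothetical value; use whatever hostname your app is actually configured to serve):</p>

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apigw-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    # hostname App Gateway sends to the backend instead of the client's Host header
    appgw.ingress.kubernetes.io/backend-hostname: "backendpocwebapp.internal"
```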
DusDee
<p>After a long struggle I just created my cluster, deployed a sample container busybox now i am trying to run the command exec and i get the following error:</p> <p><strong>error dialing backend: x509: certificate signed by unknown authority</strong></p> <p>How do i solve this one: here is the command output with v=9 log level. kubectl exec -v=9 -ti busybox -- nslookup kubernetes I also noticed in the logs that this curl command that failed is actually the second command the first GET command passed and it return results without any issues.. ( <em><strong>GET <a href="https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox" rel="nofollow noreferrer">https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox</a> 200 OK</strong></em>)</p> <pre><code>curl -k -v -XPOST -H &quot;X-Stream-Protocol-Version: v4.channel.k8s.io&quot; -H &quot;X-Stream-Protocol-Version: v3.channel.k8s.io&quot; -H &quot;X-Stream-Protocol-Version: v2.channel.k8s.io&quot; -H &quot;X-Stream-Protocol-Version: channel.k8s.io&quot; -H &quot;User-Agent: kubectl/v1.19.0 (linux/amd64) kubernetes/e199641&quot; 'https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&amp;command=kubernetes&amp;container=busybox&amp;stdin=true&amp;stdout=true&amp;tty=true' I1018 02:19:40.776134 129813 round_trippers.go:443] POST https://myloadbalancer.local:6443/api/v1/namespaces/default/pods/busybox/exec?command=nslookup&amp;command=kubernetes&amp;container=busybox&amp;stdin=true&amp;stdout=true&amp;tty=true 500 Internal Server Error in 43 milliseconds I1018 02:19:40.776189 129813 round_trippers.go:449] Response Headers: I1018 02:19:40.776206 129813 round_trippers.go:452] Content-Type: application/json I1018 02:19:40.776234 129813 round_trippers.go:452] Date: Sun, 18 Oct 2020 02:19:40 GMT I1018 02:19:40.776264 129813 round_trippers.go:452] Content-Length: 161 I1018 02:19:40.776277 129813 round_trippers.go:452] Cache-Control: no-cache, private I1018 
02:19:40.777904 129813 helpers.go:216] server response object: [{ &quot;metadata&quot;: {}, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;error dialing backend: x509: certificate signed by unknown authority&quot;, &quot;code&quot;: 500 }] F1018 02:19:40.778081 129813 helpers.go:115] Error from server: error dialing backend: x509: certificate signed by unknown authority goroutine 1 [running]: </code></pre> <p>Adding more information: This is on UBUNTU 20.04. I went through step by step creating my cluster manually as a beginner I need that experience instead of spinning up with tools like kubeadm or minikube</p> <pre><code>xxxx@master01:~$ kubectl exec -ti busybox -- nslookup kubernetes Error from server: error dialing backend: x509: certificate signed by unknown authority xxxx@master01:~$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default busybox 1/1 Running 52 2d5h kube-system coredns-78cb77577b-lbp87 1/1 Running 0 2d5h kube-system coredns-78cb77577b-n7rvg 1/1 Running 0 2d5h kube-system weave-net-d9jb6 2/2 Running 7 2d5h kube-system weave-net-nsqss 2/2 Running 0 2d14h kube-system weave-net-wnbq7 2/2 Running 7 2d5h kube-system weave-net-zfsmn 2/2 Running 0 2d14h kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-dhcpn 1/1 Running 0 2d3h kubernetes-dashboard kubernetes-dashboard-665f4c5ff-6qnzp 1/1 Running 7 2d3h tinashe@master01:~$ kubectl logs busybox Error from server: Get &quot;https://worker01:10250/containerLogs/default/busybox/busybox&quot;: x509: certificate signed by unknown authority xxxx@master01:~$ xxxx@master01:~$ kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.3&quot;, GitCommit:&quot;1e11e4a2108024935ecfcb2912226cedeafd99df&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-10-14T12:50:19Z&quot;, GoVersion:&quot;go1.15.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: 
version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.3&quot;, GitCommit:&quot;1e11e4a2108024935ecfcb2912226cedeafd99df&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-10-14T12:41:49Z&quot;, GoVersion:&quot;go1.15.2&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
tinashe.chipomho
<p><strong>Edited for simplicity:</strong></p> <p>My cluster operator kube-apiserver was degraded, causing my certificate failures. Resolving that degradation was necessary to resolve the overarching problem resulting in x509 errors. Validate that all masters are in READY state and that pods in your apiserver projects are also scheduled and ready. See the below KCS for more information:</p> <p><a href="https://access.redhat.com/solutions/4849711" rel="nofollow noreferrer">https://access.redhat.com/solutions/4849711</a></p> <p><em>(Removed outdated/incorrect information below about local cert pull/export.)</em></p>
scotchman
<p>Im trying to use jq on kubernetes json output, to create new json object containing list of objects - container and image per pod, however im getting cartesian product.</p> <p>my input data (truncated from sensitive info):</p> <pre><code>{ &quot;apiVersion&quot;: &quot;v1&quot;, &quot;items&quot;: [ { &quot;apiVersion&quot;: &quot;v1&quot;, &quot;kind&quot;: &quot;Pod&quot;, &quot;metadata&quot;: { &quot;creationTimestamp&quot;: &quot;2021-06-30T12:45:40Z&quot;, &quot;name&quot;: &quot;pod-1&quot;, &quot;namespace&quot;: &quot;default&quot;, &quot;resourceVersion&quot;: &quot;757679286&quot;, &quot;selfLink&quot;: &quot;/api/v1/namespaces/default/pods/pod-1&quot; }, &quot;spec&quot;: { &quot;containers&quot;: [ { &quot;image&quot;: &quot;image-1&quot;, &quot;imagePullPolicy&quot;: &quot;Always&quot;, &quot;name&quot;: &quot;container-1&quot;, &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot;, &quot;terminationMessagePolicy&quot;: &quot;File&quot;, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/var/run/secrets/kubernetes.io/serviceaccount&quot;, &quot;readOnly&quot;: true } ] }, { &quot;image&quot;: &quot;image-2&quot;, &quot;imagePullPolicy&quot;: &quot;Always&quot;, &quot;name&quot;: &quot;container-2&quot;, &quot;resources&quot;: {}, &quot;terminationMessagePath&quot;: &quot;/dev/termination-log&quot;, &quot;terminationMessagePolicy&quot;: &quot;File&quot;, &quot;volumeMounts&quot;: [ { &quot;mountPath&quot;: &quot;/var/run/secrets/kubernetes.io/serviceaccount&quot;, &quot;readOnly&quot;: true } ] } ], &quot;dnsPolicy&quot;: &quot;ClusterFirst&quot;, &quot;enableServiceLinks&quot;: true, &quot;priority&quot;: 0, &quot;restartPolicy&quot;: &quot;Always&quot;, &quot;schedulerName&quot;: &quot;default-scheduler&quot;, &quot;securityContext&quot;: {}, &quot;serviceAccount&quot;: &quot;default&quot;, &quot;serviceAccountName&quot;: &quot;default&quot;, &quot;terminationGracePeriodSeconds&quot;: 30, 
&quot;tolerations&quot;: [ { &quot;effect&quot;: &quot;NoExecute&quot;, &quot;key&quot;: &quot;node.kubernetes.io/not-ready&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;tolerationSeconds&quot;: 300 }, { &quot;effect&quot;: &quot;NoExecute&quot;, &quot;key&quot;: &quot;node.kubernetes.io/unreachable&quot;, &quot;operator&quot;: &quot;Exists&quot;, &quot;tolerationSeconds&quot;: 300 } ], &quot;volumes&quot;: [ { &quot;name&quot;: &quot;default-token-b954f&quot;, &quot;secret&quot;: { &quot;defaultMode&quot;: 420, &quot;secretName&quot;: &quot;default-token-b954f&quot; } } ] }, &quot;status&quot;: { &quot;conditions&quot;: [ { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-06-30T12:45:40Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;Initialized&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-06-30T12:45:40Z&quot;, &quot;message&quot;: &quot;containers with unready status: [container-1 container-2]&quot;, &quot;reason&quot;: &quot;ContainersNotReady&quot;, &quot;status&quot;: &quot;False&quot;, &quot;type&quot;: &quot;Ready&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-06-30T12:45:40Z&quot;, &quot;message&quot;: &quot;containers with unready status: [container-1 container-2]&quot;, &quot;reason&quot;: &quot;ContainersNotReady&quot;, &quot;status&quot;: &quot;False&quot;, &quot;type&quot;: &quot;ContainersReady&quot; }, { &quot;lastProbeTime&quot;: null, &quot;lastTransitionTime&quot;: &quot;2021-06-30T12:45:40Z&quot;, &quot;status&quot;: &quot;True&quot;, &quot;type&quot;: &quot;PodScheduled&quot; } ], &quot;containerStatuses&quot;: [ { &quot;image&quot;: &quot;image-1&quot;, &quot;imageID&quot;: &quot;&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;container-1&quot;, &quot;ready&quot;: false, &quot;restartCount&quot;: 0, &quot;started&quot;: false, &quot;state&quot;: { &quot;waiting&quot;: { &quot;message&quot;: 
&quot;Back-off pulling image \&quot;image-1\&quot;&quot;, &quot;reason&quot;: &quot;ImagePullBackOff&quot; } } }, { &quot;image&quot;: &quot;image-2&quot;, &quot;imageID&quot;: &quot;&quot;, &quot;lastState&quot;: {}, &quot;name&quot;: &quot;container-2&quot;, &quot;ready&quot;: false, &quot;restartCount&quot;: 0, &quot;started&quot;: false, &quot;state&quot;: { &quot;waiting&quot;: { &quot;message&quot;: &quot;Back-off pulling image \&quot;image-2\&quot;&quot;, &quot;reason&quot;: &quot;ImagePullBackOff&quot; } } } ], &quot;qosClass&quot;: &quot;BestEffort&quot;, &quot;startTime&quot;: &quot;2021-06-30T12:45:40Z&quot; } } ], &quot;kind&quot;: &quot;List&quot;, &quot;metadata&quot;: { &quot;resourceVersion&quot;: &quot;&quot;, &quot;selfLink&quot;: &quot;&quot; } } </code></pre> <p>my command:</p> <pre><code>jq '.items[] | { &quot;name&quot;: .metadata.name, &quot;containers&quot;: [{ &quot;name&quot;: .spec.containers[].name, &quot;image&quot;: .spec.containers[].image }]} ' </code></pre> <p>desired output:</p> <pre><code>{ &quot;name&quot;: &quot;pod_1&quot;, &quot;containers&quot;: [ { &quot;name&quot;: &quot;container_1&quot;, &quot;image&quot;: &quot;image_1&quot; }, { &quot;name&quot;: &quot;container_2&quot;, &quot;image&quot;: &quot;image_2&quot; } ] } </code></pre> <p>output I get:</p> <pre><code>{ &quot;name&quot;: &quot;pod-1&quot;, &quot;containers&quot;: [ { &quot;name&quot;: &quot;container-1&quot;, &quot;image&quot;: &quot;image-1&quot; }, { &quot;name&quot;: &quot;container-1&quot;, &quot;image&quot;: &quot;image-2&quot; }, { &quot;name&quot;: &quot;container-2&quot;, &quot;image&quot;: &quot;image-1&quot; }, { &quot;name&quot;: &quot;container-2&quot;, &quot;image&quot;: &quot;image-2&quot; } ] } </code></pre> <p>Could anyone explain what am I doing wrong?</p> <p>Best Regards, Piotr.</p>
Zegal
<p>The problem is <code>&quot;name&quot;: .spec.containers[].name</code> and <code>&quot;image&quot;: .spec.containers[].image</code>: Both expressions generate a sequence of each value for <code>name</code> and <code>image</code> which will than be combined.</p> <p>Simplified example of why you get a Cartesian product:</p> <p><code>jq -c -n '{name: (&quot;A&quot;, &quot;B&quot;), value: (&quot;C&quot;, &quot;D&quot;)}'</code></p> <p>outputs:</p> <pre class="lang-json prettyprint-override"><code>{&quot;name&quot;:&quot;A&quot;,&quot;value&quot;:&quot;C&quot;} {&quot;name&quot;:&quot;A&quot;,&quot;value&quot;:&quot;D&quot;} {&quot;name&quot;:&quot;B&quot;,&quot;value&quot;:&quot;C&quot;} {&quot;name&quot;:&quot;B&quot;,&quot;value&quot;:&quot;D&quot;} </code></pre> <hr /> <p>You get the desired output using this <code>jq</code> filter on your input:</p> <pre><code>jq ' .items[] | { &quot;name&quot;: .metadata.name, &quot;containers&quot;: .spec.containers | map({name, image}) }' </code></pre> <p>output:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;name&quot;: &quot;pod-1&quot;, &quot;containers&quot;: [ { &quot;name&quot;: &quot;container-1&quot;, &quot;image&quot;: &quot;image-1&quot; }, { &quot;name&quot;: &quot;container-2&quot;, &quot;image&quot;: &quot;image-2&quot; } ] } </code></pre>
jpseng
<p>I'm getting lots of TLS handshake errors in my pods:</p> <pre><code>2023/04/19 05:06:38 http: TLS handshake error from 10.21.152.134:36134: EOF </code></pre> <p>as a result of a known issue introduced in go 1.17:</p> <pre><code>https://github.com/kubernetes/kubernetes/issues/109022 https://github.com/golang/go/issues/50984 https://github.com/golang/go/issues/32406 </code></pre> <p>Is there a way at least to suppress the logs? Thanks!</p>
Tzvika Avni
<p>This is a known issue: the EOF errors seem to be related to a <a href="https://github.com/golang/go/issues/50984" rel="nofollow noreferrer">Go bug</a> and appear on <a href="https://github.com/kubernetes/kubernetes/issues/109022" rel="nofollow noreferrer">Kubernetes 1.23 and 1.24</a>. They don't cause any functional issues and are generated from core Kubernetes, so we need to wait until a fix is released upstream. As a workaround, you can try to suppress the TLS handshake errors by following this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/tls/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">doc</a>:</p> <blockquote> <p>This can be disabled using ssl-redirect: &quot;false&quot; in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: &quot;false&quot; annotation in the particular resource.</p> </blockquote> <p>Refer to this <a href="https://stackoverflow.com/a/34889113/19230181">SO</a> answer by Sathya for a workaround and this <a href="https://github.com/kyverno/kyverno/issues/6287" rel="nofollow noreferrer">Gitlink</a> for any updates on disabling the logs.</p>
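<p>For reference, a sketch of the ConfigMap route; the name and namespace of the controller's ConfigMap depend on how your ingress-nginx was installed, so treat these as assumptions:</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed; match your install
  namespace: ingress-nginx
data:
  ssl-redirect: "false"
```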
Hemanth Kumar
<p>I am currently trying to deploy backstage to a GKE cluster using a postgres database in CloudSQL. I have deployed a sidecar to access the cloudsql database in my deployment and I have a deployment for the docker container. The backend deployment is unable to deploy because of the following error:</p> <pre><code>{&quot;level&quot;:&quot;info&quot;,&quot;message&quot;:&quot;Performing database migration&quot;,&quot;plugin&quot;:&quot;catalog&quot;,&quot;service&quot;:&quot;backstage&quot;,&quot;type&quot;:&quot;plugin&quot;} Backend failed to start up KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_PG.acquireConnection (/app/node_modules/knex/lib/client.js:307:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:287:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async listCompleted (/app/node_modules/knex/lib/migrations/migrate/migration-list-resolver.js:12:3) at async Promise.all (index 1) at async Migrator.latest (/app/node_modules/knex/lib/migrations/migrate/Migrator.js:63:29) at async applyDatabaseMigrations (/app/node_modules/@backstage/plugin-catalog-backend/dist/index.cjs.js:2020:3) at async CatalogBuilder.build (/app/node_modules/@backstage/plugin-catalog-backend/dist/index.cjs.js:4095:7) at async createPlugin$4 (/app/packages/backend/dist/index.cjs.js:84:40) at async main (/app/packages/backend/dist/index.cjs.js:276:29) { sql: undefined, bindings: undefined } </code></pre> <p>This is my deployment:</p> <pre><code> apiVersion: apps/v1 kind: Deployment metadata: name: backstage-deployment namespace: backstage spec: replicas: 1 selector: matchLabels: app: backstage template: metadata: labels: app: backstage spec: serviceAccountName: backstage-sa containers: - name: backstage image: us-central1-docker.pkg.dev/px-mike-project-hje/backstage/backstage imagePullPolicy: Always ports: - name: backstage 
containerPort: 7007 env: - name: POSTGRES_USER valueFrom: secretKeyRef: name: pg-db-ref key: username - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: pg-db-ref key: password - name: POSTGRES_HOST valueFrom: secretKeyRef: name: pg-db-ref key: endpoint - name: cloud-sql-proxy image: gcr.io/cloudsql-docker/gce-proxy:1.28.0 command: - &quot;/cloud_sql_proxy&quot; - &quot;-ip_address_types=PRIVATE&quot; - &quot;-log_debug_stdout&quot; - &quot;-instances=px-mike-project-hje:us-central1:pg-database=tcp:5432&quot; securityContext: runAsNonRoot: true resources: requests: memory: &quot;2Gi&quot; cpu: &quot;1&quot; </code></pre> <p>Here is my app-config for my database:</p> <pre><code> database: client: pg connection: host: ${POSTGRES_HOST} port: 5432 user: ${POSTGRES_USER} password: ${POSTGRES_PASSWORD} database: pg-database ensureExists: false pluginDivisionMode: schema knexConfig: pool: min: 15 max: 30 acquireTimeoutMillis: 60000 idleTimeoutMillis: 60000 acquireConnectionTimeout: 10000 plugin: # catalog: # connection: # database: pg-database auth: client: better-sqlite3 connection: ':memory:' </code></pre> <p>I've tried to run the docker image locally and was successful. I am stuck running the deployment with the cloudsql postgres database successfully.</p>
Michael Foster
<p><strong>First</strong>, a Cloud SQL Auth Proxy sidecar is needed to reach the database from the GKE cluster. Follow this tutorial and add it to the deployment: <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine</a></p> <p><strong>Second</strong>, Backstage uses Knex for database management, so the databases inside the Cloud SQL Postgres instance will be created by Backstage itself. Change the app config to the following (the hard-coded values can be swapped for env variables from the deployment):</p> <pre><code>database:
  client: pg
  connection:
    host: localhost # ${POSTGRES_HOST}
    port: 5432
    user: postgres # ${POSTGRES_USER}
    password: password # ${POSTGRES_PASSWORD}
</code></pre> <p>The rest of the database configuration should be deleted.</p>
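<p>A quick way to sanity-check the proxy half of this setup before blaming Backstage itself (names are taken from the manifests above; the <code>psql</code> credentials are examples) is to read the sidecar's logs and, if needed, forward its in-pod listener to your machine:</p> <pre><code># Did the proxy start and open 127.0.0.1:5432 inside the pod?
kubectl -n backstage logs deploy/backstage-deployment -c cloud-sql-proxy

# Forward the in-pod listener locally and try a login
kubectl -n backstage port-forward deploy/backstage-deployment 5432:5432
psql &quot;host=localhost port=5432 user=postgres&quot; -c 'select 1'
</code></pre> <p>If the proxy logs show connection errors to the instance, the problem is networking/IAM rather than the Backstage configuration.</p>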
Michael Foster
<p>As I am following the Kubernetes website regarding setting up ingress on minikube, it's all the more confusing... (<a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/</a>)</p> <p>However, here I am summarizing what I have understood so far and question will follow after:</p> <ol> <li><p>When you just use Minikube provided ingress controller (command: Minikube addons enable ingress), it deploys ingress-nginx-controller as '<strong>NodePort</strong>' type at namespace 'ingress-nginx' (huh..? Why such a specific namespace..? so the ingress controller can be at a different namespace than my services..? btw when I do 'kubectl apply -f my-services, they all deploy under namespace 'default')</p> </li> <li><p>You still need the manifest or rule created (command example: kubectl apply -f ingress.yaml) and <strong>IT'S OK</strong> to have the ingress controller to have different namespace as long as in this ingress.yaml you specify namespace to be the namespace where your services are deployed.</p> </li> <li><p>Once you deployed ingress-controller and created ingress object (or manifest) via yaml, then you can route external request to the routed services as long as these services are deployed as '<strong>NodePort</strong>'... wait a sec... didn't I already deploy nginx-controller as 'NodePort' type... ?</p> </li> </ol> <p>Why does the website above guide me to create services as NodePort type.. ? Wouldn't the ingress object (or manifest) take care of having to have to expose these services as NodePort ?? The manifest (such as ingress.yaml) already has all those port and service routing definitions..?</p>
swcraft
<p>As per the given <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">kubernetes doc</a>, when you enable the addon in minikube, the ingress-nginx controller is deployed as a NodePort Service in its own <code>ingress-nginx</code> namespace rather than in <code>kube-system</code>; newer ingress-nginx versions moved it there. It is fine for the controller to live in a different namespace than your Services — the Ingress resource just has to be created in the same namespace as the Services it routes to.</p> <blockquote> <p>Why does the website above guide me to create services as NodePort type :</p> </blockquote> <p>The doc simply uses NodePort as its example, and hence the service got created as NodePort type. You can use other service types (like ClusterIP or LoadBalancer) as per your need: the ingress controller routes to the backend Service inside the cluster, so the backends do not have to be externally exposed themselves.</p> <p>The Service still has to expose a port so the Ingress rules know where to route the request.</p>
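<p>For reference, a minimal rule of the kind the tutorial creates might look like this (the hostname and service name mirror the tutorial's example); note the backend can be a plain ClusterIP Service, since only the controller needs to reach it:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default        # same namespace as the backend Service
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # e.g. a ClusterIP or NodePort Service
                port:
                  number: 8080
</code></pre>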
Hemanth Kumar
<p>I am starting on learning Containers and Kubernetes. When watching tutorials on Containers and Kubernetes, often the instructors says &quot;You could have thousands of containers&quot;.</p> <p>I am trying to understand how we can end up with thousands of containers?</p> <p>After deploying my single container app, how thousands of container instances are created?</p>
Qadri
<p>Adding to GSerg's suggestion:</p> <p>A cluster is a set of nodes running Kubernetes agents, managed by the control plane. You can scale your cluster by adding or removing nodes; how you do this depends on how your cluster is deployed. The documented upper bound is 300,000 total containers per cluster — you can find this in this official <a href="https://kubernetes.io/docs/setup/best-practices/cluster-large/" rel="nofollow noreferrer">doc</a>.</p> <p>As per this <a href="https://www.netapp.com/devops-solutions/what-are-containers/" rel="nofollow noreferrer">doc</a>:</p> <blockquote> <p>Containerized applications can get complicated, however. When in production, many might require hundreds to thousands of separate containers in production. This is where container runtime environments such as Docker benefit from the use of other tools to orchestrate or manage all the containers in operation.</p> <p>One of the most popular tools for this purpose is Kubernetes, a container orchestrator that recognizes multiple container runtime environments, including Docker.</p> </blockquote> <p>The software that runs containers is called the container runtime. Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).</p> <p>Usually, you can let your cluster choose a Pod's default container runtime. If you need more than one container runtime in your cluster, you can specify a RuntimeClass for a Pod to ensure that Kubernetes runs its containers using that specific runtime. You can also use RuntimeClass to run multiple Pods on the same container runtime but with different settings.</p>
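<p>To make the RuntimeClass idea concrete, a minimal sketch (the handler name depends entirely on how your nodes' CRI is configured — <code>myconfiguration</code> is a placeholder, as in the Kubernetes docs):</p> <pre><code>apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass
handler: myconfiguration   # must match a CRI handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  runtimeClassName: myclass   # run this Pod's containers with that runtime
  containers:
    - name: app
      image: nginx
</code></pre>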
Sai Chandini Routhu
<p>I know Minikube has an ingress add on which defaults on nginx. Can you create multiple ingress controller in a node? I tried this before and ran into some issue with the ingress add on.</p>
notaorb
<p>Yes — you can run multiple ingress controllers in a cluster by using <a href="https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#using-ingressclasses" rel="nofollow noreferrer">ingress classes</a>, with each controller handling only the Ingress resources that reference its class. The controllers can be different implementations, including ones provided by other vendors; Kubernetes documents these as <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers" rel="nofollow noreferrer">additional controllers</a>, while the minikube ingress addon only enables the default one. Refer to this doc for information on <a href="https://docs.nginx.com/nginx-ingress-controller/installation/running-multiple-ingress-controllers/" rel="nofollow noreferrer">running multiple ingress controllers</a> and an example of <a href="https://www.suse.com/support/kb/doc/?id=000020160" rel="nofollow noreferrer">How to run multiple ingress controllers</a>.</p>
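<p>As a minimal sketch of how classes keep two controllers apart (the names here are illustrative; the <code>controller</code> value is the one ingress-nginx registers): each controller watches only Ingresses whose <code>ingressClassName</code> matches a class it owns.</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-internal
spec:
  controller: k8s.io/ingress-nginx   # which controller implementation owns this class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app
spec:
  ingressClassName: nginx-internal   # picked up only by that controller
  rules:
    - host: app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
</code></pre>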
Hemanth Kumar
<p>I install kubeflow and tried manual profile creation following to <a href="https://www.kubeflow.org/docs/components/multi-tenancy/getting-started/" rel="nofollow noreferrer">here</a>, but got this print</p> <pre><code>error: unable to recognize &quot;profile.yaml&quot;: no matches for kind &quot;Profile&quot; in version &quot;kubeflow.org/v1beta1&quot;
</code></pre> <p>How can I solve it?</p> <p>Your valuable help is needed.</p> <p>my resource is <code>profile.yaml</code></p> <pre><code>apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: tmp_namespace
spec:
  owner:
    kind: User
    name: [email protected]
  resourceQuotaSpec:
    hard:
      cpu: &quot;2&quot;
      memory: 2Gi
      requests.nvidia.com/gpu: &quot;1&quot;
      persistentvolumeclaims: &quot;1&quot;
      requests.storage: &quot;5Gi&quot;
</code></pre> <blockquote> <p>user infomation in dex:</p> <pre><code>- email: [email protected]
  hash: $2a$12$lRDeywzDl4ds0oRR.erqt.b5fmNpvJb0jdZXE0rMNYdmbfseTzxNW
  userID: &quot;example&quot;
  username: example
</code></pre> <p>Of course I did restart dex</p> <pre><code>$ kubectl rollout restart deployment dex -n auth
</code></pre> </blockquote> <pre><code>$ kubectl version --client &amp;&amp; kubeadm version
</code></pre> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.13&quot;, GitCommit:&quot;a43c0904d0de10f92aa3956c74489c45e6453d6e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-08-17T18:28:56Z&quot;, GoVersion:&quot;go1.16.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
kubeadm version: &amp;version.Info{Major:&quot;1&quot;, Minor:&quot;22&quot;, GitVersion:&quot;v1.22.13&quot;, GitCommit:&quot;a43c0904d0de10f92aa3956c74489c45e6453d6e&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2022-08-17T18:27:51Z&quot;, GoVersion:&quot;go1.16.15&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;}
</code></pre>
TaeUk Noh
<p>I've found a way.</p> <p>If you see the message <code>no matches for kind &quot;Profile&quot; in version &quot;kubeflow.org/v1beta1&quot;</code>, you may not have done the two necessary installs.</p> <p>Go to <a href="https://github.com/kubeflow/manifests" rel="nofollow noreferrer">kubeflow/manifests</a> and follow the commands to install <code>Profiles + KFAM</code> and <code>User Namespace</code>.</p>
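<p>For reference, at the time of writing the manifests README installs those two components with kustomize commands along these lines (run from a checkout of the repo; paths may move between releases, so check the README for your version):</p> <pre><code># Profiles + KFAM
kustomize build apps/profiles/upstream/overlays/kubeflow | kubectl apply -f -

# User namespace
kustomize build common/user-namespace/base | kubectl apply -f -
</code></pre> <p>Once the Profile CRD is installed, <code>kubectl apply -f profile.yaml</code> should be recognized.</p>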
TaeUk Noh
<p>After installing Rancher Desktop on macOS 13.2.1 (Apple M1) I walk through the <a href="https://docs.rancherdesktop.io/how-to-guides/hello-world-example" rel="nofollow noreferrer">Hello World</a> documentation. During the &quot;Deploy to Kubernetes&quot; part I run into this problem:</p> <pre class="lang-bash prettyprint-override"><code>$ kubectl run hello-world --image=nginx-helloworld:latest --image-pull-policy=Never --port=80
I0306 08:42:03.954011    9988 versioner.go:58] Get &quot;https://127.0.0.1:6443/version?timeout=5s&quot;: x509: certificate signed by unknown authority
E0306 08:42:04.038118    9988 memcache.go:238] couldn't get current server API group list: Get &quot;https://127.0.0.1:6443/api?timeout=32s&quot;: x509: certificate signed by unknown authority
Unable to connect to the server: x509: certificate signed by unknown authority
</code></pre> <p>I kind of understand the error but I don't know how to fix it properly. I followed all the steps in the documentation.</p>
wintermeyer
<p>The error you are getting is &quot;Unable to connect to the server: x509: certificate signed by unknown authority&quot;.</p> <p>As per this <a href="https://ranchermanager.docs.rancher.com/v2.5/getting-started/installation-and-upgrade/resources/custom-ca-root-certificates" rel="nofollow noreferrer">document</a></p> <blockquote> <p>Services that Rancher needs to access are sometimes configured with a certificate from a custom/internal CA root, also known as self signed certificate. If the presented certificate from the service cannot be validated by Rancher, the following error displays: <code>x509: certificate signed by unknown authority.</code></p> <p>To validate the certificate, the CA root certificates need to be added to Rancher. As Rancher is written in Go, we can use the environment variable <code>SSL_CERT_DIR</code> to point to the directory where the CA root certificates are located in the container. The CA root certificates directory can be mounted using the Docker volume option (<code>-v host-source-directory:container-destination-directory</code>) when starting the Rancher container.</p> </blockquote> <p>Refer to this official <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/#tls-certificate-errors" rel="nofollow noreferrer">document</a> for more information.</p>
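<p>Concretely, the mount-and-environment-variable approach from that document looks roughly like this when starting a Rancher server container (the host path is an example — point it at the directory holding your CA root certificates):</p> <pre><code>docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/certs:/container/certs \
  -e SSL_CERT_DIR=&quot;/container/certs&quot; \
  --privileged \
  rancher/rancher:latest
</code></pre> <p>With the CA roots visible inside the container, Go's TLS stack can validate the previously-unknown certificate and the x509 error goes away.</p>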
Sai Chandini Routhu
<p>I'm using a k8s HPA template for CPU and memory like below:</p> <pre><code>---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-cpu
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  targetCPUUtilizationPercentage: {{.Values.hpa.cpu}}
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: {{.Chart.Name}}-mem
  labels:
    app: {{.Chart.Name}}
    chart: {{.Chart.Name}}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{.Chart.Name}}
  minReplicas: {{.Values.hpa.min}}
  maxReplicas: {{.Values.hpa.max}}
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageValue: {{.Values.hpa.mem}}
</code></pre> <p>Having two different HPA is causing any new pods spun up for triggering memory HPA limit to be immediately terminated by CPU HPA as the pods' CPU usage is below the scale down trigger for CPU. It always terminates the newest pod spun up, which keeps the older pods around and triggers the memory HPA again, causing an infinite loop. Is there a way to instruct CPU HPA to terminate pods with higher usage rather than nascent pods every time?</p>
Ankit Sethi
<h2>Autoscaling based on multiple metrics / custom metrics</h2> <p>Instead of two separate HPAs (which end up fighting each other, as you observed), define a single HPA that lists both resource metrics:</p> <pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 100Mi
</code></pre> <p>When created, the Horizontal Pod Autoscaler monitors the nginx Deployment for average CPU utilization and average memory utilization, and autoscales the Deployment based on the metric whose value would create the larger autoscale event. Because one controller evaluates both metrics together, it will not scale down while either metric still needs the extra replicas.</p> <blockquote> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-apply" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling#kubectl-apply</a></p> </blockquote>
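<p>When switching over, remember to delete the two existing HPA objects so they stop acting on the Deployment (the names below assume the chart naming from the question; the file name is a placeholder):</p> <pre><code>kubectl delete hpa &lt;chart-name&gt;-cpu &lt;chart-name&gt;-mem
kubectl apply -f combined-hpa.yaml
</code></pre>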
Rameez Sheraz
<p>I know how to measure needed RAM and CPU usage, but not how to measure used disk space of every container in a k8s cluster.</p> <p>I've tried cAdvisor, but only provides stats per node, not per container.</p> <p><a href="https://github.com/google/cadvisor/issues/198" rel="nofollow noreferrer">https://github.com/google/cadvisor/issues/198</a></p> <p>Any recommended tool/procedure? In this way I could specify in the <code>resources</code> section of a pod, sensible values.</p>
david.perez
<p>You need to run <code>df</code> inside the container. Also go through this <a href="https://docs.openshift.com/container-platform/4.8/storage/understanding-ephemeral-storage.html#storage-ephemeral-storage-monitoring_understanding-ephemeral-storage" rel="nofollow noreferrer">doc</a>: <code>/bin/df</code> can be used as a tool to monitor ephemeral storage usage on the volume where ephemeral container data is located. Refer to this <a href="https://www.airplane.dev/blog/kubernetes-ephemeral-storage" rel="nofollow noreferrer">blog</a> by Sudip Sengupta for more information.</p> <p><strong>Try the below commands to see a Kubernetes container's ephemeral-storage usage details:</strong></p> <ul> <li><code>du -sh /</code> (run inside a container) gives the space consumed by your container's files. It returns the amount of disk space the current directory and everything under it are using as a whole, e.g. <code>2.4G</code>.</li> <li><code>kubectl exec &lt;podname&gt; -c &lt;container_name&gt; -- df -h</code> reports the used space in that particular container.</li> <li>Refer to this <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-emphemeralstorage-consumption" rel="nofollow noreferrer">ephemeral storage-consumption management</a> page.</li> </ul> <p><strong>Below is the output</strong>: <a href="https://i.stack.imgur.com/uABEo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uABEo.png" alt="output" /></a></p> <p>You can also check the size of a particular directory using <code>du -h someDir</code>.</p> <ul> <li><p>Refer to this <a href="https://blog.px.dev/container-filesystems/#title" rel="nofollow noreferrer">blog</a> by Omid Azizi for inspecting container file systems.</p> </li> <li><p>Since you intend to fill in the <code>resources</code> section of a pod, refer to <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage" rel="nofollow noreferrer">setting requests and limits for local ephemeral storage</a>.</p> </li> </ul>
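<p>If you want per-container numbers without exec'ing into every pod, one option (a sketch — the node name is a placeholder) is the kubelet's stats summary endpoint, which reports ephemeral-storage usage per pod and per container as JSON:</p> <pre><code>kubectl get --raw &quot;/api/v1/nodes/&lt;node-name&gt;/proxy/stats/summary&quot;
</code></pre> <p>Each container entry includes a <code>rootfs</code>/ephemeral-storage section with bytes used, which is handy for picking sensible request values.</p>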
Hemanth Kumar
<p>We have inherited a kubernetes application on GCP with a Traefik ingress controller, running an old version of traefik (<code>traefik:v1.7.18-alpine</code>). Google cloud is forcing kubernetes upgrades to <code>v1.22</code> soon and we're getting GCP warnings that traefik is using deprecated k8s APIs.</p> <p><a href="https://i.stack.imgur.com/sxw6y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sxw6y.png" alt="enter image description here" /></a></p> <p>Without getting into the weeds, does anybody know what the minimal version of traefik is that would be compatible with kubernetes 1.22? According to this thread, it seems it's <code>traefik 2.X</code> at least - <a href="https://github.com/traefik/traefik/issues/8343" rel="nofollow noreferrer">https://github.com/traefik/traefik/issues/8343</a>.</p> <p>Wondering also if it's possible to configure traefik 1.X to not use this API somehow? Have no idea where/why this deprecated API is in use.</p>
Adam Hughes
<p>Kubernetes API v1.22 requires at least Traefik v2.5, since <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingressclass-v122" rel="nofollow noreferrer">some APIs are removed in Kubernetes version 1.22</a> — most of them were former Beta (v1beta1) APIs. In v2.5, the <a href="https://doc.traefik.io/traefik/reference/dynamic-configuration/kubernetes-crd/#definitions" rel="nofollow noreferrer">Traefik CRDs</a> were updated to support the new API version apiextensions.k8s.io/v1, as the Beta API version of IngressClass is no longer served as of version 1.22.</p> <p>Support for the networking.k8s.io/v1beta1 API version also stops in Kubernetes v1.22.</p> <p>You can refer to these documents for more information about <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingressclass-v122" rel="nofollow noreferrer">Deprecation Warnings</a>, <a href="https://doc.traefik.io/traefik/migration/v2/#v24-to-v25" rel="nofollow noreferrer">Traefik v2 minor migrations</a> &amp; <a href="https://doc.traefik.io/traefik/migration/v1-to-v2/#migration-guide-from-v1-to-v2" rel="nofollow noreferrer">Migration from v1 to v2</a></p> <p>I hope this answered your queries. Have a good day :-)</p>
Manish Bavireddy
<p>I have a kubeadm cluster. I modified the <code>.kube/config</code> file by exporting out the hardcoded <code>certificate-authority-data</code> value (the base64 of the CA certificate) to another file called <code>ca.b64.crt</code>. I also modified the <code>client-certificate</code> and <code>client-key</code> by having their values in other files on disk.</p> <p>So the resulting <code>.kube/config</code> file is:</p> <pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/pki/ca.b64.crt
    server: https://172.31.127.100:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate: /etc/kubernetes/pki/admins/admin.b64.crt
    client-key: /etc/kubernetes/pki/admins/admin.b64.key
</code></pre> <p>The problem is that whenever I try to use kubectl (e.g to get pods), I get:</p> <pre><code>xxxx:~$ k get po
error: unable to load root certificates: unable to parse bytes as PEM block
</code></pre> <p>Any ideas?</p>
Khaled
<p><strong>The below troubleshooting steps will help you resolve issues related to the format or content of the certificate files:</strong></p> <p>1. Check that the certificate files are at the expected locations.</p> <p>2. Check that the certificate and key files are in the correct format, particularly PEM. Note that the file-path fields (<code>certificate-authority</code>, <code>client-certificate</code>, <code>client-key</code>) must point at plain PEM files starting with <code>-----BEGIN CERTIFICATE-----</code>; only the inline <code>*-data</code> fields take base64-encoded content. If your <code>ca.b64.crt</code> still contains the base64 string exported from the kubeconfig, decode it back to PEM first.</p> <p>3. Check that the certificate and key files are readable by the user running the <code>kubectl</code> command.</p> <p>4. Try with elevated privileges, e.g. <code>sudo kubectl</code>.</p> <p>Refer to the <a href="https://kodekloud.com/blog/certified-kubernetes-administrator-exam-security/#:%7E:text=kubectl%20get%20serviceaccount-,Introduction%20to%20TLS,-Transport%20Layer%20Security" rel="nofollow noreferrer">doc</a> Introduction to TLS written by Mumshad Mannambeth for more information.</p> <p>The error &quot;unable to load root certificates&quot; can also appear if the cert was pasted from a browser and lost its CR and LF characters, so make sure the PEM file keeps its original line breaks. If you prefer to inline the values instead, you can base64-encode the PEM (e.g. with <a href="https://www.base64encode.org/" rel="nofollow noreferrer">https://www.base64encode.org/</a>) and put it back into the <code>certificate-authority-data</code> field.</p>
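<p>A minimal sketch of the decode-and-verify step (the file paths are the ones from the question):</p> <pre><code># kubeconfig file paths need decoded PEM, not base64
base64 -d /etc/kubernetes/pki/ca.b64.crt &gt; /etc/kubernetes/pki/ca.crt

# a valid PEM certificate starts with this header
head -1 /etc/kubernetes/pki/ca.crt    # -----BEGIN CERTIFICATE-----

# inspect the decoded certificate
openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -subject -dates
</code></pre> <p>After decoding all three files the same way, point the kubeconfig entries at the decoded <code>.crt</code>/<code>.key</code> outputs instead of the <code>.b64.*</code> files.</p>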
Sai Chandini Routhu
<p>We're getting consistent node scaledowns in GKE Autopilot that makes our application unavailable for a few seconds. We have two replicas and a PDB stating that at least one needs to be available. We haven't set up any anti affinity (I'll be doing that next) and both replicas end up on the same node.</p> <p>According to <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#does-ca-work-with-poddisruptionbudget-in-scale-down" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#does-ca-work-with-poddisruptionbudget-in-scale-down</a> &quot;Before starting to terminate a node, CA makes sure that PodDisruptionBudgets for pods scheduled there allow <strong>for removing at least one replica</strong>. Then it deletes all pods from a node through the pod eviction API&quot; Do I understand correctly that if both replicas are on the same node this condition will be met because technically one replica <em>can</em> be removed? It just ignores the fact that both replicas will be gone in this case?</p> <p>For reference here's our PDB status</p> <pre><code>status:
  conditions:
  - lastTransitionTime: &quot;2023-07-28T16:03:34Z&quot;
    message: &quot;&quot;
    observedGeneration: 1
    reason: SufficientPods
    status: &quot;True&quot;
    type: DisruptionAllowed
  currentHealthy: 2
  desiredHealthy: 1
  disruptionsAllowed: 1
  expectedPods: 2
  observedGeneration: 1
</code></pre>
Dawid Janczak
<p>A PDB gives you a way to keep replicas from being evicted by setting a target size — a minimum availability — for a particular type of pod. <code>If the number of running replicas would drop below the target size, Kubernetes blocks further disruptions to the remaining replicas until the target size is met again</code>.</p> <p>So in your case, with both replicas on one node and <code>minAvailable: 1</code>, evicting one replica is allowed — which is exactly the condition the autoscaler checks. If you set the target size to 2, no voluntary eviction is allowed while only two replicas exist, which protects the node from being drained out from under you. When a disruption occurs, Kubernetes will only gracefully evict pods from the affected node(s) to the extent that the desired number of replicas specified in the PDB is maintained.</p> <p>Refer to this <a href="https://medium.com/geekculture/kubernetes-pod-disruption-budgets-pdb-b74f3dade6c1" rel="nofollow noreferrer">blog</a> by Ink Insight, which clearly explains Kubernetes Pod Disruption Budgets (PDB). From the blog, here's an example of a PDB that sets the target size to 2 for a deployment named &quot;my-deployment&quot; in &quot;my-namespace&quot;:</p> <blockquote> <pre><code>apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-pdb
  namespace: my-namespace
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-deployment
</code></pre> </blockquote> <p>You can also refer to the <a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">official doc</a> on specifying a Disruption Budget for your Application and <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#scheduling-and-disruption" rel="nofollow noreferrer">considering Pod scheduling and disruption</a>.</p> <p><strong>EDIT</strong>: After going through this more, here is my understanding.</p> <p>As you mentioned, the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#does-ca-work-with-poddisruptionbudget-in-scale-down" rel="nofollow noreferrer">doc</a> says: <code>CA makes sure that PodDisruptionBudgets for pods scheduled there allow for removing at least one replica. Then it deletes all pods from a node through the pod eviction API</code>.</p> <p><em><strong>It doesn't actually delete all the pods at once: each eviction goes through the PDB check, so with <code>minAvailable: 1</code> the second eviction is blocked until a healthy replacement for the first pod is running elsewhere. The PDB condition is what prevents the pod from eviction.</strong></em></p> <p>Then, as mentioned in the same doc, <code>If one of the evictions fails, the node is saved and it is not terminated</code> — this is how a strict enough PDB saves the node from disruption.</p> <p>If the PDB condition isn't defined strictly enough, the autoscaler will eventually evict all the pods and remove the node — but it does not simply ignore the fact that both replicas would go; both are never voluntarily evicted at the same time.</p>
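<p>Separately, since you mentioned anti-affinity as the next step: a small sketch of spreading the two replicas across nodes (the label value is a placeholder for your pod label), which avoids the both-replicas-on-one-node situation in the first place:</p> <pre><code>spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
</code></pre> <p>With the replicas on different nodes, draining any single node only ever removes one of them, so the PDB alone is enough to keep the service up.</p>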
Hemanth Kumar
<p>Environment Details:</p> <pre><code>Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
</code></pre> <p>I have installed &amp; configured the Kubernetes cluster, the Master node on the bare metal server &amp; the worker node on windows server 2012 HyperV VM. Both master and worker nodes have the same Kubernetes version ( v1.20.2) &amp; centos7. Successfully joined worker node to master, below is the get nodes status.</p> <pre><code>$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    &lt;none&gt;                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
</code></pre> <p>While creating a deployment on the worker node I am getting the below error message.</p> <p>On worker node, I issued the following command.</p> <pre><code>$ kubectl create deployment nginx-depl --image=nginx

Error message is:
error: failed to create deployment: Post “http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create”: dial tcp: lookup localhost on 8.8.8.8:53: no such host
</code></pre> <p>please help me to resolve this issue as I am not able to understand what is the problem.</p>
Imran Shaikh
<p>You may have to run <code>minikube start</code> first. I'm learning, and between one class and another I forgot to run this command. I hope this helps someone.</p> <p>This worked for me.</p>
Isaac Brian
<p>I am trying to add the boot disk size to the node auto-provisioned Kubernetes cluster as follows:</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;gc-dev-kube-ds0&quot; {
  .
  .
  .
  cluster_autoscaling {
    enabled = true
    resource_limits {
      resource_type = &quot;cpu&quot;
      minimum       = 4
      maximum       = 150
    }
    resource_limits {
      resource_type = &quot;memory&quot;
      minimum       = 4
      maximum       = 600
    }
    resource_limits {
      resource_type = &quot;nvidia-tesla-v100&quot;
      minimum       = 0
      maximum       = 4
    }
  }
  disk_size_gb = 200
}
</code></pre> <p>but I am getting the following error:</p> <pre><code>Error: Unsupported argument

  on kubernetes.tf line 65, in resource &quot;google_container_cluster&quot; &quot;gc-dev-kube-ds0&quot;:
  65:   disk_size_gb = 200

An argument named &quot;disk_size_gb&quot; is not expected here.
</code></pre> <p>Also checked the terraform <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#cluster_autoscaling" rel="nofollow noreferrer">documentation</a> but nothing is mentioned on this.</p>
AKs
<p>The error occurs because the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#disk_size_gb" rel="nofollow noreferrer">disk_size_gb</a> argument must be placed inside the <code>node_config</code> block, like the following:</p> <pre><code>node_config {
  disk_size_gb = 200
}
</code></pre> <p>The Terraform documentation for <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster" rel="nofollow noreferrer">google_container_cluster</a> shows that the argument needs to be nested under that block.</p> <p><a href="https://i.stack.imgur.com/XRZHO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XRZHO.png" alt="enter image description here" /></a></p>
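<p>In context, a sketch of the resource with the argument in its expected place (everything except the moved line is from the question). Note that since this is a node auto-provisioned cluster, <code>node_config</code> only affects the default node pool; recent provider versions also expose an <code>auto_provisioning_defaults</code> block inside <code>cluster_autoscaling</code> (with a <code>disk_size</code> argument) that may be what you want for the auto-provisioned pools — worth checking against your provider version:</p> <pre><code>resource &quot;google_container_cluster&quot; &quot;gc-dev-kube-ds0&quot; {
  # ...
  cluster_autoscaling {
    enabled = true
    # resource_limits blocks as before
  }

  node_config {
    disk_size_gb = 200   # moved here from the top level
  }
}
</code></pre>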
Nadia Espinosa