question (string, 11 to 28.2k chars) | answer (string, 26 to 27.7k chars) | tag (130 classes) | question_id (int64, 935 to 78.4M) | score (int64, 10 to 5.49k)
---|---|---|---|---|
I have a project that was started in Objective-C, and I am trying to import some Swift code into the same class files that I have previously written Objective-C in.
I have consulted the Apple docs on using Swift and Objective-C in the same project, as well as SO questions like this one, but still to no avail: I continue to get the file not found error after putting in #import "NewTestApp-Swift.h" (NewTestApp is the name of the Product and module).
Here is what I have done so far:
In Define Modules, selected YES for the app.
Ensured that the Product Module name did not have any space in it (see screenshot below question)
I have tried using #import "NewTestApp-Swift.h" inside ViewController.m, ViewController.h and AppDelegate.m but none of them has worked.
What else am I doing incorrectly? Thanks for your help.
Screenshot of settings:
Errors that I am presently encountering:
| I was running into the same issue and couldn't get my project to import Swift into Objective-C classes. Using Xcode 6 (this should work for Xcode 6+), I was able to do it this way:
Any class that you need to access in the .h file needs to be a forward declaration like this:
@class MySwiftClass;
In the .m file ONLY, if the code is in the same project (module) then you need to import it with:
#import "ProductModuleName-Swift.h"
Link to the Apple documentation about it.
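For reference, here is a minimal sketch of what the Swift side could look like (the class and method names are hypothetical). The class has to be visible to Objective-C, for example by subclassing NSObject, in order to show up in the generated ProductModuleName-Swift.h:
import Foundation
// Hypothetical Swift class; subclassing NSObject (or exposing members with @objc)
// is what makes it appear in the generated header.
@objc class MySwiftClass: NSObject {
    @objc func greet() -> String {
        return "Hello from Swift"
    }
}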
| Swift | 26,328,034 | 177 |
I have been trying to initialise a string from NSData in Swift.
In the NSString Cocoa documentation, Apple says you have to use this:
init(data data: NSData!, encoding encoding: UInt)
However, Apple did not include any usage example, or say where to put the init.
I am trying to convert the following code from Objective-C to Swift
NSString *string;
string = [[NSString alloc] initWithData: data encoding: NSUTF8StringEncoding];
I have been trying a lot of possible syntaxes such as the following (of course it did not work):
var string:NSString!
string = init(data: fooData,encoding: NSUTF8StringEncoding)
| This is the code you need:
in Swift 3.0:
var dataString = String(data: fooData, encoding: String.Encoding.utf8)
or just
var dataString = String(data: fooData, encoding: .utf8)
Older Swift versions:
in Swift 2.0:
import Foundation
var dataString = String(data: fooData, encoding: NSUTF8StringEncoding)
in Swift 1.0:
var dataString = NSString(data: fooData, encoding:NSUTF8StringEncoding)
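As a quick sanity check, here is a round-trip sketch in current Swift (the sample string is made up):
import Foundation
// Encode a string to Data, then decode it back the same way.
let fooData = "Hello, world".data(using: .utf8)!
if let dataString = String(data: fooData, encoding: .utf8) {
    print(dataString) // prints "Hello, world"
}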
| Swift | 24,023,253 | 177 |
Is there such thing as a startsWith() method or something similar in Swift?
I'm basically trying to check if a certain string starts with another string. I also want it to be case insensitive.
As you might be able to tell, I'm just trying to do a simple search feature but I seem to be failing miserably at this.
This is what I'd like:
typing in "sa" should give me results for "San Antonio", "Santa Fe", etc.
typing in "SA" or "Sa" or even "sA" should also return "San Antonio" or "Santa Fe".
I was using
self.rangeOfString(find, options: NSStringCompareOptions.CaseInsensitiveSearch) != nil
prior to iOS9 and it was working just fine. After upgrading to iOS9, however, it stopped working and now searches are case sensitive.
var city = "San Antonio"
var searchString = "san "
if(city.rangeOfString(searchString, options: NSStringCompareOptions.CaseInsensitiveSearch) != nil){
print("San Antonio starts with san ");
}
var myString = "Just a string with san within it"
if(myString.rangeOfString(searchString, options: NSStringCompareOptions.CaseInsensitiveSearch) != nil){
print("I don't want this string to print bc myString does not start with san ");
}
| Use hasPrefix instead of startsWith.
Example:
"hello dolly".hasPrefix("hello") // This will return true
"hello dolly".hasPrefix("abc") // This will return false
| Swift | 32,664,543 | 176 |
How can I deal with this error without creating additional variable?
func reduceToZero(x:Int) -> Int {
while (x != 0) {
x = x-1 // ERROR: cannot assign to 'let' value 'x'
}
return x
}
I don't want to create additional variable just to store the value of x. Is it even possible to do what I want?
| As stated in other answers, as of Swift 3, placing var before a parameter is no longer allowed. What the other answers do not mention, though, is the ability to declare an inout parameter. Think: passing in a pointer.
func reduceToZero(_ x: inout Int) {
while (x != 0) {
x = x-1
}
}
var a = 3
reduceToZero(&a)
print(a) // will print '0'
This can be particularly useful in recursion.
Apple's inout declaration guidelines can be found here.
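If you would rather keep the original signature and not mutate the caller's variable, another common pattern is to shadow the parameter with a local mutable copy; a sketch:
// Shadow the immutable parameter with a local var of the same name.
func reduceToZero(_ x: Int) -> Int {
    var x = x
    while x != 0 {
        x = x - 1
    }
    return x
}
print(reduceToZero(3)) // prints '0'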
| Swift | 24,077,880 | 176 |
I would like to keep the border at the bottom part only in UITextField.
But I don't know how we can keep it on the bottom side.
Can you please advise me?
| I created a custom text field to make it a reusable component for SwiftUI.
SwiftUI
struct CustomTextField: View {
var placeHolder: String
@Binding var value: String
var lineColor: Color
var width: CGFloat
var body: some View {
VStack {
TextField(self.placeHolder, text: $value)
.padding()
.font(.title)
Rectangle().frame(height: self.width)
.padding(.horizontal, 20).foregroundColor(self.lineColor)
}
}
}
Usage:
@Binding var userName: String
@Binding var password: String
var body: some View {
VStack(alignment: .center) {
CustomTextField(placeHolder: "Username", value: $userName, lineColor: .white, width: 2)
CustomTextField(placeHolder: "Password", value: $password, lineColor: .white, width: 2)
}
}
Swift 5.0
I am using the Visual Format Language (VFL) here, which allows adding a line to any UIControl.
You can create a UIView extension like UIView+Extension.swift
import UIKit
enum LinePosition {
case top
case bottom
}
extension UIView {
func addLine(position: LinePosition, color: UIColor, width: Double) {
let lineView = UIView()
lineView.backgroundColor = color
lineView.translatesAutoresizingMaskIntoConstraints = false // This is important!
self.addSubview(lineView)
let metrics = ["width" : NSNumber(value: width)]
let views = ["lineView" : lineView]
self.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|[lineView]|", options:NSLayoutConstraint.FormatOptions(rawValue: 0), metrics:metrics, views:views))
switch position {
case .top:
self.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|[lineView(width)]", options:NSLayoutConstraint.FormatOptions(rawValue: 0), metrics:metrics, views:views))
break
case .bottom:
self.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:[lineView(width)]|", options:NSLayoutConstraint.FormatOptions(rawValue: 0), metrics:metrics, views:views))
break
}
}
}
Usage:
textField.addLine(position: .bottom, color: .darkGray, width: 0.5)
Objective C:
You can add this helper method to your global helper class (I used a global class method) or in the same view controller (using an instance method).
typedef enum : NSUInteger {
LINE_POSITION_TOP,
LINE_POSITION_BOTTOM
} LINE_POSITION;
- (void) addLine:(UIView *)view atPosition:(LINE_POSITION)position withColor:(UIColor *)color lineWidth:(CGFloat)width {
// Add line
UIView *lineView = [[UIView alloc] init];
[lineView setBackgroundColor:color];
[lineView setTranslatesAutoresizingMaskIntoConstraints:NO];
[view addSubview:lineView];
NSDictionary *metrics = @{@"width" : [NSNumber numberWithFloat:width]};
NSDictionary *views = @{@"lineView" : lineView};
[view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"H:|[lineView]|" options: 0 metrics:metrics views:views]];
switch (position) {
case LINE_POSITION_TOP:
[view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:|-0-[lineView(width)]" options: 0 metrics:metrics views:views]];
break;
case LINE_POSITION_BOTTOM:
[view addConstraints:[NSLayoutConstraint constraintsWithVisualFormat:@"V:[lineView(width)]|" options: 0 metrics:metrics views:views]];
break;
default: break;
}
}
Usage:
[self addLine:self.textField atPosition:LINE_POSITION_TOP withColor:[UIColor darkGrayColor] lineWidth:0.5];
Xamarin code:
var border = new CALayer();
nfloat width = 2;
border.BorderColor = UIColor.Black.CGColor;
border.Frame = new CoreGraphics.CGRect(0, textField.Frame.Size.Height - width, textField.Frame.Size.Width, textField.Frame.Size.Height);
border.BorderWidth = width;
textField.Layer.AddSublayer(border);
textField.Layer.MasksToBounds = true;
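For completeness, the same CALayer idea in Swift; a sketch that mirrors the Xamarin snippet and assumes the text field has already been laid out:
let border = CALayer()
let width: CGFloat = 2
border.borderColor = UIColor.black.cgColor
// Position the layer along the bottom edge of the text field.
border.frame = CGRect(x: 0, y: textField.frame.size.height - width,
                      width: textField.frame.size.width, height: textField.frame.size.height)
border.borderWidth = width
textField.layer.addSublayer(border)
textField.layer.masksToBounds = true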
| Swift | 26,800,963 | 175 |
I am trying to implement a CollectionView.
When I am using Auto Layout, my cells don't change their size, only their alignment.
Instead, I would like to change their sizes to e.g.
var size = CGSize(width: self.view.frame.width/10, height: self.view.frame.width/10)
I tried setting in my CellForItemAtIndexPath
collectionCell.size = size
it didn't work though.
Is there a way to achieve this?
edit:
It seems that the answers only change the width and height of my CollectionView itself. Is a conflict in constraints possible? Any ideas on that?
| Use this method to set a custom cell height and width.
Make sure to add these protocols:
UICollectionViewDelegate
UICollectionViewDataSource
UICollectionViewDelegateFlowLayout
If you are using Swift 5 or Xcode 11 and later, you need to set Estimate Size to None in the storyboard in order to make it work properly. If you don't set that, the code below will not work as expected.
Swift 4 or Later
extension YourViewController: UICollectionViewDelegate {
//Write Delegate Code Here
}
extension YourViewController: UICollectionViewDataSource {
//Write DataSource Code Here
}
extension YourViewController: UICollectionViewDelegateFlowLayout {
func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize {
return CGSize(width: screenWidth, height: screenWidth)
}
}
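For the specific size asked for in the question (one tenth of the view's width), the delegate method above could return, for example:
func collectionView(_ collectionView: UICollectionView,
                    layout collectionViewLayout: UICollectionViewLayout,
                    sizeForItemAt indexPath: IndexPath) -> CGSize {
    // Square cells, each a tenth of the view's width.
    let side = view.frame.width / 10
    return CGSize(width: side, height: side)
}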
Objective-C
@interface YourViewController : UIViewController<UICollectionViewDelegate,UICollectionViewDataSource,UICollectionViewDelegateFlowLayout>
- (CGSize)collectionView:(UICollectionView *)collectionView layout:(UICollectionViewLayout *)collectionViewLayout sizeForItemAtIndexPath:(NSIndexPath *)indexPath
{
return CGSizeMake(CGRectGetWidth(collectionView.frame), (CGRectGetHeight(collectionView.frame)));
}
| Swift | 38,028,013 | 174 |
I am using Swift and I want to be able to load a UIViewController when I rotate to landscape, can anyone point me in the right direction?
I can't find anything online and am a little bit confused by the documentation.
| Here's how I got it working:
In AppDelegate.swift inside the didFinishLaunchingWithOptions function I put:
NotificationCenter.default.addObserver(self, selector: #selector(AppDelegate.rotated), name: UIDevice.orientationDidChangeNotification, object: nil)
and then inside the AppDelegate class I put the following function:
func rotated() {
if UIDeviceOrientationIsLandscape(UIDevice.current.orientation) {
print("Landscape")
}
if UIDeviceOrientationIsPortrait(UIDevice.current.orientation) {
print("Portrait")
}
}
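On newer Swift versions the method referenced by #selector also needs to be marked @objc; a sketch of the same function in Swift 4.2+ style:
@objc func rotated() {
    // UIDeviceOrientation exposes isLandscape / isPortrait directly.
    if UIDevice.current.orientation.isLandscape {
        print("Landscape")
    } else if UIDevice.current.orientation.isPortrait {
        print("Portrait")
    }
}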
| Swift | 25,666,269 | 174 |
Here's my SwiftUI code:
struct ContentView : View {
@State var showingTextField = false
@State var text = ""
var body: some View {
return VStack {
if showingTextField {
TextField($text)
}
Button(action: { self.showingTextField.toggle() }) {
Text ("Show")
}
}
}
}
What I want is when the text field becomes visible, to make the text field become the first responder (i.e. receive focus & have the keyboard pop up).
| Using SwiftUI-Introspect, you can do:
TextField("", text: $value)
.introspectTextField { textField in
textField.becomeFirstResponder()
}
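If you can target iOS 15 or later, SwiftUI has a first-party way to do this with @FocusState; a sketch (the view and property names are made up):
import SwiftUI
struct FocusedTextField: View {
    @State private var text = ""
    @FocusState private var isFocused: Bool
    var body: some View {
        TextField("Enter text", text: $text)
            .focused($isFocused)
            // Give the field focus as soon as it appears, which brings up the keyboard.
            .onAppear { isFocused = true }
    }
}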
| Swift | 56,507,839 | 173 |
I have done some research, but I couldn't find any code example on how to center cells in a UICollectionView horizontally.
Instead of the first cell being like this: X00, I want it to be like this: 0X0. Is there any way to accomplish this?
EDIT:
to visualize what I want:
I need it to look like version B when there is only one element in the CollectionView. When I got more than one element, then it should be like version A but with more elements.
At the moment it looks like Version A when I have only 1 element, and I wonder how I can make it look like B.
Thanks for the help!
| It's not a good idea to use a library if your purpose is only this, i.e. to center-align.
You can instead do this simple calculation in your collectionViewLayout delegate function.
func collectionView(collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, insetForSectionAtIndex section: Int) -> UIEdgeInsets {
let totalCellWidth = CellWidth * CellCount
let totalSpacingWidth = CellSpacing * (CellCount - 1)
let leftInset = (collectionViewWidth - CGFloat(totalCellWidth + totalSpacingWidth)) / 2
let rightInset = leftInset
return UIEdgeInsets(top: 0, left: leftInset, bottom: 0, right: rightInset)
}
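Here is a Swift 4+ sketch of the same calculation with the placeholders filled in; the cell width and spacing values are assumptions and should match your flow layout:
func collectionView(_ collectionView: UICollectionView,
                    layout collectionViewLayout: UICollectionViewLayout,
                    insetForSectionAt section: Int) -> UIEdgeInsets {
    let cellWidth: CGFloat = 100   // assumed cell width
    let cellSpacing: CGFloat = 10  // assumed inter-cell spacing
    let cellCount = CGFloat(collectionView.numberOfItems(inSection: section))
    let totalWidth = cellWidth * cellCount + cellSpacing * max(cellCount - 1, 0)
    // Clamp to zero so the inset never goes negative when cells overflow the width.
    let inset = max((collectionView.bounds.width - totalWidth) / 2, 0)
    return UIEdgeInsets(top: 0, left: inset, bottom: 0, right: inset)
}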
| Swift | 34,267,662 | 173 |
I would like to make a UILabel clickable.
I have tried this, but it doesn't work:
class DetailViewController: UIViewController {
@IBOutlet weak var tripDetails: UILabel!
override func viewDidLoad() {
super.viewDidLoad()
...
let tap = UITapGestureRecognizer(target: self, action: Selector("tapFunction:"))
tripDetails.addGestureRecognizer(tap)
}
func tapFunction(sender:UITapGestureRecognizer) {
print("tap working")
}
}
| Have you tried to set isUserInteractionEnabled to true on the tripDetails label? This should work.
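A sketch of that fix applied to the code in the question; on newer Swift the tap handler also needs @objc and the #selector syntax:
override func viewDidLoad() {
    super.viewDidLoad()
    // Labels have user interaction disabled by default.
    tripDetails.isUserInteractionEnabled = true
    let tap = UITapGestureRecognizer(target: self, action: #selector(tapFunction(sender:)))
    tripDetails.addGestureRecognizer(tap)
}
@objc func tapFunction(sender: UITapGestureRecognizer) {
    print("tap working")
}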
| Swift | 33,658,521 | 173 |
I am trying to build an input screen for the iPhone. The screen has a number of input fields. Most of them on the top of the screen, but two fields are at the bottom.
When the user tries to edit the text on the bottom of the screen, the keyboard will pop up and it will cover the screen.
I found a simple solution to move the screen up when this happens, but the result is that the screen always moves up and the fields on top of the screen move out of reach when the user tries to edit those.
Is there a way to have the screen only move when the bottom fields are edited?
I have used this code I found here:
override func viewDidLoad() {
super.viewDidLoad()
NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("keyboardWillShow:"), name: UIKeyboardWillShowNotification, object: nil)
NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("keyboardWillHide:"), name: UIKeyboardWillHideNotification, object: nil)
}
func keyboardWillShow(sender: NSNotification) {
self.view.frame.origin.y -= 150
}
func keyboardWillHide(sender: NSNotification) {
self.view.frame.origin.y += 150
}
| Your problem is well explained in this document by Apple. The example code on this page (at Listing 4-1) does exactly what you need: it scrolls your view only when the field being edited would end up under the keyboard. You only need to put your controls inside a scroll view.
The only problem is that this is Objective-C and I think you need it in Swift... so here it is:
Declare a variable
var activeField: UITextField?
then add these methods
func registerForKeyboardNotifications()
{
//Adding notifies on keyboard appearing
NSNotificationCenter.defaultCenter().addObserver(self, selector: "keyboardWasShown:", name: UIKeyboardWillShowNotification, object: nil)
NSNotificationCenter.defaultCenter().addObserver(self, selector: "keyboardWillBeHidden:", name: UIKeyboardWillHideNotification, object: nil)
}
func deregisterFromKeyboardNotifications()
{
//Removing notifies on keyboard appearing
NSNotificationCenter.defaultCenter().removeObserver(self, name: UIKeyboardWillShowNotification, object: nil)
NSNotificationCenter.defaultCenter().removeObserver(self, name: UIKeyboardWillHideNotification, object: nil)
}
func keyboardWasShown(notification: NSNotification)
{
//Need to calculate keyboard exact size due to Apple suggestions
self.scrollView.scrollEnabled = true
var info : NSDictionary = notification.userInfo!
var keyboardSize = (info[UIKeyboardFrameBeginUserInfoKey] as? NSValue)?.CGRectValue().size
var contentInsets : UIEdgeInsets = UIEdgeInsetsMake(0.0, 0.0, keyboardSize!.height, 0.0)
self.scrollView.contentInset = contentInsets
self.scrollView.scrollIndicatorInsets = contentInsets
var aRect : CGRect = self.view.frame
aRect.size.height -= keyboardSize!.height
if let activeFieldPresent = activeField
{
if (!CGRectContainsPoint(aRect, activeField!.frame.origin))
{
self.scrollView.scrollRectToVisible(activeField!.frame, animated: true)
}
}
}
func keyboardWillBeHidden(notification: NSNotification)
{
//Once keyboard disappears, restore original positions
var info : NSDictionary = notification.userInfo!
var keyboardSize = (info[UIKeyboardFrameBeginUserInfoKey] as? NSValue)?.CGRectValue().size
var contentInsets : UIEdgeInsets = UIEdgeInsetsMake(0.0, 0.0, -keyboardSize!.height, 0.0)
self.scrollView.contentInset = contentInsets
self.scrollView.scrollIndicatorInsets = contentInsets
self.view.endEditing(true)
self.scrollView.scrollEnabled = false
}
func textFieldDidBeginEditing(textField: UITextField!)
{
activeField = textField
}
func textFieldDidEndEditing(textField: UITextField!)
{
activeField = nil
}
Be sure to declare your ViewController as UITextFieldDelegate and set correct delegates in your initialization methods:
ex:
self.you_text_field.delegate = self
And remember to call registerForKeyboardNotifications when the view loads (e.g. in viewDidLoad) and deregisterFromKeyboardNotifications on exit.
Edit/Update: Swift 4.2 Syntax
func registerForKeyboardNotifications(){
//Adding notifies on keyboard appearing
NotificationCenter.default.addObserver(self, selector: #selector(keyboardWasShown(notification:)), name: UIResponder.keyboardWillShowNotification, object: nil)
NotificationCenter.default.addObserver(self, selector: #selector(keyboardWillBeHidden(notification:)), name: UIResponder.keyboardWillHideNotification, object: nil)
}
func deregisterFromKeyboardNotifications(){
//Removing notifies on keyboard appearing
NotificationCenter.default.removeObserver(self, name: UIResponder.keyboardWillShowNotification, object: nil)
NotificationCenter.default.removeObserver(self, name: UIResponder.keyboardWillHideNotification, object: nil)
}
@objc func keyboardWasShown(notification: NSNotification){
//Need to calculate keyboard exact size due to Apple suggestions
self.scrollView.isScrollEnabled = true
var info = notification.userInfo!
let keyboardSize = (info[UIResponder.keyboardFrameBeginUserInfoKey] as? NSValue)?.cgRectValue.size
let contentInsets : UIEdgeInsets = UIEdgeInsets(top: 0.0, left: 0.0, bottom: keyboardSize!.height, right: 0.0)
self.scrollView.contentInset = contentInsets
self.scrollView.scrollIndicatorInsets = contentInsets
var aRect : CGRect = self.view.frame
aRect.size.height -= keyboardSize!.height
if let activeField = self.activeField {
if (!aRect.contains(activeField.frame.origin)){
self.scrollView.scrollRectToVisible(activeField.frame, animated: true)
}
}
}
@objc func keyboardWillBeHidden(notification: NSNotification){
//Once keyboard disappears, restore original positions
var info = notification.userInfo!
let keyboardSize = (info[UIResponder.keyboardFrameBeginUserInfoKey] as? NSValue)?.cgRectValue.size
let contentInsets : UIEdgeInsets = UIEdgeInsets(top: 0.0, left: 0.0, bottom: -keyboardSize!.height, right: 0.0)
self.scrollView.contentInset = contentInsets
self.scrollView.scrollIndicatorInsets = contentInsets
self.view.endEditing(true)
self.scrollView.isScrollEnabled = false
}
func textFieldDidBeginEditing(_ textField: UITextField){
activeField = textField
}
func textFieldDidEndEditing(_ textField: UITextField){
activeField = nil
}
| Swift | 28,813,339 | 173 |
I have a short mp4 video file that I've added to my current Xcode6 Beta project.
I want to play the video in my app.
After hours of searching, I can't find anything remotely helpful. Is there a way to accomplish this with Swift or do you have to use Objective-C?
Can I get pointed in the right direction? I can't be the only one wondering this.
| Sure you can use Swift!
1. Adding the video file
Add the video (lets call it video.m4v) to your Xcode project
2. Checking your video is in the bundle
Open the Project Navigator cmd + 1
Then select your project root > your Target > Build Phases > Copy Bundle Resources.
Your video MUST be here. If it's not, then you should add it using the plus button
3. Code
Open your View Controller and write this code.
import UIKit
import AVKit
import AVFoundation
class ViewController: UIViewController {
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
playVideo()
}
private func playVideo() {
guard let path = Bundle.main.path(forResource: "video", ofType:"m4v") else {
debugPrint("video.m4v not found")
return
}
let player = AVPlayer(url: URL(fileURLWithPath: path))
let playerController = AVPlayerViewController()
playerController.player = player
present(playerController, animated: true) {
player.play()
}
}
}
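A slightly more direct variant of the same playVideo() using Bundle.main.url(forResource:withExtension:); just a sketch:
private func playVideo() {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "m4v") else {
        debugPrint("video.m4v not found")
        return
    }
    let playerController = AVPlayerViewController()
    playerController.player = AVPlayer(url: url)
    present(playerController, animated: true) {
        playerController.player?.play()
    }
}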
| Swift | 25,348,877 | 173 |
I'm using Xcode 6 Beta 4. I have this weird situation where I cannot figure out how to appropriately test for optionals.
If I have an optional xyz, is the correct way to test:
if (xyz) // Do something
or
if (xyz != nil) // Do something
The documents say to do it the first way, but I've found that sometimes, the second way is required, and doesn't generate a compiler error, but other times, the second way generates a compiler error.
My specific example is using the GData XML parser bridged to swift:
let xml = GDataXMLDocument(
XMLString: responseBody,
options: 0,
error: &xmlError);
if (xmlError != nil)
Here, if I just did:
if xmlError
it would always return true. However, if I do:
if (xmlError != nil)
then it works (as how it works in Objective-C).
Is there something with the GData XML and the way it treats optionals that I am missing?
| In Xcode Beta 5, they no longer let you do:
var xyz : NSString?
if xyz {
// Do something using `xyz`.
}
This produces an error:
does not conform to protocol 'BooleanType.Protocol'
You have to use one of these forms:
if xyz != nil {
// Do something using `xyz`.
}
if let xy = xyz {
// Do something using `xy`.
}
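As a side note, on Swift 5.7 and later the binding can be written in a shorthand form where the bound name matches the optional; a sketch:
if let xyz {
    // Do something using `xyz`.
}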
| Swift | 25,097,727 | 173 |
While using Swift 4 and the Codable protocol, I ran into the following problem: it looks like there is no way to allow JSONDecoder to skip elements in an array.
For example, I have the following JSON:
[
{
"name": "Banana",
"points": 200,
"description": "A banana grown in Ecuador."
},
{
"name": "Orange"
}
]
And a Codable struct:
struct GroceryProduct: Codable {
var name: String
var points: Int
var description: String?
}
When decoding this json
let decoder = JSONDecoder()
let products = try decoder.decode([GroceryProduct].self, from: json)
The resulting products array is empty. Which is to be expected, due to the fact that the second object in the JSON has no "points" key, while points is not optional in the GroceryProduct struct.
The question is: how can I allow JSONDecoder to "skip" an invalid object?
| One option is to use a wrapper type that attempts to decode a given value; storing nil if unsuccessful:
struct FailableDecodable<Base : Decodable> : Decodable {
let base: Base?
init(from decoder: Decoder) throws {
let container = try decoder.singleValueContainer()
self.base = try? container.decode(Base.self)
}
}
We can then decode an array of these, with your GroceryProduct filling in the Base placeholder:
import Foundation
let json = """
[
{
"name": "Banana",
"points": 200,
"description": "A banana grown in Ecuador."
},
{
"name": "Orange"
}
]
""".data(using: .utf8)!
struct GroceryProduct : Codable {
var name: String
var points: Int
var description: String?
}
let products = try JSONDecoder()
.decode([FailableDecodable<GroceryProduct>].self, from: json)
.compactMap { $0.base } // .flatMap in Swift 4.0
print(products)
// [
// GroceryProduct(
// name: "Banana", points: 200,
// description: Optional("A banana grown in Ecuador.")
// )
// ]
We're then using .compactMap { $0.base } to filter out nil elements (those that threw an error on decoding).
This will create an intermediate array of [FailableDecodable<GroceryProduct>], which shouldn't be an issue; however if you wish to avoid it, you could always create another wrapper type that decodes and unwraps each element from an unkeyed container:
struct FailableCodableArray<Element : Codable> : Codable {
var elements: [Element]
init(from decoder: Decoder) throws {
var container = try decoder.unkeyedContainer()
var elements = [Element]()
if let count = container.count {
elements.reserveCapacity(count)
}
while !container.isAtEnd {
if let element = try container
.decode(FailableDecodable<Element>.self).base {
elements.append(element)
}
}
self.elements = elements
}
func encode(to encoder: Encoder) throws {
var container = encoder.singleValueContainer()
try container.encode(elements)
}
}
You would then decode as:
let products = try JSONDecoder()
.decode(FailableCodableArray<GroceryProduct>.self, from: json)
.elements
print(products)
// [
// GroceryProduct(
// name: "Banana", points: 200,
// description: Optional("A banana grown in Ecuador.")
// )
// ]
| Swift | 46,344,963 | 172 |
I have a struct that implements Swift 4’s Codable. Is there a simple built-in way to encode that struct into a dictionary?
let struct = Foo(a: 1, b: 2)
let dict = something(struct)
// now dict is ["a": 1, "b": 2]
| If you don't mind a bit of shifting of data around you could use something like this:
extension Encodable {
func asDictionary() throws -> [String: Any] {
let data = try JSONEncoder().encode(self)
guard let dictionary = try JSONSerialization.jsonObject(with: data, options: .allowFragments) as? [String: Any] else {
throw NSError()
}
return dictionary
}
}
Or an optional variant
extension Encodable {
var dictionary: [String: Any]? {
guard let data = try? JSONEncoder().encode(self) else { return nil }
return (try? JSONSerialization.jsonObject(with: data, options: .allowFragments)).flatMap { $0 as? [String: Any] }
}
}
Assuming Foo conforms to Codable (or really just Encodable), you can do this. Note that struct is a reserved word in Swift, so the instance needs a different name:
let foo = Foo(a: 1, b: 2)
let dict = try foo.asDictionary()
let optionalDict = foo.dictionary
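For completeness, a sketch of the Foo type assumed above (fields taken from the question) together with the printed result:
struct Foo: Codable {
    let a: Int
    let b: Int
}
let foo = Foo(a: 1, b: 2)
// Uses the Encodable extension defined above.
if let dict = try? foo.asDictionary() {
    print(dict) // ["a": 1, "b": 2]
}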
If you want to go the other way(init(any)), take a look at this Init an object conforming to Codable with a dictionary/array
| Swift | 45,209,743 | 172 |
I am doing a proof of concept to demonstrate how we might implement 3scale in our stack. In one example I want to do some POST request body manipulation to create an API façade that maps what might be a legacy API format to a new internal one. Eg. change something like
{ "foo" : "bar" , "deprecated" : true }
into
{ "FOO" : "bar" }
The Lua module docs for content_by_lua, which seems like the appropriate method say
Do not use this directive and other content handler directives in the same location. For example, this directive and the proxy_pass directive should not be used in the same location.
My understanding is that the content_by_lua is a content handler like proxy_pass, only one of which can be used per location.
I don't think there's any way to remove proxy_pass as that's the basis of how the proxying works, so is it possible to capture the request in a separate location, use content_by_lua, then pass to the location implementing proxy_pass, or is there a different method like rewrite_by_lua which is more appropriate?
If it helps anyone else, I added the following function (my first bit of Lua) which removes the user_key parameter which 3scale requires for authorization but is invalid for our API if forwarded on:
function remove_user_key()
ngx.req.read_body()
-- log the original body so we can compare to the new one later
local oldbody = ngx.req.get_body_data()
log(oldbody)
-- grab the POST parameters as a table
local params = ngx.req.get_post_args()
-- build up the new JSON string
local newbody = "{"
for k,v in pairs(params) do
-- add all the params we want to keep
if k ~= "user_key" then
log("adding"..k.." as "..v)
newbody = newbody..'"'..k..'":"'..v..'",'
else
log("not adding user_key")
end
end
--remove the last trailing comma before closing this off
newbody = string.sub(newbody, 0, #newbody-1)
newbody = newbody.."}"
ngx.req.set_body_data(newbody)
log(newbody)
end
if ngx.req.get_method() == "POST" then
remove_user_key()
end
| I suggest you use access_by_lua
in nginx.conf
location / {
#host and port to fastcgi server
default_type text/html;
set $URL "http://$http_host$request_uri";
access_by_lua_file /home/lua/cache.lua;
proxy_pass http://$target;
-------
---------
in cache.lua file you can do something like :
if ngx.req.get_method() == "POST" then
-- check if request method is POST
-- implement your logic
return
end
| 3Scale | 22,788,236 | 19 |
Edit:
The answer is so clear. One may use the flag --user root when entering the container.
docker exec -it --user root mycontainername bash or sh
I just downloaded this official docker hub's 1.5.0-alpine image for a service (Kong API Gateway) and now I can not run apk commands to install nano, for instance.
Before, I just had to enter the container
docker exec -it kong sh
or
docker-compose exec kong sh
and I was able to run commands like apk update or apk add nano, for instance.
But now I get these errors
$ apk update
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
$ apk add nano
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
I also tried to run sudo and su... but I got
$ su
su: must be suid to work properly
$ su root
su: must be suid to work properly
$ suid
sh: suid: not found
Will I really need to build my own custom image? I was using the official one and it was working fine.
| You can run a command within the container as root using --user root. To get a shell:
docker exec -it --user root kong sh
| Kong | 61,683,448 | 35 |
I have a Kong API Gateway container and a postgres container and I need to check whether postgres has started up and ready from the Kong container before running the migrations. I was thinking of installing the postgres client utilities into a custom image based on the official Kong image using RUN yum install postgresql -y && yum clean all in my Dockerfile and using either psql or pg_isready to achieve this. I've created a postgres user called polling with an empty password specifically for checking the status of the server by these two utilities. Neither of them work.
I tried to execute these commands from the custom Kong image:
psql. The command psql -h postgres -U polling -w -c '\l' fails with the error psql: fe_sendauth: no password supplied. But the user has no password. What am I doing wrong? The full shell script checking whether the server is ready using psql is described here.
pg_isready. I don't get how to install this utility separately into a custom image based on the official Kong image which in turn based on the centos:7 image, the postgresql package doesn't include pg_isready. Only these utilities are installed and can be found in /usr/bin: pg_config, pg_dump, pg_dumpall, pg_restore, psql. How to install pg_isready? I don't want to have the full server installation in the Kong image.
| Here is a shell one-liner using the pg_isready tool provided by PostgreSQL.
To call outside docker:
DOCKER_CONTAINER_NAME="mypgcontainer"
timeout 90s bash -c "until docker exec $DOCKER_CONTAINER_NAME pg_isready ; do sleep 5 ; done"
Based on a post from Jeroma Belleman.
| Kong | 46,516,584 | 34 |
Afternoon y'all,
Just looking for someone to double check my work. Is the below an effective way to secure microservices?
Premise
Breaking up our monolithic application and monolithic Partner API into microservices oriented around specific business functions. They'll most likely be small expressjs applications running in a docker container, on elastic beanstalk, who knows. They'll live somewhere :)
I'm looking into either standing up Kong as my API Gateway or using AWS API Gateway to encapsulate the details of my microservices. Also, it just feels good.
The JWT plugin for Kong will verify the signature of the JWT and then pass the customer_id along in the header to the microservice. I should also mention that we have 3rd party developers that will be partaking in the integration fun as well. Here's a basic sketch of what I see happening:
Implementation
Generate "consumers" for each platform and 3rd party developer we have. (Web app, mobile app, and the current integration partners we have. Note: I'm not looking to create consumers for every user that logs in. While certainly more secure, this adds a lot of work. Also, if you figure out how to get the secret out of my API Gateway I clearly have other issues)
Let Kong verify the request for me. Kind of like a bouncer at the door, there's no authorization, just authentication.
I don't need to know that the token is valid once it gets to the microservice, I can just use some middleware to decode it and use custom logic to decide if this user really should be doing whatever is they're trying to do.
Extra Stuff
There's a nice access control plugin for Kong. Our application and mobile app would run with "God" privileges, but I could definitely lock down the developers to specific routes and methods.
Revoking 3rd party access will be easy, revoking end users access won't be so simple unless I'm willing to invalidate all JWTs at once by generating a new secret. Perhaps I can limit token time to 10 minutes or so and make our applications check if they're expired, get a new token, and then get on with the original request. This way I can "flag" them in the database or something and not let the JWT be generated.
SSL used everywhere, JWT is stored in an SSL only cookie in the web browser and there's no sensitive information stored in any of the claims.
Thanks guys.
| I recently worked on a solution to this very question and premise, refactoring a large monolith into multiple services in an AWS architecture.
There is no right, wrong or definitive answer to this question.
However, we did implement a solution very similar to the one described in the question above.
I hope this answer can deliver a good sense of direction for someone who's looking at this for the first time.
This is how we went about it...
What do we need from an API gateway?
Highly available
Secure
Performant
Authoritative
Scalable
Solution 1: AWS API Gateway
pros
Highly available managed solution.
Don't need to worry about scalability.
Supports SSL and custom domains.
Authoritative through lambda and IAM.
Plays nice with other AWS services.
Supports API versioning out of the box.
Easy monitoring with CloudWatch.
cons
Traffic can't be routed directly into an internal network (private VPC segment), meaning an additional gateway would be required.
Edit:
Amazon API Gateway Supports Endpoint Integrations with Private VPCs. Thanks @Red for mentioning this.
Slow, our benchmark showed each request through API Gateway added 100-150 ms latency.
Solution 2: Kong
pros
Scalable, but needs to implemented and managed on our end.
Supports SSL and custom domains.
Authoritative through plugins, with solutions for JWT and OAUTH2 already packaged.
RESTful API for easy integration with our authentication server.
Extensible, in case we need some custom logic.
Fast, our benchmark showed each request through Kong added 20-30 ms latency.
cons
Requires management on our end (upgrades, deployment, maintenance).
In order to achieve HA, it requires an additional endpoint, in the form of a load balancer, to route traffic to the actual GW(s).
Implementation
We decided to go with Kong.
The major issue with the hosted solution was the inability to route traffic to our private network, where we also host a private DNS zone.
Additionally, the extensible nature of Kong allowed us to create custom plugins with logic that is relevant to our solutions.
We work with an ALB to round robin between multiple instances of Kong in different AZs in order to achieve redundancy and high availability.
The API configuration is saved on a Postgres RDS which is also internal and multi AZ.
Flow
Client authenticates against our authentication server. The authentication server is a micro service behind the Kong GW with a publicly exposed upstream.
Authentication server creates a consumer with a JWT for the individual client.
Authentication server replies with the JWT.
Client requests access from an API with the JWT, traffic routed via Kong.
Kong verifies the JWT and routes the request to the micro service with information about the consumer.
Micro service responds to the client.
Other
Revoking user access is as easy as deleting the token.
No sensitive information is stored in the JWT claims.
All services know about each other through a private DNS zone.
Schema:
| Kong | 34,640,611 | 23 |
I have been developing microservices (Spring Cloud) for a while (~2 years) and heavily used Netflix Zuul. While it offers a lot of functionalities and great features, my developer mind wandered towards knowing about the alternatives and came to know about Tyk and Kong.
Reading from the individual documentation and blogs, I understood more or less both offer the similar features. I would like to know a comprehensive comparison between the two and any real-world examples where you have implemented will be a great help understand.
| According to CI/CD both can comply with Infrastructure-as-Code approach, so i do not see difference in terms on Deployment Pipeline practices.
Tyk's API function set is larger than Kong's, which may matter if your business relies on the API (e.g. you need to integrate with some billing system, ...)
https://tyk.io/docs/tyk-rest-api/api-definition-objects/
On the other side, Kong's API has limited functions and its terminology, IMHO, is not easy to understand:
https://galileo.gelato.io/docs/versions/2.0.0/
Kong uses Galileo reporting tool for DashBoard/UI, tyk uses its own DashBoard including not only Reporting functions, but also almost all Management Functions if you wanna go with the UI
If you need to transform your legacy APIs to external world, tyk has Transform function which can be used to transform XML<->JSON<->YAML<->Custom
On Tyk you can code extensions not only in Lua, but also in Go, Java, Python, .NET, JavaScript, ...
If you have DR needs, tyk has Multi-Datacenter option which is targeted for Enterprise level architecture including a Disaster Site
If you need performance, Tyk is written in Go. (We benchmarked Tyk at around 3000 req/sec where Kong did around 2500 req/sec on the same VM with the same API call patterns.)
So based on your needs, if any of your needs matches with one of the above, you can consider tyk, if not you can consider whichever you like more...
| Kong | 46,769,814 | 18 |
I am stuck in choosing One API gateway from the three API gateways mentioned below:
KrakenD (https://www.krakend.io/)
Kong (https://konghq.com/kong/)
Spring Cloud Gateway (https://cloud.spring.io/spring-cloud-gateway/reference/html/)
My requirements are:
Good performance and must have majority of the API gateway features.
Supports aggregating data from two different microservices' APIs.
All the three of them, looks good from the feature list and the performance wise.
I am thinking of relaxing the second requirement, as I am not sure, whether that is a good practice or not.
| API Gateway is a concept that is used in all kinds of products; I really think the industry should start sub-categorizing these products, as most of them are completely different from each other.
I'll try to summarize here the main highlights according to your requirements.
Both Kong and KrakenD offer the "majority" of API gateway functionalities. Although the word is fuzzy, at least all of them cover stuff like routing, rate limiting, authorization, and such.
Kong
Kong is basically an Nginx proxy that adds a lot of functionality on top of it using Lua.
When using Kong your endpoints have a 1:1 relationship with your backends. Meaning that you declare an endpoint in Kong that exposes data from one backend, and does the magic in the middle (authorization, limiting, etc). This magic is the essence of Kong and is based on Lua plugins (unfortunately, these are not written in C as Nginx is).
If you want to aggregate data from several backends into one single endpoint, Kong does not fit in your scenario.
Finally, Kong is stateful (it's impressive how they try to sell it the other way around, but this is out of the scope of this question). The configuration lives inside a database, and changes to the configuration are through an API that ends up modifying its internal Postgres or equivalent.
Performance is also inevitably linked to the existence of this database (and Lua), and going multi-region can be a real pain.
Kong functionality can be extended with Lua code.
In summary:
Proxy with cross cutting concerns
Nodes require coordination and synchronization
Mutable configuration
The database is the source of truth
More pieces, more complexity
Multi-region lag
Requires powerful hardware to run
Customizations in Lua
KrakenD
KrakenD is a service written from the ground up using Go, taking advantage of the language features for concurrency, speed, and small footprint. In terms of performance, this is the winning racehorse.
KrakenD's natural positioning is as a Gateway with aggregation. It's meant to connect lots of backend services to a single endpoint. It's mostly adopted by companies for feeding Mobile applications, Webapps and other clients. It implements the pattern Backend for Frontend, allowing you to define exactly and with a declarative configuration how is the API that you want to expose to the clients. You can choose which fields are taken from responses, aggregate them, validate them, transform them, etc.
KrakenD is stateless, you version your API the same way you do with the rest of the code, using git. And you deploy it in the same way you do with your application (e.g: a CI/CD pipeline that pushes a new container with the new configuration and is rolled out). As everything is on the config, there is no need to have a central database, neither nodes need communication with each other.
As per the customizations, with KrakenD you can create middlewares, plugins or just scripting in several languages: Go, Lua, Common Expression Language (CEL) -sort of JS- and Martian DSL.
In summary:
On the-fly API creation using upstream services, with cross-cutting concerns (api gateway).
Not a proxy, although it can be used as one.
No node coordination
No synchronization needed
Zero complexity (docker container with a configuration file)
No challenges for Multi-region
Declarative configuration
Immutable infrastructure
Runs on micro and small machines in production without issues.
Customizations in Go, Lua, CEL, and Martian DSL
Spring Cloud Gateway
(As well as Zuul) is used mostly by Java developers that want to stick in the JVM space. I am less familiar with this one, but it's design is also for proxying to existing services, adds also the cross-concerns of the API gateway.
I see it more as a framework that you use to deliver your API. With this product you need to code the transformations yourself in Java. The included gateway functionalities are declarative as well.
--
I am hoping this sheds some light
| Kong | 60,050,154 | 15 |
I am currently playing around with the Kong API Gateway and I would like to use it to validate the authentication of users at the gateway and restrict access to services if the user is not logged in properly.
I have an authentication service which issues JWTs whenever a user logs in.
I would now like to share the JWT secret with Kong and use it for validation of the issued JWTs to secure services which need proper authentication.
I had a look at this plugin: https://getkong.org/plugins/jwt/
But it seems that this plugin works a bit differently from what I would like to achieve. Why do I have to create consumers? I would like to have only one user database at my authentication service to avoid the need for synchronisation. It seems that the approach of this plugin is designed for giving 3rd party stakeholders access to my API.
Any hint would be highly appreciated.
| The answer given by Riley is sort of correct in implementation, but that is not the intended use of a consumer in Kong.
A consumer in Kong is the application that is using the API. So, unless you have multiple vendors using your app/web service, I suggest you create a single consumer.
You can create multiple key and secret pairs (JWT credentials) for that consumer. Create a JWT for a user by using the user's key and secret. Store this key and secret in your current database along with your user ID and other details. Create your JWT using these and return the JWT to the user.
Anything else you want to append as a claim can be added to the JWT while you are creating it. You can create a check for these claims in Kong. So, when you get a call to any of your APIs along with these JWT Kong will check the validity of the JWT(along with all the claims) and only then allow the access to the API.
| Kong | 36,060,029 | 10 |
I have an a service that accepts POSTs with base64 encoded files in the body. I'm currently getting
Error: 413 Request Entity Too Large when I POST anything larger than 1MB, otherwise it works fine.
My setup has kong proxying to the service. I have the following annotation for the proxy's ingress installed via the stable kong helm chart :
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-redirect: “true”
ingress.kubernetes.io/proxy-body-size: 50m
I also added this to the kong env values:
client_max_body_size: 0
My understanding is this should update the nginx.conf
Kong has an nginx-ingress sitting in front of it which I installed with the stable helm chart. For the ingress-controller I have set:
--set controller.config.proxy-body-size: "50m"
However none of these settings are working. Looking through the ingress-controller's pod logs I see:
2019/08/02 15:01:34 [warn] 42#42: *810139 a client request body is buffered to a temporary file /tmp/client-body/0000000014, client: 1X.XXX.X0.X, server: example.com, request: "POST /endpoint HTTP/1.1", host: "example.com"
And the corresponding log in the kong pod:
2019/08/02 15:01:39 [warn] 33#0: *1147388 a client request body is buffered to a temporary file /usr/local/kong/client_body_temp/0000000017, client: XX.XXX.XXX.XX, server: kong, request: "POST /ENDPOINT HTTP/1.1", host: "example.com"
10.120.20.17 - - [02/Aug/2019:15:01:39 +0000] "POST /endpoint HTTP/1.1" 413 794 "-" "PostmanRuntime/7.15.2"
Is there another setting I am missing, or am I going about this wrong? How can I get this to work as expected?
If I just POST to the pod directly using its IP, with no ingress controllers involved, I get the same 413 error. Does Kubernetes have a default ingress somewhere that also needs to be changed?
| The annotation seemed to work fine. The limitation I was running into was because the code I was testing ran in a Kubeless function. Kubeless functions use Bottle, and the request size limit was in Bottle. I increased that in a custom python3.7 image for Kubeless and all worked fine.
| Kong | 57,329,247 | 10 |
What is the difference between a Role and a ClusterRole?
When should I create one or the other?
I can't quite figure out the difference between them.
| From the documentation:
A Role can only be used to grant access to resources within a single namespace.
Example: List all pods in a namespace
A ClusterRole can be used to grant the same permissions as a Role, but
because they are cluster-scoped, they can also be used to grant access
to:
cluster-scoped resources (like nodes)
non-resource endpoints (like “/healthz”)
namespaced resources (like pods) across all namespaces (needed to run kubectl get pods --all-namespaces, for example)
Examples: List all pods in all namespaces. Get a list of all nodes and their public IP.
| Kubernetes | 51,647,643 | 56 |
Say I have a service that isn't hosted on Kubernetes. I also have an ingress controller and cert-manager set up on my kubernetes cluster.
Because it's so much simpler and easy to use kubernetes ingress to control access to services, I wanted to have a kubernetes ingress that points to a non-kubernetes service.
For example, I have a service that's hosted at https://10.0.40.1:5678 (ssl required, but self signed certificate) and want to access at service.example.com.
| You can do it by manual creation of Service and Endpoint objects for your external server.
Objects will looks like that:
apiVersion: v1
kind: Service
metadata:
name: external-ip
spec:
ports:
- name: app
port: 80
protocol: TCP
targetPort: 5678
clusterIP: None
type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
name: external-ip
subsets:
- addresses:
- ip: 10.0.40.1
ports:
- name: app
port: 5678
protocol: TCP
Also, it is possible to use an EndpointSlice object instead of Endpoints.
Then, you can create an Ingress object which will point to Service external-ip with port 80:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: external-service
spec:
rules:
- host: service.example.com
http:
paths:
- backend:
serviceName: external-ip
servicePort: 80
path: /
| Kubernetes | 57,764,237 | 55 |
I’ve created a Cronjob in kubernetes with schedule(8 * * * *), with job’s backoffLimit defaulting to 6 and pod’s RestartPolicy to Never, the pods are deliberately configured to FAIL. As I understand, (for podSpec with restartPolicy : Never) Job controller will try to create backoffLimit number of pods and then it marks the job as Failed, so, I expected that there would be 6 pods in Error state.
This is the actual Job’s status:
status:
conditions:
- lastProbeTime: 2019-02-20T05:11:58Z
lastTransitionTime: 2019-02-20T05:11:58Z
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 5
Why were there only 5 failed pods instead of 6? Or is my understanding of backoffLimit incorrect?
| In short: You might not be seeing all created pods because the schedule period in the CronJob is too short.
As described in documentation:
Failed Pods associated with the Job are recreated by the Job
controller with an exponential back-off delay (10s, 20s, 40s …) capped
at six minutes. The back-off count is reset if no new failed Pods
appear before the Job’s next status check.
If a new job is scheduled before the Job controller has a chance to recreate a pod (keeping in mind the delay after the previous failure), the Job controller starts counting from one again.
I reproduced your issue in GKE using following .yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hellocron
spec:
schedule: "*/3 * * * *" #Runs every 3 minutes
jobTemplate:
spec:
template:
spec:
containers:
- name: hellocron
image: busybox
args:
- /bin/cat
- /etc/os
restartPolicy: Never
backoffLimit: 6
suspend: false
This job will fail because file /etc/os doesn't exist.
And here is an output of kubectl describe for one of the jobs:
Name: hellocron-1551194280
Namespace: default
Selector: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
Labels: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
job-name=hellocron-1551194280
Annotations: <none>
Controlled By: CronJob/hellocron
Parallelism: 1
Completions: 1
Start Time: Tue, 26 Feb 2019 16:18:07 +0100
Pods Statuses: 0 Running / 0 Succeeded / 6 Failed
Pod Template:
Labels: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
job-name=hellocron-1551194280
Containers:
hellocron:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/cat
/etc/os
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-4lf6h
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-85khk
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-wrktb
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-6942s
Normal SuccessfulCreate 25m job-controller Created pod: hellocron-1551194280-662zv
Normal SuccessfulCreate 22m job-controller Created pod: hellocron-1551194280-6c6rh
Warning BackoffLimitExceeded 17m job-controller Job has reached the specified backoff limit
Note the delay between creation of pods hellocron-1551194280-662zv and hellocron-1551194280-6c6rh.
| Kubernetes | 54,825,671 | 55 |
How can I access environment variables in Vue, that are passed to the container at runtime and not during the build?
Stack is as follows:
VueCLI 3.0.5
Docker
Kubernetes
There are suggested solutions on stackoverflow and elsewhere to use .env file to pass variables (and using mode) but that's at build-time and gets baked into the docker image.
I would like to pass the variable into Vue at run-time as follows:
Create Kubernetes ConfigMap (I get this right)
Pass ConfigMap value into K8s pod env variable when running deployment yaml file (I get this right)
Read from env variable created above eg. VUE_APP_MyURL and do something with that value in my Vue App (I DO NOT get this right)
I've tried the following in helloworld.vue:
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
export default {
data() {
return {
displayURL: ""
}
},
mounted() {
console.log("check 1")
this.displayURL=process.env.VUE_APP_ENV_MyURL
console.log(process.env.VUE_APP_ENV_MyURL)
console.log("check 3")
}
}
</script>
I get back "undefined" in the console log and nothing showing on the helloworld page.
I've also tried passing the value into a vue.config file and reading it from there. Same "undefined" result in console.log
<template>
<div>{{displayURL}}
<p>Hello World</p>
</div>
</template>
<script>
const vueconfig = require('../../vue.config');
export default {
data() {
return {
displayURL: ""
}
},
mounted() {
console.log("check 1")
this.displayURL=vueconfig.VUE_APP_MyURL
console.log(vueconfig.VUE_APP_MyURL)
console.log("check 3")
}
}
</script>
With vue.config looking like this:
module.exports = {
VUE_APP_MyURL: process.env.VUE_APP_ENV_MyURL
}
If I hardcode a value into VUE_APP_MyURL in the vue.config file it shows successfully on the helloworld page.
VUE_APP_ENV_MyURL is successfully populated with the correct value when I interrogate it: kubectl describe pod
process.env.VUE_APP_MyURL doesn't seem to successfully retrieve the value.
For what it is worth... I am able to use process.env.VUE_APP_3rdURL successfully to pass values into a Node.js app at runtime.
| Create a file config.js with your desired configuration. We will use that later to create a config map that we deploy to Kubernetes. Put it into your Vue.js project where your other JavaScript files are. Although we will exclude it later from minification, it is useful to have it there so that IDE tooling works with it.
const config = (() => {
return {
"VUE_APP_ENV_MyURL": "...",
};
})();
Now make sure that your script is excluded from minification. To do that, create a file vue.config.js with the following content that preserves our config file.
const path = require("path");
module.exports = {
publicPath: '/',
configureWebpack: {
module: {
rules: [
{
test: /config.*config\.js$/,
use: [
{
loader: 'file-loader',
options: {
name: 'config.js'
},
}
]
}
]
}
}
}
In your index.html, add a script block to load the config file manually. Note that the config file won't be there as we just excluded it. Later, we will mount it from a ConfigMap into our container. In this example, we assume that we will mount it into the same directory as our HTML document.
<script src="<%= BASE_URL %>config.js"></script>
Change your code to use our runtime config:
this.displayURL = config.VUE_APP_ENV_MyURL || process.env.VUE_APP_ENV_MyURL
In Kubernetes, create a config map that uses the content your config file. Of course, you wanna read the content from your config file.
apiVersion: v1
kind: ConfigMap
metadata:
...
data:
config.js: |
var config = (() => {
return {
"VUE_APP_ENV_MyURL": "...",
};
})();
Reference the config map in your deployment. This mounts the config map as a file into your container. The mountPath already contains our minified index.html. We mount the config file that we referenced before.
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
volumes:
- name: config-volume
configMap:
name: ...
containers:
- ...
volumeMounts:
- name: config-volume
mountPath: /usr/share/nginx/html/config.js
subPath: config.js
Now you can access the config file at <Base URL>/config.js and you should see the exact content that you put into the ConfigMap entry. Your HTML document loads that config map as it loads the rest of your minified Vue.js code. Voila!
| Kubernetes | 53,010,064 | 55 |
I am trying to run Kubernetes and trying to use sudo kubeadm init.
Swap is off as recommended by official doc.
The issue is it displays the warning:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- No internet connection is available so the kubelet cannot pull or find the following control plane images:
- k8s.gcr.io/kube-apiserver-amd64:v1.11.2
- k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
- k8s.gcr.io/kube-scheduler-amd64:v1.11.2
- k8s.gcr.io/etcd-amd64:3.2.18
- You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
are downloaded locally and cached.
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
The docker version I am using is Docker version 17.03.2-ce, build f5ec1e2
I m using Ubuntu 16.04 LTS 64bit
The docker images shows the following images:
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver-amd64 v1.11.2 821507941e9c 3 weeks ago 187 MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.2 38521457c799 3 weeks ago 155 MB
k8s.gcr.io/kube-proxy-amd64 v1.11.2 46a3cd725628 3 weeks ago 97.8 MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.2 37a1403e6c1a 3 weeks ago 56.8 MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 3 months ago 45.6 MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 4 months ago 219 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 8 months ago 742 kB
Full logs can be found here :
https://pastebin.com/T5V0taE3
I didn't found any solution on internet.
EDIT:
docker ps -a output:
ubuntu@ubuntu-HP-Pavilion-15-Notebook-PC:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
journalctl -xeu kubelet output:
journalctl -xeu kubelet
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: T
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-d
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: M
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: U
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: F
lines 788-810/810 (END)
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Sep 01 10:40:05 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: Flag --cgroup-driver has been deprecated, This parameter should be set via the
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117131 9107 server.go:408] Version: v1.11.2
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.117406 9107 plugins.go:97] No cloud provider specified.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.121192 9107 certificate_store.go:131] Loading cert/key pair
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: I0901 10:40:06.145720 9107 server.go:648] --cgroups-per-qos enabled, but --
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC kubelet[9107]: F0901 10:40:06.146074 9107 server.go:262] failed to run Kubelet: Running wi
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Unit entered failed state.
Sep 01 10:40:06 ubuntu-HP-Pavilion-15-Notebook-PC systemd[1]: kubelet.service: Failed with result 'exit-code'.
~
PORTS NAMES
Any help/suggestion/comment would be appreciated.
| I faced a similar issue recently. The problem was the cgroup driver: the kubelet's cgroup driver was set to systemd but docker was set to cgroupfs. So I created /etc/docker/daemon.json:
vim /etc/docker/daemon.json
and added below:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Then
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl restart kubelet
Run kubeadm init or kubeadm join again.
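To double-check which cgroup driver each side is actually using before re-running kubeadm, something along these lines helps (the kubelet config path assumes a kubeadm-provisioned node and may differ on other setups):
docker info | grep -i "cgroup driver"
grep cgroupDriver /var/lib/kubelet/config.yaml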
| Kubernetes | 52,119,985 | 55 |
I am looking for a way to rollback a helm release to its previous release without specifying the target release version as a number.
Something like helm rollback <RELEASE> ~1 (like git reset HEAD~1) would be nice.
| As it turns out, there is an undocumented option to rollback to the previous release by defining the target release version as 0.
like: helm rollback <RELEASE> 0
Source: https://github.com/helm/helm/issues/1796
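For example, with a hypothetical release named myrelease, you can inspect the revision history and then roll back one step without looking up the revision number:
helm history myrelease
helm rollback myrelease 0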
| Kubernetes | 51,894,307 | 55 |
From what I can tell in the documentation, a ReplicaSet is created when running a Deployment. It seems to support some of the same features of a ReplicationController - scale up/down and auto restart, but it's not clear if it supports rolling upgrades or autoscale.
The v1.1.8 user guide shows how to create a deployment in Deploying Applications (which automatically creates a ReplicaSet), yet the kubectl get replicasets command is not available until v1.2.0. I cannot find any other information about ReplicaSet in the documentation.
Will ReplicaSet eventually replace ReplicationController? Why would I want to use Deployment and ReplicaSet instead of ReplicationController?
| Replica Set is the next generation of Replication Controller. Replication controller is kinda imperative, but replica sets try to be as declarative as possible.
1.The main difference between a Replica Set and a Replication Controller right now is the selector support.
+--------------------------------------------------+-----------------------------------------------------+
| Replica Set | Replication Controller |
+--------------------------------------------------+-----------------------------------------------------+
| Replica Set supports the new set-based selector. | Replication Controller only supports equality-based |
| This gives more flexibility. for eg: | selector. for eg: |
| environment in (production, qa) | environment = production |
| This selects all resources with key equal to | This selects all resources with key equal to |
| environment and value equal to production or qa | environment and value equal to production |
+--------------------------------------------------+-----------------------------------------------------+
2.The second thing is the updating the pods.
+-------------------------------------------------------+-----------------------------------------------+
| Replica Set | Replication Controller |
+-------------------------------------------------------+-----------------------------------------------+
| rollout command is used for updating the replica set. | rolling-update command is used for updating |
| Even though replica set can be used independently, | the replication controller. This replaces the |
| it is best used along with deployments which | specified replication controller with a new |
| makes them declarative. | replication controller by updating one pod |
| | at a time to use the new PodTemplate. |
+-------------------------------------------------------+-----------------------------------------------+
These are the two things that differentiate RS and RC. Deployments with RS are widely used as they are more declarative.
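As a small illustration of the set-based selector mentioned above, a ReplicaSet can select pods with expressions like this (snippet only; the rest of the ReplicaSet spec is assumed):
selector:
  matchLabels:
    tier: frontend
  matchExpressions:
  - {key: environment, operator: In, values: [production, qa]}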
| Kubernetes | 36,220,388 | 55 |
Below is the describe output for both my clusterissuer and certificate reource. I am brand new to cert-manager so not 100% sure this is set up properly - we need to use http01 validation however we are not using an nginx controller. Right now we only have 2 microservices so the public-facing IP address simply belongs to a k8s service (type loadbalancer) which routes traffic to a pod where an Extensible Service Proxy container sits in front of the container running the application code. Using this set up I haven't been able to get anything beyond the errors below, however as I mentioned I'm brand new to cert-manager & ESP so this could be configured incorrectly...
Name: clusterissuer-dev
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
API Version: cert-manager.io/v1beta1
Kind: ClusterIssuer
Metadata:
Creation Timestamp: 2020-08-07T18:46:29Z
Generation: 1
Resource Version: 4550439
Self Link: /apis/cert-manager.io/v1beta1/clusterissuers/clusterissuer-dev
UID: 65933d87-1893-49af-b90e-172919a18534
Spec:
Acme:
Email: [email protected]
Private Key Secret Ref:
Name: letsencrypt-dev
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Solvers:
http01:
Ingress:
Class: nginx
Status:
Acme:
Last Registered Email: [email protected]
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/15057658
Conditions:
Last Transition Time: 2020-08-07T18:46:30Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
Name: test-cert-default-ns
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
API Version: cert-manager.io/v1beta1
Kind: Certificate
Metadata:
Creation Timestamp: 2020-08-10T15:05:31Z
Generation: 2
Resource Version: 5961064
Self Link: /apis/cert-manager.io/v1beta1/namespaces/default/certificates/test-cert-default-ns
UID: 259f62e0-b272-47d6-b70e-dbcb7b4ed21b
Spec:
Dns Names:
dev.test.com
Issuer Ref:
Name: clusterissuer-dev
Secret Name: clusterissuer-dev-tls
Status:
Conditions:
Last Transition Time: 2020-08-10T15:05:31Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: False
Type: Ready
Last Transition Time: 2020-08-10T15:05:31Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: test-cert-default-ns-rrl7j
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Requested 2m51s cert-manager Created new CertificateRequest resource "test-cert-default-ns-c4wxd"
One last item - if I run the command kubectl get certificate -o wide I get the following output.
NAME READY SECRET ISSUER STATUS AGE
test-cert-default-ns False clusterissuer-dev-tls clusterissuer-dev Issuing certificate as Secret does not exist 2d23h
| I had the same issue and followed the advice given in the comments by @Popopame: check out cert-manager's troubleshooting guide to find out how to troubleshoot cert-manager, or cert-manager's troubleshooting guide for ACME issues to find out which part of the ACME process breaks the setup.
It seems that often it is the acme-challenge where letsencrypt verifies the domain ownership by requesting a certain code be offered at port 80 at a certain path. For example: http://example.com/.well-known/acme-challenge/M8iYs4tG6gM-B8NHuraXRL31oRtcE4MtUxRFuH8qJmY. Notice the http:// that shows letsencrypt will try to validate domain ownership on port 80 of your desired domain.
So one of the common errors is that cert-manager could not serve the correct challenge token at the correct path on port 80, for example due to a firewall blocking port 80 on a bare-metal server, or a load balancer that only forwards port 443 to the Kubernetes cluster and redirects port 80 straight to 443.
Also be aware that cert-manager itself tries to reach the ACME challenge as well, so you should configure the firewalls to also allow requests coming from your own servers.
If you have trouble getting your certificate to a different namespace, this would be a good point to start with.
In your specific case I would guess at a problem with the ACME challenge as the CSR (Certificate Signing Request) was created as indicated in the bottom most describe line but nothing else happened.
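To see where exactly the ACME flow is stuck, it helps to walk the chain of resources cert-manager creates and to try the challenge URL from outside the cluster, for example (the CertificateRequest name is the one from the describe output above; the token path is whatever cert-manager publishes):
kubectl describe certificaterequest test-cert-default-ns-c4wxd
kubectl describe order --all-namespaces
kubectl describe challenge --all-namespaces
curl -v http://dev.test.com/.well-known/acme-challenge/<token>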
| Kubernetes | 63,346,728 | 54 |
I am running selenium hubs and my pods are getting terminated frequently. I would like to look at the logs of the pods which are terminated. How to do it?
NAME READY STATUS RESTARTS AGE
chrome-75-0-0e5d3b3d-3580-49d1-bc25-3296fdb52666 0/2 Terminating 0 49s
chrome-75-0-29bea6df-1b1a-458c-ad10-701fe44bb478 0/2 Terminating 0 23s
chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 0/2 ContainerCreating 0 7s
kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
$ kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 --previous
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
| Running kubectl logs -p will fetch logs from existing resources at API level. This means that terminated pods' logs will be unavailable using this command.
As mentioned in other answers, the best way is to have your logs centralized via logging agents or directly pushing these logs into an external service.
Alternatively and given the logging architecture in Kubernetes, you might be able to fetch the logs directly from the log-rotate files in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.
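As a concrete sketch of both points: for a container that was merely restarted (not deleted) the previous logs are still reachable through the API, and on the node itself the rotated files usually live under /var/log/containers/ (symlinks into /var/log/pods/), though the exact layout depends on the container runtime:
kubectl logs <pod-name> -p
ls /var/log/containers/ | grep chrome-75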
| Kubernetes | 57,007,134 | 54 |
I have been struggling to get my simple 3 node Kubernetes cluster running.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubu1 Ready master 31d v1.13.4
ubu2 Ready master,node 31d v1.13.4
ubu3 Ready node 31d v1.13.4
I tried creating a PVC, which was stuck in Pending forever. So I deleted it, but now it is stuck in Terminating status.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
task-pv-claim Terminating task-pv-volume 100Gi RWO manual 26d
How can I create a PV that is properly created and useable for the demos described on the official kubernetes web site?
PS: I used kubespray to get this up and running.
On my Ubuntu 16.04 VMs, this is the Docker version installed:
ubu1:~$ docker version
Client:
Version: 18.06.2-ce
API version: 1.38
Go version: go1.10.3
Git commit: 6d37f41
Built: Sun Feb 10 03:47:56 2019
OS/Arch: linux/amd64
Experimental: false
Thanks in advance.
| kubectl edit pv (pv name)
Find the following in the manifest file
finalizers:
- kubernetes.io/pv-protection
... and delete it.
Then exit, and run this command to delete the pv
kubectl delete pv (pv name) --grace-period=0 --force
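Instead of editing interactively, the finalizer can also be stripped with a single patch; the same trick works for the PVC from the question that is stuck in Terminating (run the PVC patch in the claim's namespace):
kubectl patch pv task-pv-volume -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc task-pv-claim -p '{"metadata":{"finalizers":null}}'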
| Kubernetes | 55,672,498 | 54 |
I've searched online and most links seem to mention manifests without actually explaining what they are. What are Manifests?
| It's basically a Kubernetes "API object description". A config file can include one or more of these. (i.e. Deployment, ConfigMap, Secret, DaemonSet, etc)
As per this:
Specification of a Kubernetes API object in JSON or YAML format.
A manifest specifies the desired state of an object that Kubernetes will maintain when you apply the manifest. Each configuration file can contain multiple manifests.
And a previous version of the documentation:
Configuration files - Written in YAML or JSON, these files describe the desired state of your application in terms of Kubernetes API objects. A file can include one or more API object descriptions (manifests).
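For illustration, a minimal manifest describing a Deployment might look like this (nginx is used purely as an example image):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80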
| Kubernetes | 55,130,795 | 54 |
I'm trying to create a local Kubernetes deployment using Minikube, Docker Registry, and a demo node project.
The first thing I did was install Docker v1.12.3, then Minikube v0.12.2.
Then I created a Docker Registry container by running this command (via this tutorial, only running the first command below)
docker run -d -p 5000:5000 --name registry registry:2
Next I ran this minikube command to create a local kubernetes cluster:
minikube start --vm-driver="virtualbox" --insecure-registry="0.0.0.0:5000"
My project structure looks like this:
.
├── Dockerfile
└── server.js
and my Dockerfile looks like this:
FROM node:7.1.0
EXPOSE 8080
COPY server.js .
CMD node server.js
Then I built my own docker image and pushed it to my private repository:
docker build -t hello-node .
docker tag hello-node localhost:5000/hello-node
docker push localhost:5000/hello-node
Then I tried to run a deployment with this command:
kubectl run hello-node --image=localhost:5000/hello-node --port=8888
But then I get this:
sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hello-node-3745105022-gzs5a 0/1 ErrImagePull 0 11m
kube-system kube-addon-manager-minikube 1/1 Running 4 10d
kube-system kube-dns-v20-2x64k 3/3 Running 12 10d
kube-system kubernetes-dashboard-mjpjv 1/1 Running 4 10d
I think I might be missing some kind of docker registry authentication, but as I'm googling I can't find something that I understand. Could someone please point me in the right direction?
Edit
After using ssh to access bash on the kubernetes VM and pull the hello-node image from my private registry by using this command:
minikube ssh
Boot2Docker version 1.11.1, build master : 901340f - Fri Jul 1
22:52:19 UTC 2016
Docker version 1.11.1, build 5604cbe
docker@minikube:~$ sudo docker pull localhost:5000/hello-node
Using default tag: latest
Pulling repository localhost:5000/hello-node
Error while pulling image: Get http://localhost:5000/v1/repositories/hello-node/images: dial tcp 127.0.0.1:5000: getsockopt: connection refused
Is localhost:5000 the correct address to use within the kubernetes host VM?
| It looks like you're running the registry on the host. In fact, you need to run the registry inside the VM. You can point your docker client to the docker daemon inside the minikube VM by running this command first
eval $(minikube docker-env)
in your shell.
Then, you can run the docker build command on your host, but it will build inside the VM.
In fact, if your goal is to simply run the local version of your images, you should run the eval $(minikube docker-env) to point towards the docker daemon in your VM, and set the imagePullPolicy: IfNotPresent in your pod YAML. Then, kubernetes will use a locally built image if available.
| Kubernetes | 40,600,419 | 54 |
I'm trying to use minikube and kitematic for testing kubernetes on my local machine. However, kubernetes fail to pull image in my local repository (ImagePullBackOff).
I tried to solve it with this : Can not pull docker image from private repo when using Minikube
But I have no /etc/init.d/docker, I think it's because of kinematic ? (I am on OS X)
EDIT :
I installed https://github.com/docker/docker-registry, and
docker tag local-image-build localhost:5000/local-image-build
docker push localhost:5000/local-image-build
My kubernetes yaml contains :
spec:
containers:
- name: backend-nginx
image: localhost:5000/local-image-build:latest
imagePullPolicy: Always
But it's still not working...
Logs :
Error syncing pod, skipping: failed to "StartContainer"
for "backend-nginx" with ErrImagePull: "Error while pulling image:
Get http://127.0.0.1:5000/v1/repositories/local-image-build/images:
dial tcp 127.0.0.1:5000: getsockopt: connection refused
EDIT 2 :
I don't know if I'm on the good path, but I find this :
http://kubernetes.io/docs/user-guide/images/
But I don't know what is my DOCKER_USER...
kubectl create secret docker-registry myregistrykey --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
EDIT 3
now I got on my pod :
Failed to pull image "local-image-build:latest": Error: image library/local-image-build not found
Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error: image library/local-image-build not found"
Help me I'm going crazy.
EDIT 4
Error syncing pod, skipping: failed to "StartContainer" for "backend-nginx" with ErrImagePull: "Error response from daemon: Get https://192.168.99.101:5000/v1/_ping: tls: oversized record received with length 20527"
I added :
EXTRA_ARGS='
--label provider=virtualbox
--insecure-registry=192.168.99.101:5000
to my docker config, but it still doesn't work; same message....
By the way, I changed my yaml :
spec:
containers:
- name: backend-nginx
image: 192.168.99.101:5000/local-image-build:latest
imagePullPolicy: Always
And I run my registry like that :
docker run -d -p 5000:5000 --restart=always --name myregistry registry:2
| Use the minikube docker registry instead of your local docker
https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image
Set docker to point to minikube
eval $(minikube docker-env)
Push to minikube docker
docker build -t hello-node:v1 .
Set your deployment's image pull policy so it does not try to pull an image that is already present locally.
Kubernetes defaults the pull policy to "Always" for :latest (or untagged) images.
Change it to "IfNotPresent":
imagePullPolicy: IfNotPresent
Related Issue
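Putting it together, the relevant part of the pod template would look roughly like this (image name taken from the build step above):
spec:
  containers:
  - name: hello-node
    image: hello-node:v1
    imagePullPolicy: IfNotPresent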
| Kubernetes | 38,979,231 | 54 |
I'm looking for a pattern that allows to share volumes between two containers running on the same pod in Kubernetes.
My use case is:
I have a Ruby on Rails application running inside a docker container.
The docker image contains static assets in /app/<app-name>/public directory, and I need to access those assets from the nginx container running alongside in the same pod.
In 'vanilla' docker I would have used --volumes-from flag to share this directory:
docker run --name app -v /app/<app-dir>/public <app-image>
docker run --volumes-from app nginx
After reading this doc: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md
I tried this (only relevant entries presented):
spec:
containers:
- image: <app-image>
name: <app-name>
volumeMounts:
- mountPath: /app/<app-name>/public
name: assets
- image: nginx
name: nginx
volumeMounts:
- mountPath: /var/www/html
name: assets
readOnly: true
volumes:
- name: assets
hostPath:
path: /tmp/assets
But:
Even though /tmp/assets on the node exists, it's empty
/app/<app-name>/public inside the app container is also empty
As a workaround I'm gonna try to populate the shared directory when the application container is up (simply cp /app/<app-name>/public/* to shared directory), but I really dislike this idea.
Question: how to mimic --volumes-from in Kubernetes, or if there is no direct counterpart, how can I share files from one container to other running in the same pod ?
apiVersion: v1beta3
Client Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"}
Server Version: version.Info{Major:"0", Minor:"17", GitVersion:"v0.17.0", GitCommit:"82f8bdac06ddfacf493a9ed0fedc85f5ea62ebd5", GitTreeState:"clean"}
| [update-2016-8] In latest Kubernetes release, you can use a very nice feature named init-container to replace the postStart part in my answer below, which will make sure the container order.
apiVersion: v1
kind: Pod
metadata:
name: javaweb-2
spec:
initContainers:
- name: war
image: resouer/sample:v2
command: ["cp", "/sample.war", "/app"]
volumeMounts:
- mountPath: /app
name: app-volume
containers:
- name: tomcat
image: resouer/mytomcat:7.0
command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"]
volumeMounts:
- mountPath: /root/apache-tomcat-7.0.42-v2/webapps
name: app-volume
ports:
- containerPort: 8080
hostPort: 8001
volumes:
- name: app-volume
emptyDir: {}
NOTE: initContainer is still a beta feature, so the working version of this yaml is actually like: http://kubernetes.io/docs/user-guide/production-pods/#handling-initialization, please notice the pod.beta.kubernetes.io/init-containers part.
---original answer begin---
Actually, you can. You need to use container life cycle handler to control what files/dirs you want to share with other containers. Like:
---
apiVersion: v1
kind: Pod
metadata:
name: server
spec:
restartPolicy: OnFailure
containers:
- image: resouer/sample:v2
name: war
lifecycle:
postStart:
exec:
command:
- "cp"
- "/sample.war"
- "/app"
volumeMounts:
- mountPath: /app
name: hostv1
- name: peer
image: busybox
command: ["tail", "-f", "/dev/null"]
volumeMounts:
- name: hostv2
mountPath: /app/sample.war
volumes:
- name: hostv1
hostPath:
path: /tmp
- name: hostv2
hostPath:
path: /tmp/sample.war
Please check my gist for more details:
https://gist.github.com/resouer/378bcdaef1d9601ed6aa
And of course you can use emptyDir. Thus, the war container can share its /sample.war with the peer container without messing up the peer's /app directory.
If we can tolerate /app being overridden, it is much simpler:
---
apiVersion: v1
kind: Pod
metadata:
name: javaweb-2
spec:
restartPolicy: OnFailure
containers:
- image: resouer/sample:v2
name: war
lifecycle:
postStart:
exec:
command:
- "cp"
- "/sample.war"
- "/app"
volumeMounts:
- mountPath: /app
name: app-volume
- image: resouer/mytomcat:7.0
name: tomcat
command: ["sh","-c","/root/apache-tomcat-7.0.42-v2/bin/start.sh"]
volumeMounts:
- mountPath: /root/apache-tomcat-7.0.42-v2/webapps
name: app-volume
ports:
- containerPort: 8080
hostPort: 8001
volumes:
- name: app-volume
emptyDir: {}
| Kubernetes | 30,538,210 | 54 |
I need to monitor my container memory usage running on kubernetes cluster. After read some articles there're two recommendations: container_memory_rss, container_memory_working_set_bytes
The definitions of both metrics are said (from the cAdvisor code)
container_memory_rss : The amount of anonymous and swap cache memory
container_memory_working_set_bytes: The amount of working set memory, this includes recently accessed memory, dirty memory, and kernel memory
I think both metrics are represent the bytes size on the physical memory that process uses. But there are some differences between the two values from my Grafana dashboard.
My question is:
What is the difference between two metrics?
Which metrics are much proper to monitor memory usage? Some post said both because one of those metrics reaches to the limit, then that container is OOM killed.
| You are right. I will try to address your questions in more detail.
What is the difference between two metrics?
container_memory_rss equals to the value of total_rss from /sys/fs/cgroups/memory/memory.status file:
// The amount of anonymous and swap cache memory (includes transparent
// hugepages).
// Units: Bytes.
RSS uint64 `json:"rss"`
The total amount of anonymous and swap cache memory (it includes transparent hugepages), and it equals to the value of total_rss from memory.status file. This should not be confused with the true resident set size or the amount of physical memory used by the cgroup. rss + file_mapped will give you the resident set size of cgroup. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.
container_memory_working_set_bytes (as already mentioned by Olesya) is the total usage - inactive file. It is an estimate of how much memory cannot be evicted:
// The amount of working set memory, this includes recently accessed memory,
// dirty memory, and kernel memory. Working set is <= "usage".
// Units: Bytes.
WorkingSet uint64 `json:"working_set"`
Working Set is the current size, in bytes, of the Working Set of this process. The Working Set is the set of memory pages touched recently by the threads in the process.
Which metrics are much proper to monitor memory usage? Some post said
both because one of those metrics reaches to the limit, then that
container is oom killed.
If you are limiting the resource usage for your pods, then you should monitor both, as they will cause an OOM kill if they reach the configured resource limit.
I also recommend this article which shows an example explaining the below assertion:
You might think that memory utilization is easily tracked with
container_memory_usage_bytes, however, this metric also includes
cached (think filesystem cache) items that can be evicted under memory
pressure. The better metric is container_memory_working_set_bytes as
this is what the OOM killer is watching for.
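For a quick side-by-side look at the two metrics for a single pod in Prometheus, queries along these lines can be used (label names follow the standard cAdvisor/kubelet metrics; namespace and pod values are placeholders):
sum(container_memory_working_set_bytes{namespace="my-ns", pod="my-pod", container!="", container!="POD"}) by (container)
sum(container_memory_rss{namespace="my-ns", pod="my-pod", container!="", container!="POD"}) by (container)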
EDIT:
Adding some additional sources as a supplement:
A Deep Dive into Kubernetes Metrics — Part 3 Container Resource Metrics
#1744
Understanding Kubernetes Memory Metrics
Memory_working_set vs Memory_rss in Kubernetes, which one you should monitor?
Managing Resources for Containers
cAdvisor code
| Kubernetes | 65,428,558 | 53 |
I am trying to check the status of a pod using kubectl wait command through this documentation.
Following is the command that i am trying
kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1-oo-9j9kj
Following is the error that i am getting
Kubectl error: status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
and my kubectl -o json output can be accessed via this github link.
Can someone help me to fix the issue
| To wait until your pod is running, check for "condition=ready". In addition, prefer to filter by label, rather than specifying pod id. For example:
$ kubectl wait --for=condition=ready pod -l app=netshoot
pod/netshoot-58785d5fc7-xt6fg condition met
Another option is rollout status - To wait until the deployment is done:
$ kubectl rollout status deployment netshoot
deployment "netshoot" successfully rolled out
Both options work great in automation scripts, when it is required to wait for an app to be installed. However, as @CallMeLaNN noted for the second option, deployment "rolled out" does not check if its pods are ready or failed.
Update:
A very handy tip I found about kubectl wait is to use --for jsonpath if the available conditions are not sufficient. For example, to wait up to 3m for an Operator Subscription with an InstalPlan to be ready, I don't check for its condition but for its state:
$ kubectl wait --for jsonpath='{.status.state}'=AtLatestKnown sub mysub -n myns --timeout=3m
| Kubernetes | 53,536,907 | 53 |
I understand that you can create a pod with Deployment/Job using kubectl run. But is it possible to create one with a volume attached to it? I tried running this command:
kubectl run -i --rm --tty ubuntu --overrides='{ "apiVersion":"batch/v1", "spec": {"containers": {"image": "ubuntu:14.04", "volumeMounts": {"mountPath": "/home/store", "name":"store"}}, "volumes":{"name":"store", "emptyDir":{}}}}' --image=ubuntu:14.04 --restart=Never -- bash
But the volume does not appear in the interactive bash.
Is there a better way to create a pod with volume that you can attach to?
| Your JSON override is specified incorrectly. Unfortunately kubectl run just ignores fields it doesn't understand.
kubectl run -i --rm --tty ubuntu --overrides='
{
"apiVersion": "batch/v1",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ubuntu:14.04",
"args": [
"bash"
],
"stdin": true,
"stdinOnce": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/home/store",
"name": "store"
}]
}
],
"volumes": [{
"name":"store",
"emptyDir":{}
}]
}
}
}
}
' --image=ubuntu:14.04 --restart=Never -- bash
To debug this issue I ran the command you specified, and then in another terminal ran:
kubectl get job ubuntu -o json
From there you can see that the actual job structure differs from your json override (you were missing the nested template/spec, and volumes, volumeMounts, and containers need to be arrays).
| Kubernetes | 37,555,281 | 53 |
I've multiple secrets created from different files. I'd like to store all of them in common directory /var/secrets/. Unfortunately, I'm unable to do that because kubernetes throws 'Invalid value: "/var/secret": must be unique error during pod validation step. Below is an example of my pod definition.
apiVersion: v1
kind: Pod
metadata:
labels:
run: alpine-secret
name: alpine-secret
spec:
containers:
- command:
- sleep
- "3600"
image: alpine
name: alpine-secret
volumeMounts:
- name: xfile
mountPath: "/var/secrets/"
readOnly: true
- name: yfile
mountPath: "/var/secrets/"
readOnly: true
volumes:
- name: xfile
secret:
secretName: my-secret-one
- name: yfile
secret:
secretName: my-secret-two
How can I store files from multiple secrets in the same directory?
| Projected Volume
You can use a projected volume to have two secrets in the same directory
Example
apiVersion: v1
kind: Pod
metadata:
labels:
run: alpine-secret
name: alpine-secret
spec:
containers:
- command:
- sleep
- "3600"
image: alpine
name: alpine-secret
volumeMounts:
- name: xyfiles
mountPath: "/var/secrets/"
readOnly: true
volumes:
- name: xyfiles
projected:
sources:
- secret:
name: my-secret-one
- secret:
name: my-secret-two
| Kubernetes | 59,079,318 | 52 |
I have kubernetes cluster and every thing work fine. after some times I drain my worker node and reset it and join it again to master but
#kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready master 159m v1.14.0
ubuntu1 Ready,SchedulingDisabled <none> 125m v1.14.0
ubuntu2 Ready,SchedulingDisabled <none> 96m v1.14.0
what should i do?
| To prevent a node from scheduling new pods use:
kubectl cordon <node-name>
Which will cause the node to be in the status: Ready,SchedulingDisabled.
To tell is to resume scheduling use:
kubectl uncordon <node-name>
More information about draining a node can be found here. And manual node administration here
| Kubernetes | 55,432,764 | 52 |
Most tutorials I've seen for developing with Kubernetes locally use Minikube. In the latest Edge release of Docker for Windows, you can also enable Kubernetes. I'm trying to understand the differences between the two and which I should use.
Minikube lets you choose the version of Kubernetes you want, can Docker for Windows do that? I don't see a way to configure it.
Minikube has CLI commands to enable the dashboard, heapster, ingress and other addons. I'm not sure why because my undertstanding is that these are simply executing kubectl apply -f http://....
With Minikube I can do a minikube ip to get the cluster IP address for ingress, how can I do this with Docker for Windows?
Is there anything else different that I should care about.
| I feel like you largely understand the space, and mostly have answers to your questions already. You might find Docker for Mac vs. Docker Toolbox an informative read, even if it's about the Mac equivalent rather than Windows and about Docker packaged as a VM rather than Kubernetes specifically.
In fact you are stuck with the specific version of Kubernetes the Docker Edge desktop distribution publishes.
is answered in the question.
I believe NodePort-type Services are published on your host's IP address; there isn't an intermediate VM address like there is with Docker Toolbox.
Docker Toolbox and minikube always use a full-blown virtual machine with an off-the-shelf hypervisor. The Docker desktop application might use a lighter-weight virtualization engine if one is available.
Kubernetes can involve some significant background work. If you're using Kubernetes-in-Docker it's hard to "turn off" Kubernetes and still have Docker available; but if you have a separate minikube VM you can just stop it.
| Kubernetes | 51,209,870 | 52 |
By default docker uses a shm size of 64m if not specified, but that can be increased in docker using --shm-size=256m
How should I increase the shm size of a Kubernetes container, or use docker's --shm-size in Kubernetes?
| I originally bumped into this post coming from google and went through the whole kubernetes issue and openshift workaround. Only to find the much simpler solution listed on another stackoverflow answer later.
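For reference, the approach that is usually pointed to for this (not necessarily the exact answer the author had in mind) is to mount a memory-backed emptyDir over /dev/shm in the pod spec, roughly like this (image name and size are placeholders):
spec:
  containers:
  - name: app
    image: my-image
    volumeMounts:
    - name: dshm
      mountPath: /dev/shm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory
      sizeLimit: 256Mi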
| Kubernetes | 43,373,463 | 52 |
I am becoming more familiar with Kubernetes by the day, but am still at a basic level. I am also not a networking guy.
I am staring at the following snippet of a Service definition, and I can't form the right picture in my mind of what is being declared:
spec:
type: NodePort
ports:
- port: 27018
targetPort: 27017
protocol: TCP
Referencing the ServicePort documentation, which reads in part:
nodePort The port on each node on which this service is exposed when type=NodePort or LoadBalancer. Usually
integer assigned by the system. If specified, it will be allocated to the service if unused or else creation of the
service will fail. Default is to auto-allocate a port if the ServiceType of this Service requires one. More info:
http://kubernetes.io/docs/user-guide/services#type--nodeport
port The port that will be exposed by this service.
integer
targetPort Number or name of the port to access on the pods targeted by the service. Number must be in the range 1
IntOrString to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the
target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map).
This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field.
More info: http://kubernetes.io/docs/user-guide/services#defining-a-service
My understanding is that the port that a client outside of the cluster will "see" will be the dynamically assigned one in the range of 30000-32767, as defined in the documentation. This will, using some black magic that I do not yet understand, flow to the targetPort on a given node (27017 in this case).
So what is the port used for here?
| nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node).
port is the port your service listens on inside the cluster. Let's take this example:
---
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
ports:
- port: 8080
targetPort: 8070
nodePort: 31222
protocol: TCP
selector:
component: my-service-app
From inside my k8s cluster this service will be reachable via my-service.default.svc.cluster.local:8080 (service to service communication inside your cluster) and any request reaching there is forwarded to a running pod on targetPort 8070.
targetPort also defaults to the same value as port if it is not specified otherwise.
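Using the example Service above, that means (the node IP is whichever node you reach from outside):
# from outside the cluster, via any node
curl http://<node-ip>:31222/
# from inside the cluster, via the Service
curl http://my-service.default.svc.cluster.local:8080/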
| Kubernetes | 41,963,433 | 52 |
In kubernetes I can expose services with service. This is fine.
Lets say I have 1 web instance and 10 java server instances.
I have a windows gateway I'm used to access those 10 java servers instances via the jconsole installed on it.
Obviously I do not expose all apps jmx port via kubernetes service.
What are my options here? how should I allow this external to kubernetes cluster windows gateway access to those 10 servers jmx ports? Any practices here?
| Another option is to forward JMX port from K8 pod to your local PC with kubectl port-forward.
I do it like this:
1). Add following JVM options to your app:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.local.only=false
-Dcom.sun.management.jmxremote.port=1099
-Dcom.sun.management.jmxremote.rmi.port=1099
-Djava.rmi.server.hostname=127.0.0.1
The critical part here is that:
The same port should be used as 'jmxremote.port' and 'jmxremote.rmi.port'. This is needed to forward one port only.
127.0.0.1 should be passed as rmi server hostname. This is needed for JMX connection to work via port-forwarding.
2). Forward the JMX port (1099) to your local PC via kubectl:
kubectl port-forward <your-app-pod> 1099
3). Open jconsole connection to your local port 1099:
jconsole 127.0.0.1:1099
This way makes it possible to debug any Java pod via JMX without having to publicly expose JMX via K8 service (which is better from security perspective).
Another option that also may be useful is to attach the Jolokia (https://jolokia.org/) agent to the Java process inside the container so it proxies the JMX over HTTP port and expose or port-forward this HTTP port to query JMX over HTTP.
| Kubernetes | 35,184,558 | 52 |
How does Kubernetes' scheduler work? What I mean is that Kubernetes' scheduler appears to be very simple?
My initial thought is that this scheduler is just a simple admission control system, not a real scheduler. Is it that correct?
I found a short description, but it is not terribly informative:
The kubernetes scheduler is a policy-rich, topology-aware,
workload-specific function that significantly impacts availability,
performance, and capacity. The scheduler needs to take into account
individual and collective resource requirements, quality of service
requirements, hardware/software/policy constraints, affinity and
anti-affinity specifications, data locality, inter-workload
interference, deadlines, and so on. Workload-specific requirements
will be exposed through the API as necessary.
| The paragraph you quoted describes where we hope to be in the future (where the future is defined in units of months, not years). We're not there yet, but the scheduler does have a number of useful features already, enough for a simple deployment. In the rest of this reply, I'll explain how the scheduler works today.
The scheduler is not just an admission controller; for each pod that is created, it finds the "best" machine for that pod, and if no machine is suitable, the pod remains unscheduled until a machine becomes suitable.
The scheduler is configurable. It has two types of policies, FitPredicate (see master/pkg/scheduler/predicates.go) and PriorityFunction (see master/pkg/scheduler/priorities.go). I'll describe them.
Fit predicates are required rules, for example the labels on the node must be compatible with the label selector on the pod (this rule is implemented in PodSelectorMatches() in predicates.go), and the sum of the requested resources of the container(s) already running on the machine plus the requested resources of the new container(s) you are considering scheduling onto the machine must not be greater than the capacity of the machine (this rule is implemented in PodFitsResources() in predicates.go; note that "requested resources" is defined as pod.Spec.Containers[n].Resources.Limits, and if you request zero resources then you always fit). If any of the required rules are not satisfied for a particular (new pod, machine) pair, then the new pod is not scheduled on that machine. If after checking all machines the scheduler decides that the new pod cannot be scheduled onto any machine, then the pod remains in Pending state until it can be satisfied by one of the machines.
After checking all of the machines with respect to the fit predicates, the scheduler may find that multiple machines "fit" the pod. But of course, the pod can only be scheduled onto one machine. That's where priority functions come in. Basically, the scheduler ranks the machines that meet all of the fit predicates, and then chooses the best one. For example, it prefers the machine whose already-running pods consume the least resources (this is implemented in LeastRequestedPriority() in priorities.go). This policy spreads pods (and thus containers) out instead of packing lots onto one machine while leaving others empty.
When I said that the scheduler is configurable, I mean that you can decide at compile time which fit predicates and priority functions you want Kubernetes to apply. Currently, it applies all of the ones you see in predicates.go and priorities.go.
| Kubernetes | 28,857,993 | 52 |
I have added mysql in requirements.yaml. Helm dependency downloads the mysql chart
helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "nginx" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading mysql from repo <our private repository>
Deleting outdated charts
But when I do helm install my_app_chart ../my_app_chart
It gives error
Error: found in Chart.yaml, but missing in charts/ directory: mysql
| You don't have to add it to the version control system; you just download the dependencies again if for some reason you have lost them (for example when you clone the repository). To do this, execute the command:
helm dependency update
The above command will download the dependencies you've defined in the requirements.yaml file or the dependencies entry in Chart.yaml to the charts folder. This way, requirements are updated and you'll have the correct dependencies without worrying about whether you also updated them in the version control system.
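So, in the situation from the question, the sequence would be roughly as follows (the archive name depends on the mysql chart version that gets resolved):
helm dependency update ../my_app_chart
ls ../my_app_chart/charts/        # should now contain something like mysql-3.10.0.tgz
helm install my_app_chart ../my_app_chart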
| Kubernetes | 59,210,148 | 51 |
I want to use the postgresql chart as a requirements for my Helm chart.
My requirements.yaml file hence looks like this:
dependencies:
- name: "postgresql"
version: "3.10.0"
repository: "@stable"
In the postgreSQL Helm chart I now want to set the username with the property postgresqlUsername (see https://github.com/helm/charts/tree/master/stable/postgresql for all properties).
Where do I have to specify this property in my project so that it gets propagated to the postgreSQL dependency?
| As described in https://v2.helm.sh/docs/chart_template_guide/#subcharts-and-global-values, in your parent (i.e. not the dependency) chart's values.yaml file, have a section that contains
postgresql:
postgresUsername: ....
postgresPassword: ....
...
That is, all values under the postgresql key will override the child (postgresql) chart's values.yaml values. Note that if you have aliased the postgresql dependency chart to another name in your requirements.yaml, you should use that other name instead of postgresql.
edit: The corresponding article in v3 is here https://helm.sh/docs/chart_template_guide/subcharts_and_globals/
| Kubernetes | 55,748,639 | 51 |
What is the best way to wait for kubernetes job to be complete? I noticed a lot of suggestions to use:
kubectl wait --for=condition=complete job/myjob
but i think that only works if the job is successful. if it fails, i have to do something like:
kubectl wait --for=condition=failed job/myjob
is there a way to wait for both conditions using wait? if not, what is the best way to wait for a job to either succeed or fail?
| Run the first wait condition as a subprocess and capture its PID. If the condition is met, this process will exit with an exit code of 0.
kubectl wait --for=condition=complete job/myjob &
completion_pid=$!
Do the same for the failure wait condition. The trick here is to add && exit 1 so that the subprocess returns a non-zero exit code when the job fails.
kubectl wait --for=condition=failed job/myjob && exit 1 &
failure_pid=$!
Then use the Bash builtin wait -n $PID1 $PID2 to wait for one of the conditions to succeed. The command will capture the exit code of the first process to exit:
MAC USERS! Note that wait -n [...PID] requires Bash version 4.3 or higher. MacOS is forever stuck on version 3.2 due to license issues. Please see this Stackoverflow Post on how to install the latest version.
wait -n $completion_pid $failure_pid
Finally, you can check the actual exit code of wait -n to see whether the job failed or not:
exit_code=$?
if (( $exit_code == 0 )); then
echo "Job completed"
else
echo "Job failed with exit code ${exit_code}, exiting..."
fi
exit $exit_code
Complete example:
# wait for completion as background process - capture PID
kubectl wait --for=condition=complete job/myjob &
completion_pid=$!
# wait for failure as background process - capture PID
kubectl wait --for=condition=failed job/myjob && exit 1 &
failure_pid=$!
# capture exit code of the first subprocess to exit
wait -n $completion_pid $failure_pid
# store exit code in variable
exit_code=$?
if (( $exit_code == 0 )); then
echo "Job completed"
else
echo "Job failed with exit code ${exit_code}, exiting..."
fi
exit $exit_code
| Kubernetes | 55,073,453 | 51 |
There is a default ClusterRoleBinding named cluster-admin.
When I run kubectl get clusterrolebindings cluster-admin -o yaml I get:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: 2018-06-13T12:19:26Z
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "98"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
uid: 0361e9f2-6f04-11e8-b5dd-000c2904e34b
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
In the subjects field I have:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
How can I see the members of the group system:masters ?
I read here about groups but I don't understand how can I see who is inside the groups as the example above with system:masters.
I noticed that when I decoded /etc/kubernetes/pki/apiserver-kubelet-client.crt using the command:
openssl x509 -in apiserver-kubelet-client.crt -text -noout it contained the subject system:masters but I still didn't understand who are the users in this group:
Issuer: CN=kubernetes
Validity
Not Before: Jul 31 19:08:36 2018 GMT
Not After : Jul 31 19:08:37 2019 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
| Answer updated:
It seems that there is no way to do it using kubectl. There is no object like Group that you can "get" inside the Kubernetes configuration.
Group information in Kubernetes is currently provided by the Authenticator modules and usually it's just a string in the user property.
Perhaps you can get the list of groups from the subject of the user certificate, or, if you use GKE, EKS or AKS, the group attribute is stored in a cloud user management system.
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
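For certificate-based users specifically, the groups are carried in the O= (organization) fields of the certificate subject (the CN is the user name), so inspecting the subject of a client certificate is enough to see which groups it asserts, e.g. with the certificate from the question:
openssl x509 -in apiserver-kubelet-client.crt -noout -subject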
Information about ClusterRole membership in system groups can be requested from ClusterRoleBinding objects. (for example for "system:masters" it shows only cluster-admin ClusterRole):
Using jq:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters")'
If you want to list the names only:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters") | .metadata.name'
Using go-templates:
kubectl get clusterrolebindings -o go-template='{{range .items}}{{range .subjects}}{{.kind}}-{{.name}} {{end}} {{" - "}} {{.metadata.name}} {{"\n"}}{{end}}' | grep "^Group-system:masters"
Some additional information about system groups can be found in GitHub issue #44418 or in the RBAC documentation.
| Kubernetes | 51,612,976 | 51 |
I have a test executor Pod in K8s cluster created through helm, which asks for a dynamically created PersistentVolume where it stores the test results.
Now I would like to get the contents of this volume. It seems quite natural thing to do.
I would expect some kubectl download pv <id>. But I can't google up anything.
How can I get the contents of a PersistentVolume?
I am in AWS EKS; so AWS API is also an option. Also I can access ECR so perhaps I could somehow store it as an image and download?
Or, in general, I am looking for a way to transfer a directory, can be even in an archive. But It should be after the container finished and doesn't run anymore.
| I can think about two options to fulfill your needs:
Create a pod with the PV attached to it and use kubectl cp to copy the contents wherever you need. You could for example use a PodSpec similar to the following:
apiVersion: v1
kind: Pod
metadata:
name: dataaccess
spec:
containers:
- name: alpine
image: alpine:latest
command: ['sleep', 'infinity']
volumeMounts:
- name: mypvc
mountPath: /data
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: mypvc
Please note that mypvc should be the name of the PersistentVolumeClaim that is bound to the PV you want to copy data from.
Once the pod is running, you can run something like below to copy the data from any machine that has kubectl configured to connect to your cluster:
kubectl cp dataaccess:/data data/
Mount the PV's EBS volume in an EC2 instance and copy the data from there. This case is less simple to explain in detail because it needs a little more context about what you're trying to achieve.
| Kubernetes | 50,375,826 | 51 |
In Kubernetes, it is possible to make a service running in cluster externally accessible by running kubectl expose deployment. Why deployment as opposed to service is beyond my simpleton's comprehension. That aside, I would like to also be able to undo this operation afterwards. Think of a scenario, where I need to get access to the service that normally is only accessible inside the cluster for debugging purposes and then to restore original situation.
Is there any way of doing this short of deleting the deployment and creating it afresh?
PS. Actually deleting service and deployment doesn't help. Re-creating service and deployment with the same name will result in service being exposed.
| Assuming you have a deployment called hello-world, and do a kubectl expose as follows:
kubectl expose deployment hello-world --type=ClusterIP --name=my-service
this will create a service called my-service, which makes your deployment accessible for debugging, as you described.
To display information about the Service:
kubectl get services my-service
To delete this service when you are done debugging:
kubectl delete service my-service
Now your deployment is un-exposed.
| Kubernetes | 48,639,273 | 51 |
I have a kubernetes cluster on Azure and I created 2 namespaces and 2 service accounts because I have two teams deploying on the cluster.
I want to give each team their own kubeconfig file for the serviceaccount I created.
I am pretty new to Kubernetes and haven't been able to find a clear instruction on the kubernetes website. How do I create a kube config file for a serviceaccount?
Hopefully someone can help me out :), I rather not give the default kube config file to the teams.
With kind regards,
Bram
| # your server name goes here
server=https://localhost:8443
# the name of the secret containing the service account token goes here
name=default-token-sg96k
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)
echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > sa.kubeconfig
| Kubernetes | 47,770,676 | 51 |
In several places on the Kubernetes documentation site they recommend that you store your configuration YAML files inside source control for easy version-tracking, rollback, and deployment.
My colleagues and I are currently in the process of trying to decide on the structure of our git repository.
We have decided that since configuration can change without any changes to the app code, that we would like to store configurations in a separate, shared repository.
We may need multiple versions of some components running side-by-side within a given environment (cluster). These versions may have different configurations.
There seem to be a lot of potential variations, and all of them have shortcomings. What is the accepted way to structure such a repository?
| There is no established standard yet, I believe. I find helm's charts too complicated to start with, especially having another unmanaged component running on the k8s cluster. This is a workflow that we follow that works quite well for a setup of 15ish microservices, and 5 different environments (devx2, staging, qa, prod).
The 2 key ideas:
Store kubernetes configurations in the same source repo that has the other build tooling. Eg: alongside the microservice source code which has the tooling for building/releasing that particular microservice.
Template the kubernetes configuration with something like jinja and render the templates according to the environment you're targeting.
The tooling is reasonably straightforward to figure out by putting together a few bash scripts or integrating with a Makefile etc.
EDIT: to answer some of the questions in the comment
The application source code repository is used as the single source of truth. So that means that if everything works as it should, changes should never be moved from the kubernetes cluster to the repository.
Changes directly on the server are prohibited in our workflow. If it ever does happen, we have to manually make sure they enter the application repository again.
Again, just want to note that the configurations stored in the source code are actually templates and use secretKeyRef quite liberally. This means that some configurations are coming in from the CI tooling as they are rendered and some are coming in from secrets that live only on the cluster (like database passwords, API tokens etc.).
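As an illustration only (the variable names and the jinja2-cli render step are assumptions, not part of our actual setup), a templated manifest could look roughly like this:
# deployment.yaml.j2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service-{{ env }}
spec:
  replicas: {{ replicas }}
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:{{ image_tag }}
# render with e.g. the jinja2-cli package and apply
jinja2 deployment.yaml.j2 staging.json | kubectl apply -f -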
| Kubernetes | 47,168,381 | 51 |
Use case:
I have a NFS directory available and I want to use it to persist data for multiple deployments & pods.
I have created a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: http://mynfs.com
    path: /server/mount/point
I want multiple deployments to be able to use this PersistentVolume, so my understanding of what is needed is that I need to create multiple PersistentVolumeClaims which will all point at this PersistentVolume.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
I believe this to create a 50MB claim on the PersistentVolume. When I run kubectl get pvc, I see:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s
I don't understand why I see 10Gi capacity, not 50Mi.
When I then change the PersistentVolumeClaim deployment yaml to create a PVC named nfs-pvc-2 I get this:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s
nfs-pvc-2 Pending 10s
PVC2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?
When I delete nfs-pvc-1, I see the same thing:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-2 Pending 10s
Again, is this normal?
What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?
| Basically you can't do what you want, as the relationship PVC <--> PV is one-to-one.
If NFS is the only storage you have available and would like multiple PV/PVC on one nfs export, use Dynamic Provisioning and a default storage class.
It's not in official K8s yet, but this one is in the incubator and I've tried it and it works well: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
This will enormously simplify your volume provisioning as you only need to take care of the PVC, and the PV will be created as a directory on the nfs export / server that you have defined.
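For example, once the provisioner is running, each claim only names the StorageClass; a sketch assuming you kept the example class name managed-nfs-storage from that repo:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi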
| Kubernetes | 44,204,223 | 51 |
I have a problem with Kubernetes running in a CentOS virtual machine in CloudStack. My pods remain in the Pending state.
I got the following error message when I print the log for a pod:
[root@kubernetes-master ~]# kubectl logs wildfly-rc-6a0fr
Error from server: Internal error occurred: Pod "wildfly-rc-6a0fr" in namespace "default" : pod is not in 'Running', 'Succeeded' or 'Failed' state - State: "Pending"
If I launch describe command on the pod, this is the result:
[root@kubernetes-master ~]# kubectl describe pod wildfly-rc-6a0fr
Name: wildfly-rc-6a0fr
Namespace: default
Image(s): jboss/wildfly
Node: kubernetes-minion1/
Start Time: Sun, 03 Apr 2016 15:00:20 +0200
Labels: name=wildfly
Status: Pending
Reason:
Message:
IP:
Replication Controllers: wildfly-rc (2/2 replicas created)
Containers:
wildfly-rc-pod:
Container ID:
Image: jboss/wildfly
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Ready: False
Restart Count: 0
Environment Variables:
Volumes:
default-token-0dci1:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-0dci1
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Pulled Container image "registry.access.redhat.com/rhel7/pod-infrastructure:latest" already present on machine
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Created Created with docker id 97c1a3ea4aa5
8m 8m 1 {kubelet kubernetes-minion1} implicitly required container POD Started Started with docker id 97c1a3ea4aa5
8m 8m 1 {kubelet kubernetes-minion1} spec.containers{wildfly-rc-pod} Pulling pulling image "jboss/wildfly"
The kubelet has some errors that I print below. Could this be because the VM has only 5GB of storage?
systemctl status -l kubelet
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since lun 2016-04-04 08:08:59 CEST; 9min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2112 (kubelet)
Memory: 39.3M
CGroup: /system.slice/kubelet.service
└─2112 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://kubernetes-master:8080 --address=0.0.0.0 --allow-privileged=false --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
apr 04 08:13:33 kubernetes-minion1 kubelet[2112]: W0404 08:13:33.877859 2112 kubelet.go:1690] Orphaned volume "167d0ead-fa29-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:13:53 kubernetes-minion1 kubelet[2112]: W0404 08:13:53.887279 2112 kubelet.go:1690] Orphaned volume "9f772358-fa2b-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: I0404 08:14:35.341994 2112 provider.go:91] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: E0404 08:14:35.397168 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-oroab_default"; Skipping pod "wildfly-rc-oroab_default"
apr 04 08:14:35 kubernetes-minion1 kubelet[2112]: E0404 08:14:35.401583 2112 pod_workers.go:113] Error syncing pod 167d0ead-fa29-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-oroab_default"
apr 04 08:14:58 kubernetes-minion1 kubelet[2112]: E0404 08:14:58.076530 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-1aimv_default"; Skipping pod "wildfly-rc-1aimv_default"
apr 04 08:14:58 kubernetes-minion1 kubelet[2112]: E0404 08:14:58.078292 2112 pod_workers.go:113] Error syncing pod 9f772358-fa2b-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-1aimv_default"
apr 04 08:15:23 kubernetes-minion1 kubelet[2112]: W0404 08:15:23.879138 2112 kubelet.go:1690] Orphaned volume "56257e55-fa2c-11e5-bddc-064278000020/default-token-0dci1" found, tearing down volume
apr 04 08:15:28 kubernetes-minion1 kubelet[2112]: E0404 08:15:28.574574 2112 manager.go:1867] Failed to create pod infra container: impossible: cannot find the mounted volumes for pod "wildfly-rc-43b0f_default"; Skipping pod "wildfly-rc-43b0f_default"
apr 04 08:15:28 kubernetes-minion1 kubelet[2112]: E0404 08:15:28.581467 2112 pod_workers.go:113] Error syncing pod 56257e55-fa2c-11e5-bddc-064278000020, skipping: impossible: cannot find the mounted volumes for pod "wildfly-rc-43b0f_default"
Could someone, kindly, help me?
| Run the command below to get the events. This will show why the pod has not been scheduled (along with all other events).
kubectl get events
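If there are many events, sorting them by time and describing the pod directly both help narrow things down:
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl describe pod wildfly-rc-6a0fr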
| Kubernetes | 36,377,784 | 51 |
I have just started with Kubernetes and I am confused about the difference between NodePort and LoadBalancer type of service.
The difference I understand is that LoadBalancer does not support UDP but apart from that whenever we create a service either Nodeport or Loadbalancer we get a service IP and port, a NodePort, and endpoints.
From Kubernetes docs:
NodePort: on top of having a cluster-internal IP, expose the service
on a port on each node of the cluster (the same port on each node).
You'll be able to contact the service on any NodeIP:NodePort
address.
LoadBalancer: on top of having a cluster-internal IP and
exposing service on a NodePort also, ask the cloud provider for a load
balancer which forwards to the Service exposed as a NodeIP:NodePort
for each Node.
So, I will always access service on NodeIP:NodePort.
My understanding is, whenever we access the node:NodePort, the kubeproxy will intercept the request and forward it to the respective pod.
The other thing mentioned about LoadBalancer is that we can have an external LB which will load-balance between the nodes. What prevents us from putting an LB in front of services created as NodePort?
I am really confused. Most of the docs or tutorials talk only about LoadBalancer service therefore I couldn't find much on internet.
| Nothing prevents you from placing an external load balancer in front of your nodes and using the NodePort option.
The LoadBalancer option is only used to additionally ask your cloud provider for a new software LB instance, automatically in the background.
I'm not up to date on which cloud providers are supported yet, but I saw it working for Compute Engine and OpenStack already.
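For example, a NodePort Service you could point any external load balancer at looks roughly like this (names and ports are placeholders); the external LB would then forward to <any-node-ip>:30080:
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080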
| Kubernetes | 34,443,138 | 51 |
I have a Kubernetes cluster running on Google Compute Engine and I would like to assign static IP addresses to my external services (type: LoadBalancer). I am unsure about whether this is possible at the moment or not. I found the following sources on that topic:
Kubernetes Service Documentation lets you define an external IP address, but it fails with cannot unmarshal object into Go value of type []v1.LoadBalancerIngress
The publicIPs field seems to let me specify external IPs, but it doesn't seem to work either
This Github issue states that what I'm trying to do is not supported yet, but will be in Kubernetes v1.1
The clusterIP field also lets me specify an IP address, but fails with "provided IP is not in the valid range"
I feel like the usage of static IPs is quite important when setting up web services. Am I missing something here? I'd be very grateful if somebody could enlighten me here!
EDIT: For clarification: I am not using Container Engine, I set up a cluster myself using the official installation instructions for Compute Engine. All IP addresses associated with my k8s services are marked as "ephemeral", which means recreating a kubernetes service may lead to a different external IP address (which is why I need them to be static).
| TL;DR: Google Container Engine running Kubernetes v1.1 supports loadBalancerIP; just mark the auto-assigned IP as static first.
Kubernetes v1.1 supports loadBalancerIP:
apiVersion: v1
kind: Service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.10.10
...
So far there isn't a really good consistent documentation on how to use it on GCE. What is sure is that this IP must first be one of your pre-allocated static IPs.
The cross-region load balancing documentation is mostly for Compute Engine and not Kubernetes/Container Engine, but it's still useful especially the part "Configure the load balancing service".
If you just create a Kubernetes LoadBalancer on GCE, it will create a forwarding rule (under Compute Engine > Network > Network load balancing > Forwarding Rules) pointing to a target pool made of the machines in your cluster (normally only those running the Pods matching the service selector). It looks like deleting a namespace doesn't nicely clean up those created rules.
Update
It is actually now supported (even though under-documented):
Check that you're running Kubernetes 1.1 or later (under GKE edit your cluster and check "Node version")
Allocate static IPs under Networking > External IP addresses, either:
Deploy once without loadBalancerIP, wait until you have an external IP allocated when you run kubectl get svc, then look that IP up in the list on that page and change it from Ephemeral to Static.
Click "Reserve a static address", regional, in the region of your cluster, attached to None (a gcloud equivalent is sketched after this list).
Edit your LoadBalancer to have loadBalancerIP=10.10.10.10 as above (adapt to the IP that was given to you by Google).
Now if you delete your LoadBalancer or even your namespace, it'll preserve that IP address upon re-deploying on that cluster.
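If you prefer the command line to the console for reserving the address, a rough gcloud equivalent (the name and region are placeholders):
gcloud compute addresses create my-static-ip --region us-central1
gcloud compute addresses describe my-static-ip --region us-central1 --format='value(address)'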
Update 2016-11-14
See also Kubernetes article describing how to set up a static IP for single or multiple domains on Kubernetes.
| Kubernetes | 32,266,053 | 51 |
I’m a mobile developer and recently adept at using containers with docker. I’m developing a container architecture for my graduate project. One of the modules of this architecture would need to be run on an android device. But I could not find information on how to run a container on an android device. It could be something simple like an alpine image with python.
Can anyone tell me if there is a possibility to run a container on an android device with docker, or even kubernetes?
| In 2021, the answer is definitely yes.
Here is a tutorial on that topic, which shows you how to run docker directly on Android, without VMs nor chroot. Note that you do need to root your phone and build a custom kernel though.
If you only want a quick look of docker running on android without getting your hands dirty, check out this comment on GitHub.
| Kubernetes | 53,527,277 | 50 |
Is there a way to use kubectl to list only the pods belonging to a deployment?
Currently, I do this to get pods:
kubectl get pods| grep hello
But it seems an overkill to get ALL the pods when I am interested to know only the pods for a given deployment. I use the output of this command to see the status of all pods, and then possibly exec into one of them.
I also tried kc get -o wide deployments hellodeployment, but it does not print the Pod names.
| There's a label on the pod matching the selector in the deployment. That's how a deployment manages its pods. For example, for the label/selector app=http-svc you can do something like this and avoid using grep and listing all the pods (this becomes useful as your number of pods becomes very large).
Here are some example command lines:
# single label
kubectl get pods -l=app=http-svc
kubectl get pods --selector=app=http-svc
# multiple labels
kubectl get pods --selector key1=value1,key2=value2
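If you don't want to hard-code the label, you can read the deployment's own selector and reuse it (this assumes a single matchLabels key named app; the deployment name is a placeholder):
kubectl get pods -l app=$(kubectl get deployment my-deployment -o jsonpath='{.spec.selector.matchLabels.app}')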
| Kubernetes | 52,957,227 | 50 |
I create a deployment which results in 4 pods existing across 2 nodes.
I then expose these pods via a service which results in the following cluster IP and pod endpoints:
Name: s-flask
......
IP: 10.110.201.8
Port: <unset> 9080/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30817/TCP
Endpoints:
192.168.251.131:5000,192.168.251.132:5000,192.168.251.134:5000 + 1 more...
If accessing the service internally via the cluster IP, the requests are balanced across both nodes and all pods, not just the pods on a single node (e.g. like access via a nodePort).
I know kubernetes uses iptables to balance requests across pods on a single node, but I can't find any documentation which explains how kubernetes balances internal service requests across multiple nodes (we don't use load balancers or ingress for internal service load balancing).
The cluster IP itself is virtual, the only way I think this can work, is if the cluster IP is round robin mapped to a service endpoint IP address, where the client would have to look up the cluster IP / service and select an endpoint IP?
| Everything you need is explained in second paragraph "Virtual IPs and service proxies" of this documentation: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
In a nutshell: currently, depending on the proxy mode, for ClusterIP it's just round robin/random. It's done by kube-proxy, which runs on each node, proxies UDP and TCP and provides load balancing.
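If you're curious, you can see those rules on any node; for the s-flask Service from the question it looks roughly like this (iptables proxy mode):
sudo iptables -t nat -L KUBE-SERVICES -n | grep s-flask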
It's better to think of kubernetes as a whole rather than specific nodes. Abstraction does its thing here.
Hope it answers your question.
| Kubernetes | 49,888,133 | 50 |
Our Kubernetes 1.6 cluster had certificates generated when the cluster was built on April 13th, 2017.
On December 13th, 2017, our cluster was upgraded to version 1.8, and new certificates were generated [apparently, an incomplete set of certificates].
On April 13th, 2018, we started seeing this message within our Kubernetes dashboard for api-server:
[authentication.go:64] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at the certificates generated on Dec 13th [apiserver-kubelet-client.crt and apiserver-kubelet-client.key], but continue to see the above error.
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at different certificates generated on Dec 13th [apiserver.crt and apiserver.key] (I honestly don't understand the difference between these 2 sets of certs/keys), but continue to see the above error.
Tried pointing client-certificate & client-key within /etc/kubernetes/kubelet.conf at non-existent files, and none of the kube* services would start, with /var/log/syslog complaining about this:
Apr 17 17:50:08 kuber01 kubelet[2422]: W0417 17:50:08.181326 2422 server.go:381] invalid kubeconfig: invalid configuration: [unable to read client-cert /tmp/this/cert/does/not/exist.crt for system:node:node01 due to open /tmp/this/cert/does/not/exist.crt: no such file or directory, unable to read client-key /tmp/this/key/does/not/exist.key for system:node:node01 due to open /tmp/this/key/does/not/exist.key: no such file or directory]
Any advice on how to overcome this error, or even troubleshoot it at a more granular level? Was considering regenerating certificates for api-server (kubeadm alpha phase certs apiserver), based on instructions within https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-certs ... but not sure if I'd be doing more damage.
Relatively new to Kubernetes, and the gentleman who set this up is not available for consult ... any help is appreciated. Thanks.
| I think you need to re-generate the apiserver certificate /etc/kubernetes/pki/apiserver.crt. You can view the current expiry date like this:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
Not Before: Dec 20 14:32:00 2017 GMT
Not After : Dec 20 14:32:00 2018 GMT
Here are the steps I used to regenerate the certificates on a v1.11.5 cluster, compiled from https://github.com/kubernetes/kubeadm/issues/581.
To check all certificate expiry dates:
find /etc/kubernetes/pki/ -type f -name "*.crt" -print|egrep -v 'ca.crt$'|xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
Renew certificate on Master node.
*) Renew certificate
mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old
kubeadm alpha phase certs apiserver --config /root/kubeadm-kubetest.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
mv /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.crt.old
mv /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.key.old
kubeadm alpha phase certs apiserver-etcd-client
mv /etc/kubernetes/pki/etcd/server.crt /etc/kubernetes/pki/etcd/server.crt.old
mv /etc/kubernetes/pki/etcd/server.key /etc/kubernetes/pki/etcd/server.key.old
kubeadm alpha phase certs etcd-server --config /root/kubeadm-kubetest.yaml
mv /etc/kubernetes/pki/etcd/healthcheck-client.crt /etc/kubernetes/pki/etcd/healthcheck-client.crt.old
mv /etc/kubernetes/pki/etcd/healthcheck-client.key /etc/kubernetes/pki/etcd/healthcheck-client.key.old
kubeadm alpha phase certs etcd-healthcheck-client --config /root/kubeadm-kubetest.yaml
mv /etc/kubernetes/pki/etcd/peer.crt /etc/kubernetes/pki/etcd/peer.crt.old
mv /etc/kubernetes/pki/etcd/peer.key /etc/kubernetes/pki/etcd/peer.key.old
kubeadm alpha phase certs etcd-peer --config /root/kubeadm-kubetest.yaml
*) Backup old configuration files
mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old
kubeadm alpha phase kubeconfig all --config /root/kubeadm-kubetest.yaml
mv $HOME/.kube/config $HOME/.kube/config.old
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
chmod 777 $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
Reboot the node and check the logs for etcd, kubeapi and kubelet.
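After the reboot, a quick way to confirm the control plane is reachable again and the new expiry date looks right:
kubectl get nodes
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate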
Note:
Remember to update the kubeconfig file used by your CI/CD jobs. If you're using the helm command, test that as well.
| Kubernetes | 49,885,636 | 50 |
Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"
For the above issue I have created "mycomp-service-process" namespace and checked the issue.
But it shows a similar message again:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"
| Creating a namespace won't, of course, solve the issue, as that is not the problem at all.
In the first error, the issue is that the default service account in the default namespace cannot get services because it does not have access to list/get services. So what you need to do is assign a role to that service account using a ClusterRoleBinding.
Following the principle of least privilege, you can first create a role that has access to list services:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["services"]
    verbs: ["get", "watch", "list"]
What the above snippet does is create a ClusterRole that can list, get and watch services. (You will have to put it in a YAML file and apply it.)
Now we can use this clusterrole to create a clusterrolebinding:
kubectl create clusterrolebinding service-reader-pod \
--clusterrole=service-reader \
--serviceaccount=default:default
In the above command, service-reader-pod is the name of the ClusterRoleBinding, and it assigns the service-reader ClusterRole to the default service account in the default namespace. Similar steps can be followed for the second error you are facing.
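You can verify the binding took effect without touching the workload:
kubectl auth can-i list services --as=system:serviceaccount:default:default -n mycomp-services-process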
In this case I created a ClusterRole and ClusterRoleBinding, but you might want to create a Role and RoleBinding instead. You can check the documentation in detail here.
| Kubernetes | 47,973,570 | 50 |
I am setting up Github Actions for a project repository.
The workflow consists of the following steps:
Building a docker image
Pushing the image to a container registry
Rollout a Kubernetes deployment.
However, I have two different Kubernetes deployments: one for development, and one for production. Hence, I have also two Github Action workflows.
The Github Action workflow for development is triggered every time a commit is pushed:
on:
  push:
    branches:
      - master
But I don't want that for my production workflow. I would need a manual trigger, like a Send to production button. I didn't see anything close to that in the docs.
Is there a way to trigger a workflow manually in Github Actions?
How can I split my development and my production workflows to achieve what I want, either on Github Actions, Docker or Kubernetes?
|
Is there a way to trigger a workflow manually in Github Actions?
You might consider, from July2020:
GitHub Actions: Manual triggers with workflow_dispatch
(Note: or multiple workflows, through the new Composite Run Steps, August 2020)
You can now create workflows that are manually triggered with the new workflow_dispatch event.
You will then see a 'Run workflow' button on the Actions tab, enabling you to easily trigger a run.
You can choose which branch the workflow is run on.
philippe adds in the comments:
One thing that's not mentioned in the documentation: the workflow must exist on the default branch for the "Run workflow" button to appear.
Once you add it there, you can continue developing the action on its own branch and the changes will take effect when run using the button
The documentation goes on:
In addition, you can optionally specify inputs, which GitHub will present as form elements in the UI. Workflow dispatch inputs are specified with the same format as action inputs.
For example:
on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'warning'
      tags:
        description: 'Test scenario tags'
The triggered workflow receives the inputs in the github.event context.
For example:
jobs:
  printInputs:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo "Log level: ${{ github.event.inputs.logLevel }}"
          echo "Tags: ${{ github.event.inputs.tags }}"
shim adds in the comments:
You can add workflow_dispatch to a workflow that also has other triggers (like on push and / or schedule)
For instance:
on:
  workflow_dispatch:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened]
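With a recent GitHub CLI you can also trigger such a workflow from a terminal (the workflow file name here is a placeholder):
gh workflow run deploy-production.yml --ref master -f logLevel=warning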
| Kubernetes | 58,933,155 | 49 |
Is there a different way than kubectl edit to delete an annotation in Kubernetes?
I do not like the interactivity of kubectl edit. I prefer something usable in a script.
| Use a minus sign (-) at the end of the annotation key in kubectl annotate.
Example:
kubectl annotate service shopping-cart prometheus.io/scrape-
This removes the annotation prometheus.io/scrape from the shopping-cart service.
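To confirm it is gone:
kubectl get service shopping-cart -o jsonpath='{.metadata.annotations}'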
| Kubernetes | 54,973,593 | 49 |
I have a Deployment object where I expose the POD ID using the Downward API. That works fine. However, I want to set up another env variable, the log path, with a reference to the POD ID. But setting that variable value to /var/log/mycompany/${POD_ID}/logs isn't working; no logs are created in the container.
I can make the entrypoint script or the app aware of the POD ID, and build up the log path, but I'd rather not do that.
| The correct syntax is to use $(FOO), as described in the documentation; the syntax you have used is "shell" syntax, which isn't the way kubernetes interpolates variables. So:
containers:
  - env:
      - name: POD_ID
        valueFrom: # etc etc
      - name: LOG_PATH
        value: /var/log/mycompany/$(POD_ID)/logs
Also please note that, as mentioned in the Docs, the variable to expand must be defined before the variable referencing it.
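For completeness, a fuller sketch of the downward-API part (assuming the "pod ID" you expose is the pod name, metadata.name; use metadata.uid if you want the UID instead):
env:
  - name: POD_ID
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: LOG_PATH
    value: /var/log/mycompany/$(POD_ID)/logs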
| Kubernetes | 49,582,349 | 49 |
I have a pod that responds to requests to /api/
I want to do a rewrite where requests to /auth/api/ go to /api/.
Using an Ingress (nginx), I thought that with the ingress.kubernetes.io/rewrite-target: annotation I could do it something like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    ingress.kubernetes.io/rewrite-target: /api
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /auth/api
            backend:
              serviceName: myapi
              servicePort: myapi-port
What's happening however is that /auth/ is being passed to the service/pod and a 404 is rightfully being thrown. I must be misunderstanding the rewrite annotation.
Is there a way to do this via k8s & ingresses?
| I don't know if this is still an issue, but since version 0.22 it seems you need to use capture groups to pass values to the rewrite-target value
From the nginx example available here
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
For your specific needs, something like this should do the trick
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /api/$2
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /auth/api(/|$)(.*)
            backend:
              serviceName: myapi
              servicePort: myapi-port
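A quick way to test the rewrite once deployed (the controller IP and the /health path are placeholders):
curl -H 'Host: api.myapp.com' http://<ingress-controller-ip>/auth/api/health
# the backend service should receive this request as /api/health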
| Kubernetes | 47,837,087 | 49 |
I have a service exposed of type=LoadBalancer and when I do a
kubectl describe services servicename,
I get this output :
Name: ser1
Namespace: default
Labels: app=online1
Selector: app=online1
Type: LoadBalancer
IP: 10.0.0.32
External IPs: 192.168.99.100
Port: <unset> 8080/TCP
NodePort: <unset> 30545/TCP
Endpoints: 172.17.0.10:8080,172.17.0.11:8080,172.17.0.8:8080 + 1 more...
Session Affinity: None
Can someone please guide on the following doubts :
1.) I can't understand what <unset> means in Port and NodePort. Also, how does it affect my service?
2.) When I want to hit a service, I should hit the service using <external-ip:NodePort> right? Then what's the use of Port?
| Port <unset> means you didn't specify a name for the port when creating the Service.
Service Yaml excerpt (note name: grpc):
spec:
  ports:
    - port: 26257
      targetPort: 26257
      name: grpc
  type: NodePort
kubectl describe services servicename output excerpt:
Type: NodePort
IP: 10.101.87.248
Port: grpc 26257/TCP
NodePort: grpc 31045/TCP
Endpoints: 10.20.12.71:26257,10.20.12.73:26257,10.20.8.81:26257
Port is the port the Service itself exposes (reachable inside the cluster at ClusterIP:Port); traffic sent there is forwarded to the targetPort on the containers, which are the actual endpoints. From outside the cluster you use NodeIP:NodePort instead.
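Using the values from your describe output (and assuming the app behind the Service speaks HTTP), that means:
# from inside the cluster (e.g. from another pod)
curl http://10.0.0.32:8080/
# from outside, via a node/external IP and the NodePort
curl http://192.168.99.100:30545/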
| Kubernetes | 42,528,409 | 49 |
I am confused about the Multi-Container Pod design patterns
(sidecar, adapter, ambassador).
What I understand is:
Sidecar : container + container(share same resource and do other functions)
Adapter : container + adapter(for checking other container's status. e.g. monitoring)
Ambassador : container + proxy(to networking outside)
But, according to Istio - Installing the Sidecar, they introduce the proxy as a sidecar pattern.
The Adapter is a container, and the Proxy is a container too.
So, my question is: what are the differences between the Sidecar pattern and the Adapter & Ambassador patterns?
Does the Sidecar pattern concept contain the Adapter & Ambassador patterns?
| First, you are right: the term sidecar container has now become a word for describing an extra container in your pod. Originally(?) it was a specific multi-container design pattern.
Multi-container design patterns
Sidecar pattern
An extra container in your pod to enhance or extend the functionality of the main container.
Ambassador pattern
A container that proxies the network connection to the main container.
Adapter pattern
A container that transforms the output of the main container.
This is taken from the original article from 2015: Patterns for Composite Containers
Summary
Your note on
But, according to Istio - Installing the Sidecar, they introduce the proxy as a sidecar pattern.
In the patterns above, both Ambassador and Adapter must in fact proxy the network connection, but they do it for different purposes. With Istio, this is done e.g. to terminate mTLS connections, collect metrics and more, to enhance your main container. So it actually is a sidecar pattern, but confusingly, as you correctly pointed out, all patterns proxy the connection - but for different purposes.
| Kubernetes | 59,451,056 | 48 |
Is it possible to generate YAML with the kubernetes kubectl command? To clarify: I'm not talking about generating YAML from existing deployments like kubectl get XXXX -o yaml, but merely about generating YAML for the very first time for a pod, service, ingress, etc.
PS: There is a way to get YAML files from the kubernetes.io site ( 1 , 2 ), but I am asking whether there is a way to generate YAML templates with kubectl only.
| There's the command create in kubectl that does the trick and replaced the run used in the past: let's imagine you want to create a Deployment running a Docker image (busybox in the example below).
# kubectl create deployment my-deployment --image=busybox --dry-run=client --output=yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-deployment
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-deployment
    spec:
      containers:
      - image: busybox
        name: busybox
        resources: {}
status: {}
Let's analyze each parameter:
my-deployment is the Deployment name you chose
--image is the Docker image you want to deploy
--dry-run=client won't execute the resource creation, used mainly for validation. Replace 'client' with 'true' for older versions of Kubernetes. Neither client nor server will actually create the resource, though server will return an error if the resource cannot be created without a dry run (ie: resource already exists). The difference is very subtle.
--output=yaml prints to standard output the YAML definition of the Deployment resource.
Obviously, you can only do this for a few default Kubernetes resource kinds:
# kubectl create
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name
According to this, you can render the template without the prior need of deploying your resource.
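A common workflow is to save the generated template, adjust it, and then apply it:
kubectl create deployment my-deployment --image=busybox --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml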
| Kubernetes | 57,696,087 | 48 |
I've recently learned about kubectl --field-selector flag, but ran into errors when trying to use it with various objects.
For example :
$ kubectl delete jobs.batch --field-selector status.succeeded==1
Error from server (BadRequest): Unable to find "batch/v1, Resource=jobs" that match label selector "", field selector "status.succeeded==1": field label "status.succeeded" not supported for batchv1.Job
According to the documentation, Supported field selectors vary by Kubernetes resource type., so I guess this behaviour was to be expected.
The annoying part is that I had to try individually each field to know if I could use them or not.
Is there any way to get all the fields supported for a given resource type / resource version / kubectl version ?
| The issue in your case is that you mistakenly used status.succeeded instead of status.successful, so the right command is:
kubectl delete jobs.batch --field-selector status.successful==1
No resources found
Regarding your question about all the fields: my suggestion is to dig into the code and search for the supported field labels in conversion.go for each API group.
Example:
Batch Jobs conversion.go
return scheme.AddFieldLabelConversionFunc(SchemeGroupVersion.WithKind("Job"),
    func(label, value string) (string, string, error) {
        switch label {
        case "metadata.name", "metadata.namespace", "status.successful":
            return label, value, nil
        default:
            return "", "", fmt.Errorf("field label %q not supported for batchv1.Job", label)
        }
    },
)
}
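If you want to discover the supported field labels yourself, grepping a checkout of kubernetes/kubernetes for these conversion functions works (exact paths may move between releases):
grep -R -A 8 'AddFieldLabelConversionFunc' pkg/apis/batch/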
| Kubernetes | 55,762,084 | 48 |
I want to upgrade the kubectl client version to 1.11.3.
I executed brew install kubernetes-cli but the version doesnt seem to be updating.
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.7", GitCommit:"0c38c362511b20a098d7cd855f1314dad92c2780", GitTreeState:"clean", BuildDate:"2018-08-20T10:09:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:06:30Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
I'm trying to get the logs for a cell by running this command.
kubectl logs -l groupname/cell=my-cell --all-containers=true
This works in my VM which has client version 1.11.3. But in my mac it gives me an error saying --all-containers=true flag is not available for kubectl logs command.
| Install specific version of kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/<specific-kubectl-version>/bin/darwin/amd64/kubectl
For your case if you want to install version v1.11.3 then replace specific-kubectl-version with v1.11.3
Then make this binary executable
chmod +x ./kubectl
Then move this binary to your PATH
sudo mv ./kubectl $(which kubectl)
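Finally, confirm the client version:
kubectl version --client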
| Kubernetes | 53,701,151 | 48 |
I'm not sure what the difference is between the CNI plugin and the Kube-proxy in Kubernetes. From what I get out of the documentation I conclude the following:
Kube-proxy is responsible for communicating with the master node and routing.
CNI provides connectivity by assigning IP addresses to pods and services, and reachability through its routing deamon.
the routing seems to be an overlapping function between the two, is that true?
Kind regards,
Charles
| OVERLAY NETWORK
Kubernetes assumes that every pod has an IP address and that you can communicate with services inside that pod by using that IP address. When I say “overlay network” this is what I mean (“the system that lets you refer to a pod by its IP address”).
All other Kubernetes networking stuff relies on the overlay networking working correctly.
There are a lot of overlay network backends (calico, flannel, weave) and the landscape is pretty confusing. But as far as I’m concerned an overlay network has 2 responsibilities:
Make sure your pods can send network requests outside your cluster
Keep a stable mapping of nodes to subnets and keep every node in your cluster updated with that mapping. Do the right thing when nodes are added & removed.
KUBE-PROXY
Just to understand kube-proxy, Here’s how Kubernetes services work! A service is a collection of pods, which each have their own IP address (like 10.1.0.3, 10.2.3.5, 10.3.5.6)
Every Kubernetes service gets an IP address (like 10.23.1.2)
kube-dns resolves Kubernetes service DNS names to IP addresses (so my-svc.my-namespace.svc.cluster.local might map to 10.23.1.2)
kube-proxy sets up iptables rules in order to do random load balancing between them.
So when you make a request to my-svc.my-namespace.svc.cluster.local, it resolves to 10.23.1.2, and then iptables rules on your local host (generated by kube-proxy) redirect it to one of 10.1.0.3 or 10.2.3.5 or 10.3.5.6 at random.
In short, overlay networks define the underlying network used for communication between the various components of kubernetes, while kube-proxy is a tool that generates the iptables magic letting you connect to any of the pods (using Services) in kubernetes, no matter which node that pod is on.
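To see both halves in action, you can resolve a service name from inside the cluster (busybox:1.28 is suggested because nslookup is flaky in newer busybox images; the service name is the example from above):
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup my-svc.my-namespace.svc.cluster.local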
Parts of this answer were taken from this blog:
https://jvns.ca/blog/2017/10/10/operating-a-kubernetes-network/
Hope this gives you brief idea about kubernetes networking.
| Kubernetes | 53,534,553 | 48 |
I'm just getting started with kubernetes and setting up a cluster on AWS using kops. In many of the examples I read (and try), there will be commands like:
kubectl run my-app --image=mycompany/myapp:latest --replicas=1 --port=8080
kubectl expose deployment my=app --port=80 --type=LoadBalancer
This seems to do several things behind the scenes, and I can view the manifest files created using kubectl edit deployment, and so forth. However, I see many examples where people are creating the manifest files by hand, and using commands like kubectl create -f or kubectl apply -f
Am I correct in assuming that both approaches accomplish the same goals, but that by creating the manifest files yourself, you have a finer grain of control?
Would I then have to be creating Service, ReplicationController, and Pod specs myself?
Lastly, if you create the manifest files yourself, how do people generally structure their projects as far as storing these files? Are they simply in a directory alongside the project they are deploying?
| The fundamental question is how to apply all of the K8s objects into the k8s cluster. There are several ways to do this job.
Using Generators (Run, Expose)
Using Imperative way (Create)
Using Declarative way (Apply)
All of the above ways have a different purpose and simplicity. For instance, if you want to quickly check whether the container is working as you desired, then you might use generators.
If you want to version control the k8s objects, then it's better to use the declarative way, which helps us determine the accuracy of data in the k8s objects.
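Roughly, the two styles look like this for the example from the question:
# imperative / generator: quick, nothing stored in git
kubectl create deployment my-app --image=mycompany/myapp:latest
# declarative: the manifest lives in version control and apply reconciles changes
kubectl apply -f deployment.yaml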
Deployment, ReplicaSet and Pods are different layers which solve different problems. All of these concepts provide flexibility to k8s.
Pods: It makes sure that related containers are together and provide efficiency.
ReplicaSet: It makes sure that k8s cluster has desirable replicas of the pods
Deployment: It makes sure that you can have different version of Pods and provide the capability to rollback to the previous version
Lastly, it depends on your use case how you want to use these concepts or methodologies. It's not about which is good or which is bad.
| Kubernetes | 48,015,637 | 48 |
How do I get a pod's name from its IP address? What's the magic incantation of kubectl + sed/awk/grep/etc regardless of where kubectl is invoked?
| Example:
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
alpine-3835730047-ggn2v 1/1 Running 0 5d 10.22.19.69 ip-10-35-80-221.ec2.internal
get pod name by IP
kubectl get --all-namespaces --output json pods | jq '.items[] | select(.status.podIP=="10.22.19.69")' | jq .metadata.name
"alpine-3835730047-ggn2v"
get container name by IP
kubectl get --all-namespaces --output json pods | jq '.items[] | select(.status.podIP=="10.22.19.69")' | jq .spec.containers[].name
"alpine"
| Kubernetes | 41,563,021 | 48 |
in a kubernetes Deployment yaml file is there a simple way to run multiple commands in the postStart hook of a container?
I'm trying to do something like this:
lifecycle:
  postStart:
    exec:
      command: ["/bin/cp", "/webapps/myapp.war", "/apps/"]
      command: ["/bin/mkdir", "-p", "/conf/myapp"]
      command: ["touch", "/conf/myapp/ready.txt"]
But it doesn't work.
(looks like only the last command is executed)
I know I could embed a script in the container image and simply call it there... But I would like to be able to customize those commands in the yaml file without touching the container image.
thanks
| Only one command is allowed, but you can use sh -c like this:
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          if [ -s /var/www/mybb/inc/config.php ]; then
            rm -rf /var/www/mybb/install;
          fi;
          if [ ! -f /var/www/mybb/index.php ]; then
            cp -rp /originroot/var/www/mybb/. /var/www/mybb/;
          fi
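Applied to the commands from the question, that would look roughly like this:
lifecycle:
  postStart:
    exec:
      command:
        - "sh"
        - "-c"
        - >
          cp /webapps/myapp.war /apps/ &&
          mkdir -p /conf/myapp &&
          touch /conf/myapp/ready.txt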
| Kubernetes | 39,436,845 | 48 |
I know clearly how the docker option --net=container:NAME_or_ID is used, and I have also read the Kubernetes source code for how a container is configured to use the network of the InfraContainer, so I think the only work the process in the gcr.io/google_containers/pause:0.8.0 container does is "pause"; it will never do any complex work like "receiving", "sending" or "routing".
But I am not sure about it because I cannot find the Dockerfile of gcr.io/google_containers/pause:0.8.0, so I need someone who knows it well to tell me the truth, thanks!
| In Kubernetes, each pod has an IP and within a pod there exists a so called infrastructure container, which is the first container that the Kubelet instantiates and it acquires the pod’s IP and sets up the network namespace. All the other containers in the pod then join the infra container’s network and IPC namespace. The infra container has network bridge mode enabled and all the other containers in the pod share its namespace via container mode. The initial process that runs in the infra container does effectively nothing since its sole purpose is to act as the home for the namespaces.
| Kubernetes | 33,472,741 | 48 |
I have built a 4 node kubernetes cluster running multi-container pods all running on CoreOS. The images come from public and private repositories. Right now I have to log into each node and manually pull down the images each time I update them. I would like be able to pull them automatically.
I have tried running docker login on each server and putting the .dockercfg file in /root and /core
I have also done the above with the .docker/config.json
I have added secret to the kube master and added imagePullSecrets:
name: docker.io to the Pod configuration file.
When I create the pod i get the error message Error:
image <user/image>:latest not found
If I log in and run docker pull it will pull the image. I have tried this using docker.io and quay.io.
| To add to what @rob said, as of docker 1.7, the use of .dockercfg has been deprecated and they now use a ~/.docker/config.json file. There is support for this type of secret in kube 1.1, but you must create it using different keys/type configuration in the yaml:
First, base64 encode your ~/.docker/config.json:
cat ~/.docker/config.json | base64 -w0
Note that the base64 encoding should appear on a single line so with -w0 we disable the wrapping.
Next, create a yaml file:
my-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
-
$ kubectl create -f my-secret.yaml && kubectl get secrets
NAME TYPE DATA
default-token-olob7 kubernetes.io/service-account-token 2
registrypullsecret kubernetes.io/dockerconfigjson 1
Then, in your pod's yaml you need to reference registrypullsecret or create a replication controller:
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
    - name: private
      image: yourusername/privateimage:version
  imagePullSecrets:
    - name: registrypullsecret
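As an aside, kubectl can also build this secret for you and avoid the manual base64 step (standard flags, shown with placeholder credentials):
kubectl create secret docker-registry registrypullsecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<password> --docker-email=<email>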
| Kubernetes | 32,726,923 | 48 |
Based on the docs that I've read, there are 3 methods of patching:
patches
patchesStrategicMerge
patchesJson6902.
The difference between patchesStrategicMerge and patchesJson6902 is obvious. patchesStrategicMerge requires a duplicate structure of the kubernetes resource to identify the base resource that is being patched followed by the modified portion of the spec to denote what gets changed (or deleted).
patchesJson6902 defines a 'target' attribute used to specify the kubernetes resource with a 'path' attribute that specifies which attribute in the resource gets modified, added, or removed.
However, what is not clear to me is the difference between patches and patchesJson6902. They seem to be very similar in nature. Both specify a 'target' attribute and operation objects which describes what gets modified.
The only difference I've noticed is that patches does not require a 'group' attribute while patchesJson6902 does; The reason for this is unknown.
So why the difference between the two? How do I determine which one to use?
| The explanation for this is here.
To summarize, patchesJson6902 is an older keyword which can only match one resource via target (no wildcards), and accepts only Group-version-kind (GVK), namespace, and name.
The patches directive is newer and accepts more elements (annotation selector and label selector as well). In addition, namespace and name can be regexes. The target for patches can match more than one resource, all of which will be patched.
In addition, with patches, it will attempt to parse patch files as a Json6902 patch, and if that does not work, it will fall back to attempting the patch as a strategic merge. Therefore, in many cases patches can obviate the need of using patchesStrategicMerge as well.
Overall, it seems as if patches should work pretty universally for new projects.
UPDATE: Indeed, both patchesJson6902 and patchesStrategicMerge have been deprecated in v5.0.0 in favor of patches.
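A minimal kustomization.yaml using the newer form looks roughly like this (the file and resource names are placeholders):
resources:
  - deployment.yaml
patches:
  - path: increase-replicas.yaml
    target:
      kind: Deployment
      name: my-app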
Upstream documentation for these key words:
patches
patchesJson6902
patchesStrategicMerge
| Kubernetes | 63,604,579 | 47 |
What's the best way to list out the environment variables in a kubernetes pod?
(Similar to this, but for Kube, not Docker.)
| kubectl exec -it <pod_name> -- env
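For a multi-container pod, or to look for one specific variable, the usual variations apply:
kubectl exec -it <pod_name> -c <container_name> -- env
kubectl exec <pod_name> -- env | grep <VAR_NAME>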
| Kubernetes | 59,198,188 | 47 |
I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on Digital Oceans. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: testit-01-deployment
spec:
replicas: 4
#number of replicas generated
selector:
#assigns labels to the pods for future selection
matchLabels:
app: testit
version: v01
template:
metadata:
Labels:
app: testit
version: v01
spec:
containers:
-name: testit-container
image: teejayfamo/testit
ports:
-containerPort: 80
I ran this line on cmd in the folder container:
kubectl apply -f deployment.yaml --validate=false
Error from server (BadRequest): error when creating "deployment.yaml":
Deployment in version "v1" cannot be handled as a Deployment:
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers":{"-name":"t|..., bigger context
...|:"testit","version":"v01"}},"spec":{"containers":{"-name":"testit-container","image":"teejayfamo/tes|...
I couldn't even get any information on this from my search. I can't just get the deployment created. Pls, who understands and can put me through?
| Since this is the top result of the search, I thought I should add another case where this can occur. In my case, it happened because a numeric env var was not wrapped in double quotes. The log did provide a subtle hint, but it was not very helpful.
Log
..., bigger context ...|c-server-service"},{"name":"SERVER_PORT","value":80}]
Env variable - the value of SERVER_PORT needs to be in double quotes.
env:
  - name: SERVER_HOST
    value: grpc-server-service
  - name: SERVER_PORT
    value: "80"
Kubernetes issue for reference.
| Kubernetes | 57,233,686 | 47 |
How do I force delete Namespaces stuck in Terminating?
Steps to recreate:
Apply this YAML
apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
finalizers:
- foregroundDeletion
kubectl delete ns delete-me
It is not possible to delete delete-me.
The only workaround I've found is to destroy and recreate the entire cluster.
Things I've tried:
None of these work or modify the Namespace. After any of these the problematic finalizer still exists.
Edit the YAML and kubectl apply
Apply:
apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
finalizers:
$ kubectl apply -f tmp.yaml
namespace/delete-me configured
The command finishes with no error, but the Namespace is not updated.
The below YAML has the same result:
apiVersion: v1
kind: Namespace
metadata:
name: delete-me
spec:
kubectl edit
kubectl edit ns delete-me, and remove the finalizer. Ditto removing the list entirely. Ditto removing spec. Ditto replacing finalizers with an empty list.
$ kubectl edit ns delete-me
namespace/delete-me edited
This shows no error message but does not update the Namespace. kubectl editing the object again shows the finalizer still there.
kubectl proxy &
kubectl proxy &
curl -k -H "Content-Type: application/yaml" -X PUT --data-binary @tmp.yaml http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize
As above, this exits successfully but does nothing.
Force Delete
kubectl delete ns delete-me --force --grace-period=0
This actually results in an error:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (Conflict): Operation cannot be fulfilled on namespaces "delete-me": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
However, it doesn't actually do anything.
Wait a long time
In the test cluster I set up to debug this issue, I've been waiting over a week. Even if the Namespace might eventually decide to be deleted, I need it to be deleted faster than a week.
Make sure the Namespace is empty
The Namespace is empty.
$ kubectl get -n delete-me all
No resources found.
etcdctl
$ etcdctl --endpoint=http://127.0.0.1:8001 rm /namespaces/delete-me
Error: 0: () [0]
I'm pretty sure that's an error, but I have no idea how to interpret that. It also doesn't work. Also tried with --dir and -r.
ctron/kill-kube-ns
There is a script for force deleting Namespaces. This also does not work.
$ ./kill-kube-ns delete-me
Killed namespace: delete-me
$ kubectl get ns delete-me
NAME STATUS AGE
delete-me Terminating 1h
POSTing the edited resource to /finalize
Returns a 405. I'm not sure if this is the canonical way to POST to /finalize though.
Links
This appears to be a recurring problem and none of these resources helped.
Kubernetes bug
| The kubectl proxy attempt is almost correct, but not quite. It's possible that using JSON instead of YAML does the trick, but I'm not certain.
The JSON with an empty finalizers list:
~$ cat ns.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "delete-me"
  },
  "spec": {
    "finalizers": []
  }
}
Use curl to PUT the object without the problematic finalizer.
~$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @ns.json http://127.0.0.1:8007/api/v1/namespaces/delete-me/finalize
{
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": "delete-me",
"selfLink": "/api/v1/namespaces/delete-me/finalize",
"uid": "0df02f91-6782-11e9-8beb-42010a800137",
"resourceVersion": "39047",
"creationTimestamp": "2019-04-25T17:46:28Z",
"deletionTimestamp": "2019-04-25T17:46:31Z",
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"delete-me\"},\"spec\":{\"finalizers\":[\"foregroundDeletion\"]}}\n"
}
},
"spec": {
},
"status": {
"phase": "Terminating"
}
}
The Namespace is deleted!
~$ kubectl get ns delete-me
Error from server (NotFound): namespaces "delete-me" not found
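The same fix as a one-liner, if you have jq and kubectl proxy running on port 8001:
kubectl get namespace delete-me -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize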
| Kubernetes | 55,853,312 | 47 |
I have a couple of namespaces - assume NS1 and NS2. I have serviceaccounts created in those - sa1 in NS1 and sa2 in NS2. I have created roles and rolebindings for sa1 to do stuff within NS1 and sa2 within NS2.
What I want is to give sa1 certain access within NS2 (say, only a Pod Reader role).
I am wondering if that's possible or not.
| You can simply reference a ServiceAccount from another namespace in the RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
namespace: ns2
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-ns1
namespace: ns2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-reader
subjects:
- kind: ServiceAccount
name: ns1-service-account
namespace: ns1
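To check that the binding behaves as expected, kubectl auth can-i can impersonate the service account (the names follow the manifests above):
kubectl auth can-i get pods -n ns2 \
  --as=system:serviceaccount:ns1:ns1-service-account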
| Kubernetes | 53,960,516 | 47 |
In Kubernetes cronjobs, it is stated in the limitations section that
Jobs may fail to run if the CronJob controller is not running or broken for a span of time from before the start time of the CronJob to start time plus startingDeadlineSeconds, or if the span covers multiple start times and concurrencyPolicy does not allow concurrency.
What I understand from this is that if the startingDeadlineSeconds is set to 10 and the cronjob couldn't start for some reason at its scheduled time, then it can still be attempted to start again as long as those 10 seconds haven't passed, however, after the 10 seconds, it for sure won't be started, is this correct?
Also, if I have concurrencyPolicy set to Forbid, does K8s count it as a fail if a cronjob tries to be scheduled, when there is one already running?
| After investigating the code base of the Kubernetes repo, this is how the CronJob controller works:
The CronJob controller checks, every 10 seconds, the list of cronjobs in the given Kubernetes client.
For every CronJob, it checks how many schedules it missed in the duration from the lastScheduleTime till now. If there are more than 100 missed schedules, then it doesn't start the job and records the event:
"FailedNeedsStart", "Cannot determine if job needs to be started. Too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew."
It is important to note that if the field startingDeadlineSeconds is set (not nil), it will count how many missed jobs occurred from the value of startingDeadlineSeconds till now. For example, if startingDeadlineSeconds = 200, it will count how many missed jobs occurred in the last 200 seconds. The exact implementation of counting how many missed schedules can be found here.
In case there are not more than 100 missed schedules from the previous step, the CronJob controller will check whether the time now is not after its scheduledTime + startingDeadlineSeconds, i.e. that it's not too late to start the job (the deadline has not passed). If it isn't too late, the CronJob controller will continue attempting to start the job. However, if it is already too late, then it doesn't start the job and records the event:
"Missed starting window for {cronjob name}. Missed scheduled time to start a job {scheduledTime}"
It is also important to note that if the field startingDeadlineSeconds is not set, then there is no deadline at all. This means the CronJob controller will attempt to start the job without checking whether it is too late or not.
Therefore to answer the questions above:
1. If the startingDeadlineSeconds is set to 10 and the cronjob couldn't start for some reason at its scheduled time, then it can still be attempted to start again as long as those 10 seconds haven't passed, however, after the 10 seconds, it for sure won't be started, is this correct?
The CronJob controller will attempt to start the job, and it will be successfully scheduled if the 10 seconds after its scheduled time haven't passed yet. However, if the deadline has passed, it won't be started this run, and it will be counted as a missed schedule in later executions.
2. If I have concurrencyPolicy set to Forbid, does K8s count it as a fail if a cronjob tries to be scheduled, when there is one already running?
Yes, it will be counted as a missed schedule, since missed schedules are calculated as stated above in point 2.
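For reference, a minimal CronJob sketch that sets both fields discussed above; the schedule, image and names are only illustrative, and the apiVersion is batch/v1beta1 on clusters older than 1.21:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cron
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 10
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job
            image: busybox
            command: ["date"]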
| Kubernetes | 51,065,538 | 47 |
When processing a rolling update with database migrations, how does kubernetes handle this?
For instance - I have an app that gets updated from app-v1 to app-v2, which includes a migration step to alter an existing table. So this would mean it requires me to run something like db:migrate for a Rails app once deployed.
When a rolling deployment takes place on a 3-replica set, it will deploy from one pod to another, potentially allowing pods that don't have the new version of the app to break.
Although this scenario is not something that happens very often, it's quite possible that it would. I would like to learn about the best/recommended approaches for this scenario.
| One way to prevent an old version from breaking is to split a migration into multiple steps.
E.g. you want to rename a column in the database. Renaming the column directly would break old versions of the app. This can be split into multiple steps:
Add a db migration that adds the new column
Change the app so that all writes go to both the old and the new column
Run a task that copies all values from the old to the new column
Change the app so that it reads from the new column
Add a migration that removes the old column
This is unfortunately quite a hassle, but prevents having a downtime with a maintenance page up.
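Concretely, for the column-rename example the database-side steps 1, 3 and 5 might look like this; shown as raw SQL via psql rather than Rails migrations, and the table/column names are made up:
# step 1: add the new column (runs before the new app version needs it)
psql -c 'ALTER TABLE users ADD COLUMN full_name text;'
# step 3: backfill the new column from the old one
psql -c 'UPDATE users SET full_name = name WHERE full_name IS NULL;'
# step 5: drop the old column once no running version reads it
psql -c 'ALTER TABLE users DROP COLUMN name;'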
| Kubernetes | 48,877,182 | 47 |
How do you find the cluster/service CIDR for a Kubernetes cluster, once it is already running?
I know for Minikube, it is 10.0.0.1/24.
For GKE, you can find out via
gcloud container clusters describe XXXXXXX --zone=XXXXXX |
grep -e clusterIpv4Cidr -e servicesIpv4Cidr
But how do you find out on a generic Kubernetes cluster, particularly via kubectl?
| I spent hours searching for a generic way to do this. I gave up searching and wrote my own. As of Kubernetes 1.18, this method works across cloud providers, beyond just GKE.
SVCRANGE=$(echo '{"apiVersion":"v1","kind":"Service","metadata":{"name":"tst"},"spec":{"clusterIP":"1.1.1.1","ports":[{"port":443}]}}' | kubectl apply -f - 2>&1 | sed 's/.*valid IPs is //')
echo $SVCRANGE
172.21.0.0/16
This one-liner works by feeding an invalid service cluster IP into kubectl apply and parsing the error output, which provides the service CIDR information.
| Kubernetes | 44,190,607 | 47 |
I'm writing a shell script which needs to log in to a Kubernetes pod and execute a series of commands inside it.
Below is my sample_script.sh:
kubectl exec octavia-api-worker-pod-test -c octavia-api bash
unset http_proxy https_proxy
mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig
/usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head
After running this script, I'm not getting any output.
Any help will be greatly appreciated
| Are you running all these commands as a single line command? First of all, there's no ; or && between those commands. So if you paste it as a multi-line script to your terminal, likely it will get executed locally.
Second, to tell bash to execute something, you need: bash -c "command".
Try running this:
$ kubectl exec POD_NAME -- bash -c "date && echo 1"
Wed Apr 19 19:29:25 UTC 2017
1
You can make it multiline like this:
$ kubectl exec POD_NAME -- bash -c "date && \
echo 1 && \
echo 2"
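Applied to the script in the question, that could become something like the following (pod and container names taken from the question):
kubectl exec octavia-api-worker-pod-test -c octavia-api -- bash -c \
  "unset http_proxy https_proxy && \
   mv /usr/local/etc/octavia/octavia.conf /usr/local/etc/octavia/octavia.conf-orig && \
   /usr/local/bin/octavia-db-manage --config-file /usr/local/etc/octavia/octavia.conf upgrade head"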
| Kubernetes | 43,499,313 | 47 |
I've created the persistent volume (EBS 10G) and the corresponding persistent volume claim first. But when I try to deploy the postgresql pods as below (yaml file):
I receive the following errors from the pod:
initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.
Why the pod can't use this path? I've tried the same tests on minikube. I didn't meet any problem.
I tried changing the volume mount directory path to "/var/lib/test/data", and the pods ran. I created a new table and some data in it, and then killed this pod. Kubernetes created a new pod, but the new one didn't preserve the previous data and table.
So what's the way to correctly mount a postgresql volume using Aws EBS in Kubernetes, so that the recreated pods can reuse the initial database stored in EBS?
|
So what's the way to correctly mount a postgresql volume using Aws EBS
You are on the right path...
The error you get is because you want to use the root folder of the mounted volume / as the postgresql data dir, and postgresql complains that this is not best practice since the folder is not empty and already contains some data (namely the lost+found directory).
It is far better to locate the data dir in a separate empty subfolder (/postgres for example) and give postgresql a clean slate when creating its file structure. You didn't get the same thing on minikube since you most probably mounted a host folder that didn't have anything inside (was empty) and didn't trigger such a complaint.
To do so, you would need an initially empty subPath of your volume (an empty /postgres subfolder on your PV, for example) mounted to the appropriate mount point (/var/lib/postgresql/data) in your pod. Note that you can give the subPath and the mount point's end folder the same name; they are named differently here just as an example, where the test-db-volume/postgres folder would be mounted in the pod at /var/lib/postgresql/data:
...
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: test-db-volume
subPath: postgres
...
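For completeness, the corresponding volume definition in the pod spec would reference your claim, something along these lines (the claim name is illustrative):
volumes:
- name: test-db-volume
  persistentVolumeClaim:
    claimName: test-db-claim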
| Kubernetes | 51,168,558 | 46 |
I install the latest version of Kubernetes with the following command on Raspberry PI 3 running Raspbian Stretch.
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
Currently this will install v1.10.0.
How can I install a specific version of Kubernetes? Let's say v1.9.6.
| To install a specific version of the package, it is enough to specify it in the apt-get install command:
apt-get install -qy kubeadm=<version>
But in the current case the kubectl and kubelet packages are installed as dependencies when we install kubeadm, so all three packages should be installed with a specific version:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubelet=<version> kubectl=<version> kubeadm=<version>
where the available values for <version> can be listed with:
curl -s https://packages.cloud.google.com/apt/dists/kubernetes-xenial/main/binary-amd64/Packages | grep Version | awk '{print $2}'
For your particular case it is:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubelet=1.9.6-00 kubectl=1.9.6-00 kubeadm=1.9.6-00
| Kubernetes | 49,721,708 | 46 |
I am trying to reach my k8s master from my workstation. I can access the master from the LAN fine but not from my workstation. The error message is:
% kubectl --context=employee-context get pods
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 10.161.233.80, not 114.215.201.87
What can I do to add 114.215.201.87 to the certificate? Do I need to remove my old cluster ca.crt, recreate it, restart the whole cluster and then re-sign the client certificate? I have deployed my cluster with kubeadm and I am not sure how to do these steps manually.
| One option is to tell kubectl that you don't want the certificate to be validated. Obviously this brings up security issues but I guess you are only testing so here you go:
kubectl --insecure-skip-tls-verify --context=employee-context get pods
The better option is to fix the certificate. The easiest way is to reinitialize the cluster by running kubeadm reset on all nodes including the master, and then do
kubeadm init --apiserver-cert-extra-sans=114.215.201.87
It's also possible to fix that certificate without wiping everything, but that's a bit more tricky. Execute something like this on the master as root:
rm /etc/kubernetes/pki/apiserver.*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=10.161.233.80,114.215.201.87
docker rm `docker ps -q -f 'name=k8s_kube-apiserver*'`
systemctl restart kubelet
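Afterwards you can confirm the new SAN is present in the regenerated certificate with a standard openssl check:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'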
| Kubernetes | 46,360,361 | 46 |
I have a running pod and I want to change one of it's container's environment variable and made it work immediately. Can I achieve that? If I can, how to do that?
| Simply put and in kube terms, you can not.
The environment for a Linux process is established at process startup, and there are certainly no kube tools that can achieve such a goal.
For example, if you make a change to your Deployment (I assume you use it to create pods) it will roll the underlying pods.
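For example, one way to change a variable and let the Deployment roll the pods for you; the deployment and variable names here are only placeholders:
kubectl set env deployment/my-app MY_VAR=new-value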
Now, that said, there is a really hacky solution reported under Is there a way to change the environment variables of another process in Unix? that involves using GDB
Also, remember that even if you could do that, there is still application logic that would need to watch for such changes instead of, as is usually the case now, just evaluating configuration from envs during startup.
| Kubernetes | 45,050,050 | 46 |
Can one store a binary file in a Kubernetes ConfigMap and then later read the same content from a volume that mounts this ConfigMap? For example, if directory /etc/mycompany/myapp/config contains binary file keystore.jks, will
kubectl create configmap myapp-config --from-file=/etc/mycompany/myapp/config
include file keystore.jks in ConfigMap myapp-config that can later be mapped to a volume, mounted into a container, and read as a binary file?
For example, given the following pod spec, should keystore.jks be available to myapp at /etc/mycompany/myapp/config/keystore.jks?
apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- name: myapp
image: mycompany/myapp
volumeMounts:
- name: myapp-config
mountPath: /etc/mycompany/myapp/config
volumes:
- name: myapp-config
configMap:
name: myapp-config
Kubernetes version details:
derek@derek-HP-EliteOne-800-G1-AiO:~/Documents/platinum/fix/brvm$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-08-26T18:13:23Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6+coreos.0", GitCommit:"f6f0055b8e503cbe5fb7b6f1a2ee37d0f160c1cd", GitTreeState:"clean", BuildDate:"2016-08-29T17:01:01Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
| Binary data in ConfigMaps has been supported since Kubernetes version 1.10.0. From the release notes:
ConfigMap objects now support binary data via a new binaryData field. When using kubectl create configmap --from-file, files containing non-UTF8 data will be placed in this new field in order to preserve the non-UTF8 data. Note that kubectl's --append-hash feature doesn't take binaryData into account. Use of this feature requires 1.10+ apiserver and kubelets. (#57938, @dims)
See the changelog for more details: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.10.md#apps
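For illustration, a ConfigMap carrying the keystore from the question then ends up with the file under binaryData rather than data, roughly like this (the value is the base64-encoded file content, truncated here as a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
binaryData:
  keystore.jks: MIIK...   # base64 of the original binary file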
| Kubernetes | 39,420,102 | 46 |
I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files along with values.yaml and chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: abc
namespace: xyz
labels:
app: abc
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: 3
template:
spec:
containers:
- name: abc
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
ports:
-
containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
name: abc
labels:
app.kubernetes.io/managed-by: {{ .Release.Service }}
namespace: xyz
annotations:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
ports:
- name: https
protocol: TCP
port: 443
targetPort: 8080
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: ClusterIP
selector:
app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: "haproxy-ingress"
namespace: xyz
labels:
app.kubernetes.io/managed-by: {{ .Release.Service }}
annotations:
kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed setting app.kubernetes.io/managed-by, but I still keep getting an error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
| The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a somewhat longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produces unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
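A sketch of what that could look like for the Service from the error message; the release name (abc) and namespace (default) are taken from the error text, so adjust them if your release differs:
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm
kubectl -n xyz annotate service abc \
  meta.helm.sh/release-name=abc \
  meta.helm.sh/release-namespace=default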
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in different Helm chart providers (Bitnami, Nginx ingress controller, External-Dns for example) - the combination of the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: there are some CD tools, like ArgoCD, that automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete old resources.
This might apply in your specific case, where the old resources might not be needed anymore.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by helm?
No.
First of all, as mentioned before, those labels are not specific to Helm and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following standard labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them to the _helpers.tpl file and let all resources include them:
labels: {{ include "my-chart.labels" . | nindent 4 }}
| Kubernetes | 62,964,532 | 45 |
I need to loop through a list of instances and create 1 stateful set for every instance. However, inside range I then limit myself to the scope of that loop. I need to access some global values in my statefulset.
I've solved it by just putting all global objects I need in an env variable but... this seems very hacky.
What is the correct way to loop through ranges while still being able to reference global objects?
Example of my loop
{{- $values := .Values -}}
{{- $release := .Release -}}
{{- range .Values.nodes }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ $release.Name }} <-- Global Scope
labels:
.
.
.
env:
- name: IP_ADDRESS
value: {{ .ip_address }} <-- From range scope
.
.
.
{{- end }}
Example of values
# Global
image:
repository: ..ecr.....
# Instances
nodes:
- node1:
name: node-1
iP: 1.1.1.1
- node2:
name: node-2
iP: 1.1.1.1
| When entering a loop block you lose your global context when using ".". You can access the global context by using "$." instead.
As written in the Helm docs -
there is one variable that is always global - $ - this variable will always point to the root context. This can be very useful when you are looping in a range and need to know the chart's release name.
In your example, using this would look something like:
{{- range .Values.nodes }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ $.Release.Name }}
labels:
.
.
.
env:
- name: IP_ADDRESS
value: {{ .ip_address }}
.
.
.
{{- end }}
| Kubernetes | 55,213,545 | 45 |
At present I am creating a configmap from the file config.json by executing:
kubectl create configmap jksconfig --from-file=config.json
I would like the ConfigMap to be created as part of the deployment, and tried to do this:
apiVersion: v1
kind: ConfigMap
metadata:
name: jksconfig
data:
config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
But it doesn't seem to work.
---UPDATE---
When I do a helm install dry run:
# Source: mychartv2/templates/jks-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: jksconfig
data:
config.json: |
Note: I am using minikube as my kubernetes cluster
| Your config.json file should be inside your mychart/ directory, not inside mychart/templates
Chart Template Guide
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
config.json: |-
{{ .Files.Get "config.json" | indent 4}}
config.json
{
"val": "key"
}
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '52091'
[debug] SERVER: "127.0.0.1:52091"
...
NAME: dining-saola
REVISION: 1
RELEASED: Fri Nov 23 15:06:17 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}
...
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: dining-saola-configmap
data:
config.json: |-
{
"val": "key"
}
EDIT:
But I want the values in the config.json file to be taken from values.yaml. Is that possible?
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
config.json: |-
{
{{- range $key, $val := .Values.json }}
{{ $key | quote | indent 6}}: {{ $val | quote }}
{{- end}}
}
values.yaml
json:
key1: val1
key2: val2
key3: val3
helm install --dry-run --debug mychart
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mangy-hare-configmap
data:
config.json: |-
{
"key1": "val1"
"key2": "val2"
"key3": "val3"
}
| Kubernetes | 53,429,486 | 45 |
I am using kubectl with bash completion, but I prefer to use a shorter alias for kubectl such as ks. What changes do I need to make to get bash completion to work with the alias ks?
| from the official docs
# after installing bash completion
alias k=kubectl
complete -F __start_kubectl k
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#bash
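For the ks alias from the question, the same pattern should apply:
alias ks=kubectl
complete -F __start_kubectl ks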
| Kubernetes | 52,905,811 | 45 |
When I try any kubectl command, it always returns:
Unable to connect to the server: EOF
I followed these tutorials:
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
But they have not helped me. According to the first link, by default, kubectl configuration is located at
~/.kube/config
But in that path I don't have anything. I don't know if this is causing the issue.
Another thing: when I try to check the kubectl configuration:
M:.kube candres$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: EOF
M:.kube candres$ kubectl cluster-info dump
Unable to connect to the server: EOF
The versions I have installed are:
Kubernetes - kubectl
M:.kube candres$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"X", GitTreeState:"clean", BuildDate:"2018-02-09T21:51:06Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: EOF
Minikube
M:.kube candres$ minikube version
minikube version: v0.25.0
Docker:
M:.kube candres$ docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: X
Built: Wed Dec 27 20:03:51 2017
OS/Arch: darwin/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: X
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
Does anyone know how to resolve this?
| After Minikube is started, kubectl is configured automatically.
minikube start
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.99.100:8443.
Kubectl is now configured to use the cluster.
You can verify and validate the cluster and context with the following commands.
kubectl config view
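If kubectl still points at http://localhost:8080 after that, checking the active context and the cluster with standard commands can help narrow it down:
kubectl config current-context
kubectl cluster-info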
| Kubernetes | 48,928,330 | 45 |
I'm using kubectl cp to copy a jar file from my local file system into the home directory of a pod in my minikube environment. However, the only way I can confirm that the copy succeeded is to issue a new kubectl cp command to copy the file back to a temp directory and compare the checksums. Is there a way to view the copied files directly?
| You can execute commands in a container using the kubectl exec command.
For example:
to check files in any folder:
kubectl exec <pod_name> -- ls -la /
or to calculate md5sum of any file:
kubectl exec <pod_name> -- md5sum /some_file
| Kubernetes | 48,084,476 | 45 |
Say I have my-namespace -> my-pod -> my-container,
and I have a file located at my-container:/opt/tomcat/logs/catalina.2017-05-02.log. I have applied the below command to copy the file, but it isn't working:
kubectl cp my-namepace/my-pod:/opt/tomcat/logs/catalina.2017-05-02.log -c my-container .
Note: I have the tar binary on my container
Error:
tar: Removing leading `/' from member names
error: open .: is a directory
| What you are asking kubectl to do is copy the file catalina.2017-05-02.log to the current location (.), but that is a directory. The error is stating that you cannot give the copied file the name of a directory.
Try giving the copied version of the file a name:
kubectl cp my-namepace/my-pod:/opt/tomcat/logs/catalina.2017-05-02.log -c my-container ./catalina.2017-05-02.log
| Kubernetes | 43,732,342 | 45 |