question | answer | tag | question_id | score
---|---|---|---|---|
I am getting the below error while trying to apply a patch:
core@dgoutam22-1-coreos-5760 ~ $ kubectl apply -f ads-central-configuration.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Error from server (Conflict): error when applying patch:
{"data":{"default":"{\"dedicated_redis_cluster\": {\"nodes\": [{\"host\": \"192.168.1.94\", \"port\": 6379}]}}"},"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"data\":{\"default\":\"{\\\"dedicated_redis_cluster\\\": {\\\"nodes\\\": [{\\\"host\\\": \\\"192.168.1.94\\\", \\\"port\\\": 6379}]}}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"creationTimestamp\":\"2018-06-27T07:19:13Z\",\"labels\":{\"acp-app\":\"acp-discovery-service\",\"version\":\"1\"},\"name\":\"ads-central-configuration\",\"namespace\":\"acp-system\",\"resourceVersion\":\"1109832\",\"selfLink\":\"/api/v1/namespaces/acp-system/configmaps/ads-central-configuration\",\"uid\":\"64901676-79da-11e8-bd65-fa163eaa7a28\"}}\n"},"creationTimestamp":"2018-06-27T07:19:13Z","resourceVersion":"1109832","uid":"64901676-79da-11e8-bd65-fa163eaa7a28"}}
to:
&{0xc4200bb380 0xc420356230 acp-system ads-central-configuration ads-central-configuration.yaml 0xc42000c970 4434 false}
**for: "ads-central-configuration.yaml": Operation cannot be fulfilled on configmaps "ads-central-configuration": the object has been modified; please apply your changes to the latest version and try again**
core@dgoutam22-1-coreos-5760 ~ $
| It seems likely that your yaml configurations were copy-pasted from what was generated, and thus contain fields such as creationTimestamp (and resourceVersion, selfLink, and uid), which don't belong in a declarative configuration file.
Go through your yaml and clean it up. Remove things that are instance specific. Your final yaml should be simple enough that you can easily understand it.
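For reference, a minimal cleaned-up version of the ConfigMap, reconstructed from the values visible in the error output above, would look something like this with all the server-generated fields removed:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ads-central-configuration
  namespace: acp-system
  labels:
    acp-app: acp-discovery-service
    version: "1"
data:
  default: '{"dedicated_redis_cluster": {"nodes": [{"host": "192.168.1.94", "port": 6379}]}}'
```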
| Kubernetes | 51,297,136 | 73 |
I'm debugging log output from kubectl that states:
Error from server (BadRequest): a container name must be specified for pod postgres-operator-49202276-bjtf4, choose one of: [apiserver postgres-operator]
OK, so that's an explanatory error message, but looking at my JSON template it ought to just create both containers specified, correct? What am I missing? (please forgive my ignorance.)
I'm using just a standard kubectl create -f command to create the JSON file within a shell script. The JSON deployment file is as follows:
{
"apiVersion": "extensions/v1beta1",
"kind": "Deployment",
"metadata": {
"name": "postgres-operator"
},
"spec": {
"replicas": 1,
"template": {
"metadata": {
"labels": {
"name": "postgres-operator"
}
},
"spec": {
"containers": [{
"name": "apiserver",
"image": "$CCP_IMAGE_PREFIX/apiserver:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}],
"volumeMounts": [{
"mountPath": "/config",
"name": "apiserver-conf",
"readOnly": true
}, {
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}, {
"name": "postgres-operator",
"image": "$CCP_IMAGE_PREFIX/postgres-operator:$CO_IMAGE_TAG",
"imagePullPolicy": "IfNotPresent",
"env": [{
"name": "DEBUG",
"value": "true"
}, {
"name": "NAMESPACE",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.namespace"
}
}
}, {
"name": "MY_POD_NAME",
"valueFrom": {
"fieldRef": {
"fieldPath": "metadata.name"
}
}
}],
"volumeMounts": [{
"mountPath": "/operator-conf",
"name": "operator-conf",
"readOnly": true
}]
}],
"volumes": [{
"name": "operator-conf",
"configMap": {
"name": "operator-conf"
}
}, {
"name": "apiserver-conf",
"configMap": {
"name": "apiserver-conf"
}
}]
}
}
}
}
| If a pod has more than one container, then you need to provide the name of the specific container.
In your case, there is a pod (postgres-operator-49202276-bjtf4) which has 2 containers (apiserver and postgres-operator).
The following commands will provide logs for the specific containers:
kubectl logs deployment/postgres-operator -c apiserver
kubectl logs deployment/postgres-operator -c postgres-operator
| Kubernetes | 47,129,376 | 73 |
I have a problem logging into one container of a multi-container pod.
I get the container id from the kubectl describe pod <pod-name>
kubectl describe pod ipengine-net-benchmark-488656591-gjrpc -c <container id>
When I try:
kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c 70761432854f /bin/bash
It says: Error from server: container 70761432854f is not valid for pod ipengine-net-benchmark-488656591-gjrpc
| Ah, on a more detailed reading of the man page of kubectl exec:
Flags:
-c, --container="": Container name. If omitted, the first container in the pod will be chosen
-p, --pod="": Pod name
-i, --stdin[=false]: Pass stdin to the container
-t, --tty[=false]: Stdin is a TTY
So I just used the container name from my manifest.yaml and it worked like a charm.
Name of the container: ipengine-net-benchmark-iperf-server
kubectl exec -ti ipengine-net-benchmark-488656591-gjrpc -c ipengine-net-benchmark-iperf-server /bin/bash
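To find the container names in the first place (instead of the container ID), you can also read them straight from the pod spec; a quick sketch using jsonpath:
```
kubectl get pod ipengine-net-benchmark-488656591-gjrpc -o jsonpath='{.spec.containers[*].name}'
```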
| Kubernetes | 39,979,880 | 73 |
I have 3 nodes in a Kubernetes cluster. I created a DaemonSet and deployed it to all 3 nodes. This DaemonSet created 3 pods and they were running successfully. But for some reason, one of the pods failed.
I need to know how we can restart this pod without affecting the other pods in the DaemonSet, and without creating any other DaemonSet deployment.
Thanks
| kubectl delete pod <podname> will delete this one pod, and the owning Deployment/StatefulSet/ReplicaSet/DaemonSet will reschedule a new one in its place
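If you first need to find which of the DaemonSet's pods has failed, a quick sketch (the label selector is an assumption; use whatever labels your DaemonSet's pod template sets):
```
# list pods matching the DaemonSet's labels that are not in the Running phase
kubectl get pods -l app=<your-daemonset-label> --field-selector=status.phase!=Running

# then delete the failed one; the DaemonSet controller recreates it on the same node
kubectl delete pod <failed-pod-name>
```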
| Kubernetes | 49,405,924 | 72 |
In a container inside a pod, how can I run a command using kubectl? For example, if I need to do something like this inside a container:
kubectl get pods
I have tried this: in my Dockerfile, I have these commands:
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN sudo mv ./kubectl /usr/local/bin/kubectl
EDIT: I was trying the OSX file; I have corrected it to the Linux binary file (corrected by @svenwltr).
Building the Dockerfile is successful, but when I run kubectl get pods inside a container,
kubectl get pods
I get this error :
The connection to the server : was refused - did you specify the right host or port?
When I was deploying locally, I was encountering this error if my docker-machine was not running, but inside a container how can a docker-machine be running?
Locally, I get around this error by running the following commands:
(dev is the name of the docker-machine)
docker-machine env dev
eval $(docker-machine env dev)
Can someone please tell me what it is that I need to do?
| I would use the Kubernetes API; you just need to install curl instead of kubectl, and the rest is RESTful.
curl http://localhost:8080/api/v1/namespaces/default/pods
I'm running the above command on one of my apiservers. Change localhost to the apiserver IP address/DNS name.
Depending on your configuration you may need to use ssl or provide client certificate.
In order to find api endpoints, you can use --v=8 with kubectl.
example:
kubectl get pods --v=8
Resources:
Kubernetes API documentation
Update for RBAC:
I assume you have already configured RBAC, created a service account for your pod, and run the pod using it. This service account should have list permissions on pods in the required namespace. In order to do that, you need to create a role and a role binding for that service account; a sketch is shown below.
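As a sketch (names are placeholders), a Role and RoleBinding for a service account that only needs to read pods in the default namespace could look like this:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-service-account   # the service account your pod runs as (placeholder)
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```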
Every container in a cluster is populated with a token that can be used for authenticating to the API server. To verify, inside the container run:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
To make request to apiserver, inside the container run:
curl -ik \
-H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
| Kubernetes | 42,642,170 | 72 |
I have defined a Deployment for my app:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-deployment
spec:
replicas: 2
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: 172.20.34.206:5000/myapp_img:2.0
ports:
- containerPort: 8080
Now, if I want to update my app's image from 2.0 to 3.0, I do this:
$ kubectl edit deployment/myapp-deployment
Vim opens. I change the image version from 2.0 to 3.0 and save.
How can this be automated? Is there a way to do it just by running a command? Something like:
$ kubectl edit deployment/myapp-deployment --image=172.20.34.206:5000/myapp:img:3.0
I thought of using the Kubernetes REST API, but I don't understand the documentation.
| You could do it via the REST API using the PATCH verb. However, an easier way is to use kubectl patch. The following command updates your app's tag:
kubectl patch deployment myapp-deployment -p \
'{"spec":{"template":{"spec":{"containers":[{"name":"myapp","image":"172.20.34.206:5000/myapp:img:3.0"}]}}}}'
According to the documentation, YAML format should be accepted as well. See Kubernetes issue #458 though (and in particular this comment) which may hint at a problem.
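As a side note, kubectl also has a dedicated command for changing a container image, which can be more convenient than hand-writing the patch; a sketch using the names from the question (assuming the container is named myapp):
```
kubectl set image deployment/myapp-deployment myapp=172.20.34.206:5000/myapp_img:3.0
```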
| Kubernetes | 36,920,171 | 72 |
I am trying to test my development Helm chart deployment output using the --dry-run option. When I run the below command, it tries to connect to the Kubernetes API server.
Does the dry-run option require a connection to the Kubernetes cluster? All I want is to check the rendered deployment YAML output.
helm install mychart-0.1.0.tgz --dry-run --debug
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
| There is also the option to run helm template ./mychart to render the generated YAMLs without needing a connection to Tiller.
Combined with helm lint, it's a great combination to verify the validity of your chart.
| Kubernetes | 47,894,318 | 71 |
Both Kubernetes Pods and the results of Docker Compose scripts (henceforth: "Compositions") appear to result in clusters of virtual computers.
The computers in the clusters can all be configured to talk to each other so you can write a single script that mirrors your entire end-to-end production config. A single script allows you to deploy that cluster on any container-host.
Given the similarities between the two systems, I'm struggling to understand what the differences are between the two.
Why would I choose one over the other? Are they mutually exclusive systems, or can I run Compositions in Kubernetes?
Are there any critical considerations that need to be accounted for when designing for a container system? If I am designing the architecture for a site today and would like to try to build a container-based system, what are the highest-priority things I should design for (as compared to building on a single-machine system)?
| docker compose is just a way to declare the containers you have to start: it has no notion of node or cluster, unless it launches swarm masters and swarm nodes (but that is docker swarm).
Update July 2016, 7 months later: docker 1.12 blurs the lines and includes a "swarm mode".
It is vastly different from Kubernetes, a Google tool to manage thousands of container groups as Pods, over tens or hundreds of machines.
A Kubernetes Pod would be closer to a docker swarm:
Imagine individual Docker containers as packing boxes. The boxes that need to stay together because they need to go to the same location or have an affinity to each other are loaded into shipping containers.
In this analogy, the packing boxes are Docker containers, and the shipping containers are Kubernetes pods.
As commented below by ealeon:
I think pod is equivalent to compose except that kubernetes can orchestrate pods, whereas there is nothing orchestrating compose unless it is used with swarm like you've mentioned.
You can launch kubernetes commands with docker-compose by the way.
In terms of how Kubernetes differs from other container management systems out there, such as Swarm, Kubernetes is the third iteration of cluster managers that Google has developed.
You can hear more about kubernetes in the episode #3 of Google Cloud Platform Podcast.
While it is true both can create a multi-container application, a Pod also serves as a unit of deployment and horizontal scaling/replication, which docker compose does not provide.
Plus, you don't create a pod directly, but use controllers (like replication controllers).
POD lives within a larger platform which offers Co-location (co-scheduling), fate sharing, coordinated replication, resource sharing, and dependency management.
Docker-compose lives... on its own, with its docker-compose.yml file
| Kubernetes | 33,946,144 | 71 |
The kubectl get command has the flag -o to format the output.
Is there a similar way to format the output of the kubectl describe command?
For example:
kubectl describe -o="jsonpath={...}" pods my-rc
would print a JSON format for the list of pods in my-rc replication controller. But -o is not accepted for the describe command.
| kubectl describe doesn't support -o or equivalent. It's meant to be human-readable rather than script-friendly. You can achieve what you described with kubectl get pods -l <selector_of_your_rc> -o <output_format>, for example:
$ kubectl get pods -l app=guestbook,tier=frontend -o name
pod/frontend-a4kjz
pod/frontend-am1ua
pod/frontend-yz2dq
| Kubernetes | 37,464,518 | 69 |
I recently updated my Docker for Desktop to the latest Edge channel version, 2.1.1.0, on a Windows 10 machine. Unfortunately, after updating, Kubernetes no longer works, as it is always stuck at "Kubernetes is Starting".
I have tried the following so far.
Restarting Docker
Resetting the Kubernetes Cluster
Restoring Factory Default settings
Restarting machine
Uninstalling and reinstalling Docker
Nothing seems to be working. How can I resolve it?
| After hours of trying out different things, here is what finally helped me:
Restore Docker to Factory Default settings and Quit Docker for Desktop
Delete the folder C:\ProgramData\DockerDesktop\pki (Make a backup of it just in case). Note that many have reported the folder to be located elsewhere: C:\Users\<user_name>\AppData\Local\Docker\pki
Delete the folder ~\.kube\ (Again make a backup to be safe)
Start Docker again, open Docker settings, make the necessary configuration changes (adding proxy, setting resource limits, etc..), Enable Kubernetes and let it start
Wait a while and both Docker and Kubernetes will be up now.
When you try to connect to Kubernetes using kubectl, you might face another issue like
Unable to connect to the server: x509: certificate signed by unknown authority
You can solve this by
Open ~.kube\config in a text editor
Replace https://kubernetes.docker.internal:6443 with https://localhost:6443
Try connecting again.
Or if you are behind a (corporate) proxy: add kubernetes.docker.internal to NO_PROXY (eg export NO_PROXY=kubernetes.docker.internal), given that the proxy is configured correctly.
If this still doesn't resolve your issue, go through the logs at C:\ProgramData\DockerDesktop\log\ to debug the issue further
| Kubernetes | 57,711,639 | 68 |
I don't mean being able to route to a specific port, I mean to actually change the port the ingress listens on.
Is this possible? How? Where is this documented?
| No. From the kubernetes documentation:
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
It may be possible to customize a LoadBalancer on a cloud provider like AWS to listen on other ports.
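For illustration, a minimal sketch of exposing an arbitrary port directly through a Service, bypassing the Ingress (service name, selector and ports are assumptions):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: custom
    port: 9000        # the port the external load balancer listens on
    targetPort: 8080  # the port your container listens on
```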
| Kubernetes | 56,243,121 | 68 |
I have a kind: Namespace template YAML, as per below:
apiVersion: v1
kind: Namespace
metadata:
name: {{ .Values.namespace }}
namespace: ""
How do I make helm install create the above-given namespace ({{ .Values.namespace }}) if and only if that namespace ({{ .Values.namespace }}) doesn't exist in the targeted Kubernetes cluster?
| This feature is implemented in helm >= 3.2 (Pull Request)
Use --create-namespace in addition to --namespace <namespace>
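A usage sketch (release and chart names are placeholders):
```
helm install my-release ./my-chart --namespace my-namespace --create-namespace
```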
| Kubernetes | 51,783,651 | 68 |
I have a docker compose file with the following entries
version: '2.1'
services:
mysql:
container_name: mysql
image: mysql:latest
volumes:
- ./mysqldata:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 'password'
ports:
- '3306:3306'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3306"]
interval: 30s
timeout: 10s
retries: 5
test1:
container_name: test1
image: test1:latest
ports:
- '4884:4884'
- '8443'
depends_on:
mysql:
condition: service_healthy
links:
- mysql
The Test-1 container is dependent on mysql, which needs to be up and running first.
In Docker this can be controlled using the healthcheck and depends_on attributes.
The health check equivalent in Kubernetes is the readinessProbe, which I have already created, but how do we control the container startup order in the pods?
Any direction on this is greatly appreciated.
My Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
template:
metadata:
labels:
app: deployment
spec:
containers:
- name: mysqldb
image: "dockerregistry:mysqldatabase"
imagePullPolicy: Always
ports:
- containerPort: 3306
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 10
- name: test1
image: "dockerregistry::test1"
imagePullPolicy: Always
ports:
- containerPort: 3000
| That's the beauty of Docker Compose and Docker Swarm... Their simplicity.
We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using an initContainer, which is just another container in the same pod that's run first; when it's complete, Kubernetes automatically starts the [main] container. We made it a simple shell script that loops until Elasticsearch is up and running, then it exits and Kibana's container starts.
Below is an example of an init container that waits until Grafana is ready.
Add this 'initContainer' block just above your other containers in the Pod:
spec:
initContainers:
- name: wait-for-grafana
image: darthcabs/tiny-tools:1
args:
- /bin/bash
- -c
- >
set -x;
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
echo '.'
sleep 15;
done
containers:
.
.
(your other containers)
.
.
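Applied to the MySQL example in the question, a minimal sketch (the mysql host name and port 3306 are taken from the compose file; the image choice is an assumption, and it presumes a Service named mysql is reachable from the pod):
```yaml
initContainers:
- name: wait-for-mysql
  image: busybox:1.36
  command:
  - sh
  - -c
  - |
    # Block until the mysql service accepts TCP connections on 3306
    until nc -z mysql 3306; do
      echo "waiting for mysql..."
      sleep 5
    done
```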
| Kubernetes | 49,368,047 | 68 |
I have been trying to wrap my head around how Rancher (or DC/OS) is different from Kubernetes. Both of them say they are container management tools. Why do we need both? How are they different?
| Author's note
This question was originally posted 3 years ago. Since then the technology landscape has moved on.
For example, Mesosphere, the company behind DC/OS, has renamed itself and refocused its efforts on Kubernetes. Similarly, Rancher positioned itself as a Kubernetes installation and management layer.
If this issue is still a puzzle, I'd suggest posing a new question.
Original answer
Rancher is a neat tool that is best described as a deployment tool for Kubernetes that additionally has integrated itself to provide networking and load balancing support.
Rancher initially created its own framework, called Cattle, to coordinate Docker containers across multiple hosts. At that time Docker was limited to running on a single host. Rancher offered an interesting solution to this problem by providing networking between hosts, something that was eventually to become part of Docker Swarm.
Now Rancher enables users to deploy a choice of Cattle, Docker Swarm, Apache Mesos (upstream project for DCOS) or Kubernetes to manage your containers.
Response to jdc0589
You're quite correct. To the container user Kubernetes abstracts away the underlying implementation details of compute, networking and storage. It's in the setup of this underlying detail where Rancher helps. Rancher's networking provides a consistent solution across a variety of platforms. I have found it particularly useful when running on bare metal or standard (non cloud) virtual servers.
If you're only using AWS, I would use kops and take advantage the native integration you've mentioned.
While I'm k8s fixated, it must be acknowledged that Rancher also allows the easy install of other frameworks (Swarm and Mesos). I recommend trying it out, if only to understand why you don't need it.
http://docs.rancher.com/rancher/v1.5/en/quick-start-guide/
http://docs.rancher.com/rancher/v1.5/en/kubernetes/
Update 2017-10-11
Rancher have announced a preview of Rancher 2.0. The new answer to your question is that soon Rancher will be an admin UI and set of additional services designed to be deployed on top of Kubernetes.
| Kubernetes | 39,608,002 | 68 |
I've created a Kubernetes job that has now failed. Where can I find the logs to this job?
I'm not sure how to find the associated pod (I assume once the job fails it deletes the pod)?
Running kubectl describe job does not seem to show any relevant information:
Name: app-raiden-migration-12-19-58-21-11-2018
Namespace: localdev
Selector: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0
Labels: jobType=database-migration
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"labels":{"jobType":"database-migration"},"name":"app-raiden-migration-12-19-58-21-1...
Parallelism: 1
Completions: 1
Start Time: Wed, 21 Nov 2018 12:19:58 +0000
Pods Statuses: 0 Running / 0 Succeeded / 1 Failed
Pod Template:
Labels: controller-uid=c2fd06be-ed87-11e8-8782-080027eeb8a0
job-name=app-raiden-migration-12-19-58-21-11-2018
Containers:
app:
Image: pp3-raiden-app:latest
Port: <none>
Command:
php
artisan
migrate
Environment:
DB_HOST: local-mysql
DB_PORT: 3306
DB_DATABASE: raiden
DB_USERNAME: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
DB_PASSWORD: <set to the key 'password' in secret 'cloudsql-db-credentials'> Optional: false
LOG_CHANNEL: stderr
APP_NAME: Laravel
APP_KEY: ABCDEF123ERD456EABCDEF123ERD456E
APP_URL: http://192.168.99.100
OAUTH_PRIVATE: <set to the key 'oauth_private.key' in secret 'laravel-oauth'> Optional: false
OAUTH_PUBLIC: <set to the key 'oauth_public.key' in secret 'laravel-oauth'> Optional: false
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m job-controller Created pod: app-raiden-migration-12-19-58-21-11-2018-pwnjn
Warning BackoffLimitExceeded 2m job-controller Job has reach the specified backoff limit
| One other approach:
kubectl describe job $JOB
Pod name is shown under "Events"
kubectl logs $POD
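Alternatively, kubectl can resolve a pod from the job name directly; a sketch:
```
kubectl logs job/$JOB
```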
| Kubernetes | 53,411,958 | 67 |
By using kubectl exec -ti POD_NAME bash I am able to access the terminal inside the container and execute commands.
I can understand the usability and convenience of the above command. As a K8s operator, I use exec regularly.
However, what is the use case of kubectl attach POD_NAME?
How can it be utilised?
What is the real purpose of it?
In what situation or circumstance it can be used?
| The use cases for kubectl attach are discussed in kubernetes/issue 23335.
It can attach to the main process run by the container, which is not always bash.
As opposed to exec, which allows you to execute any process within the container (often: bash)
# Get output from running pod 123456-7890, using the first container by default
kubectl attach 123456-7890
# Get output from ruby-container from pod 123456-7890
kubectl attach 123456-7890 -c ruby-container
This article proposes:
In addition to interactive execution of commands, you can now also attach to any running process. Like kubectl logs, you’ll get stderr and stdout data, but with attach, you’ll also be able to send stdin from your terminal to the program.
Awesome for interactive debugging, or even just sending ctrl-c to a misbehaving application.
$> kubectl attach redis -i
Again, the main difference is in the process you interact with in the container:
exec: any one you want to create
attach: the one currently running (no choice)
| Kubernetes | 50,030,252 | 67 |
Using kubernetes-kafka as a starting point with minikube.
This uses a StatefulSet and a headless service for service discovery within the cluster.
The goal is to expose the individual Kafka Brokers externally which are internally addressed as:
kafka-0.broker.kafka.svc.cluster.local:9092
kafka-1.broker.kafka.svc.cluster.local:9092
kafka-2.broker.kafka.svc.cluster.local:9092
The constraint is that this external service be able to address the brokers specifically.
Whats the right (or one possible) way of going about this? Is it possible to expose a external service per kafka-x.broker.kafka.svc.cluster.local:9092?
| We have solved this in 1.7 by changing the headless service to Type=NodePort and setting the externalTrafficPolicy=Local. This bypasses the internal load balancing of a Service and traffic destined to a specific node on that node port will only work if a Kafka pod is on that node.
apiVersion: v1
kind: Service
metadata:
name: broker
spec:
externalTrafficPolicy: Local
ports:
- nodePort: 30000
port: 30000
protocol: TCP
targetPort: 9092
selector:
app: broker
type: NodePort
For example, we have two nodes nodeA and nodeB, nodeB is running a kafka pod. nodeA:30000 will not connect but nodeB:30000 will connect to the kafka pod running on nodeB.
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport
Note this was also available in 1.5 and 1.6 as a beta annotation, more can be found here on feature availability: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
Note also that while this ties a kafka pod to a specific external network identity, it does not guarantee that your storage volume will be tied to that network identity. If you are using the VolumeClaimTemplates in a StatefulSet then your volumes are tied to the pod while kafka expects the volume to be tied to the network identity.
For example, if the kafka-0 pod restarts and kafka-0 comes up on nodeC instead of nodeA, kafka-0's PVC (if using VolumeClaimTemplates) has data that is for nodeA, and the broker running on kafka-0 starts rejecting requests, thinking that it is nodeA not nodeC.
To fix this, we are looking forward to Local Persistent Volumes but right now we have a single PVC for our kafka StatefulSet and data is stored under $NODENAME on that PVC to tie volume data to a particular node.
https://github.com/kubernetes/features/issues/121
https://kubernetes.io/docs/concepts/storage/volumes/#local
| Kubernetes | 46,456,239 | 67 |
I'm running a MySQL deployment on Kubernetes; however, it seems like my allocated space was not enough. Initially I added a persistent volume of 50GB and now I'd like to expand that to 100GB.
I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?
| Yes, as of 1.11, persistent volumes can be resized on certain cloud providers. To increase volume size:
Edit the PVC (kubectl edit pvc $your_pvc) to specify the new size. The key to edit is spec.resources.requests.storage:
Terminate the pod using the volume.
Once the pod using the volume is terminated, the filesystem is expanded and the size of the PV is increased. See the above link for details.
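For example, a sketch of the same edit done non-interactively with kubectl patch (the PVC name is a placeholder); note that expansion generally only works if the PVC's StorageClass has allowVolumeExpansion: true:
```
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
```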
| Kubernetes | 40,335,179 | 67 |
I am creating an InfluxDB deployment in a Kubernetes cluster (v1.15.2); this is my YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: monitoring-influxdb
namespace: kube-system
spec:
replicas: 1
template:
metadata:
labels:
task: monitoring
k8s-app: influxdb
spec:
containers:
- name: influxdb
image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
volumeMounts:
- mountPath: /data
name: influxdb-storage
volumes:
- name: influxdb-storage
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
labels:
task: monitoring
# For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
# If you are NOT using this as an addon, you should comment out this line.
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: monitoring-influxdb
name: monitoring-influxdb
namespace: kube-system
spec:
ports:
- port: 8086
targetPort: 8086
selector:
k8s-app: influxdb
And this is the pod status:
$ kubectl get deployment -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 1/1 1 1 163d
kubernetes-dashboard 1/1 1 1 164d
monitoring-grafana 0/1 0 0 12m
monitoring-influxdb 0/1 0 0 11m
Now, I've been waiting 30 minutes and there is still no pod available. How do I check the deployment log from the command line? I cannot access the Kubernetes dashboard right now. I am searching for a command to get the pod log, but there is no pod available. I already tried to add a label to the node:
kubectl label nodes azshara-k8s03 k8s-app=influxdb
This is my deployment describe content:
$ kubectl describe deployments monitoring-influxdb -n kube-system
Name: monitoring-influxdb
Namespace: kube-system
CreationTimestamp: Wed, 04 Mar 2020 11:15:52 +0800
Labels: k8s-app=influxdb
task=monitoring
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"monitoring-influxdb","namespace":"kube-system"...
Selector: k8s-app=influxdb,task=monitoring
Replicas: 1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: k8s-app=influxdb
task=monitoring
Containers:
influxdb:
Image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/data from influxdb-storage (rw)
Volumes:
influxdb-storage:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
OldReplicaSets: <none>
NewReplicaSet: <none>
Events: <none>
This is another way to get logs:
$ kubectl -n kube-system logs -f deployment/monitoring-influxdb
error: timed out waiting for the condition
There is no output for this command:
kubectl logs --selector k8s-app=influxdb
These are all my pods in the kube-system namespace:
~/Library/Mobile Documents/com~apple~CloudDocs/Document/k8s/work/heapster/heapster-deployment ⌚ 11:57:40
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-569fd64d84-5q5pj 1/1 Running 0 46h
kubernetes-dashboard-6466b68b-z6z78 1/1 Running 0 11h
traefik-ingress-controller-hx4xd 1/1 Running 0 11h
| kubectl logs deployment/<name-of-deployment> # logs of deployment
kubectl logs -f deployment/<name-of-deployment> # follow logs
| Kubernetes | 60,518,658 | 66 |
In Kelsey Hightower's Kubernetes Up and Running, he gives two commands:
kubectl get daemonSets --namespace=kube-system kube-proxy
and
kubectl get deployments --namespace=kube-system kube-dns
Why does one use daemonSets and the other deployments?
And what's the difference?
| Kubernetes Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and upgrade them in a controlled way. For example, you define how many replicas (pods) of your app you want to run in the deployment definition and Kubernetes will make that many replicas of your application spread over nodes. If you say 5 replicas over 3 nodes, then some nodes will have more than one replica of your app running.
DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. A Daemonset will not run more than one replica per node. Another advantage of using a Daemonset is that, if you add a node to the cluster, then the Daemonset will automatically spawn a pod on that node, which a deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd
Let's take the example you mentioned in your question: why is kube-dns a Deployment and kube-proxy a DaemonSet?
The reason behind that is that kube-proxy is needed on every node in the cluster to run IP tables, so that every node can access every pod no matter on which node it resides. Hence, when we make kube-proxy a daemonset and another node is added to the cluster at a later time, kube-proxy is automatically spawned on that node.
Kube-dns's responsibility is to discover a service IP using its name, and only one replica of kube-dns is enough to resolve a service name to its IP. Hence we make kube-dns a Deployment, because we don't need kube-dns on every node.
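For illustration, a minimal DaemonSet manifest sketch (the names and image are arbitrary examples) shows that there is no replicas field at all, because the node count determines the pod count:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
      - name: agent
        image: busybox:1.36
        # One of these pods runs on every (schedulable) node
        command: ["sh", "-c", "while true; do echo collecting on $(hostname); sleep 60; done"]
```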
| Kubernetes | 53,888,389 | 66 |
I am fairly new to the Google Cloud platform and Docker and set-up a cluster of nodes, made a Dockerfile that copies a repo and runs a Clojure REPL on a public port. I can connect to it from my IDE and play around with my code, awesome!
That REPL should however probably be tunneled through SSH, but here is where my problem starts. I can't find a suitable place to SSH into making changes to the repo that Docker runs the REPL on:
The exposed IP just exposes the REPL service (correct kubernetes term?) and does not allow me to SSH in.
Neither does the cluster master endpoint, it gives me a public key error even though I've followed the Adding or removing SSH keys for all of the instances in your project part here.
I'd like to modify the source files via SSH, but I'll need to access the Docker code repository. I'm not sure how to proceed.
I understand this isn't exactly a typical way to deploy applications so I am not even sure it's possible to have multiple nodes work with a modified docker codebase (do the nodes share the JVM somehow?).
Concretely my question is how do I SSH into the docker container to access the codebase?
| For more recent Kubernetes versions the shell command should be separated by the --:
kubectl exec -it <POD NAME> -c <CONTAINER NAME> -- bash
Please note that bash needs to be available for execution inside of the container. For different OS flavours you might need to use /bin/sh, /bin/bash (or others) instead.
The command format for Kubernetes 1.5.0:
kubectl exec -it <POD NAME> -c <CONTAINER NAME> bash
| Kubernetes | 38,485,771 | 66 |
Is there a way to force an SSL upgrade for incoming connections on the ingress load-balancer? Or if that is not possible with, can I disable port :80? I haven't found a good documentation pages that outlines such an option in the YAML file. Thanks a lot in advance!
| https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet. But keep in mind this doesn't apply for L7 (internal) load balancers.
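A sketch of where that annotation goes (backend names are placeholders, and the API version shown is the one used on current Kubernetes releases rather than the older extensions/v1beta1 from the time of this answer):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.allow-http: "false"   # GCE ingress: disable port 80
spec:
  defaultBackend:
    service:
      name: my-service
      port:
        number: 443
```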
| Kubernetes | 37,001,557 | 66 |
I am setting up Kubernetes for a Django webapp.
I am passing an environment variable while creating the deployment, as below:
kubectl create -f deployment.yml -l key1=value1
I am getting the error below:
error: no objects passed to create
I am able to create the deployment successfully if I remove the env variable -l key1=value1 while creating the deployment.
My deployment.yaml is as below:
#Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
service: sigma-service
name: $key1
What could be the reason causing the above error while creating the deployment?
| I used envsubst (https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html) for this. Create a deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: $NAME
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
Then:
export NAME=my-test-nginx
envsubst < deployment.yaml | kubectl apply -f -
Not sure what OS are you using to run this. On macOS, envsubst installed like:
brew install gettext
brew link --force gettext
| Kubernetes | 56,003,777 | 65 |
Can I set the default namespace? That is:
$ kubectl get pods -n NAMESPACE
It saves me having to type it in each time especially when I'm on the one namespace for most of the day.
| Yes, you can set the namespace as per the docs like so:
$ kubectl config set-context --current --namespace=NAMESPACE
Alternatively, you can use kubens (from the kubectx project) for this.
| Kubernetes | 54,902,676 | 65 |
In most examples about using secrets in Kubernetes, you can find similar examples:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: User
password: **********
What is the purpose of type: Opaque in the definition above? What other types (and for which use cases) are possible to specify there?
| type: Opaque means that from kubernetes's point of view the contents of this Secret is unstructured, it can contain arbitrary key-value pairs.
In contrast, there are Secrets storing ServiceAccount credentials, or the ones used as an imagePullSecret. These have constrained contents.
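For the common built-in types, kubectl even has dedicated creation commands; a sketch (all values are placeholders):
```
# type kubernetes.io/dockerconfigjson - registry credentials for imagePullSecrets
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com --docker-username=user --docker-password=pass

# type kubernetes.io/tls - a TLS certificate/key pair, e.g. for Ingress
kubectl create secret tls my-tls --cert=tls.crt --key=tls.key

# type Opaque - arbitrary key-value pairs (what "generic" creates by default)
kubectl create secret generic my-secret --from-literal=username=User
```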
| Kubernetes | 45,120,359 | 65 |
I've been trying to figure out what happens when the Kubernetes master fails in a cluster that only has one master. Do web requests still get routed to pods if this happens, or does the entire system just shut down?
According to the OpenShift 3 documentation, which is built on top of Kubernetes (https://docs.openshift.com/enterprise/3.2/architecture/infrastructure_components/kubernetes_infrastructure.html), if a master fails, nodes continue to function properly, but the system loses its ability to manage pods. Is this the same for vanilla Kubernetes?
| In typical setups, the master nodes run both the API and etcd and are either largely or fully responsible for managing the underlying cloud infrastructure. When they are offline or degraded, the API will be offline or degraded.
In the event that they, etcd, or the API are fully offline, the cluster ceases to be a cluster and is instead a bunch of ad-hoc nodes for this period. The cluster will not be able to respond to node failures, create new resources, move pods to new nodes, etc. Until both:
Enough etcd instances are back online to form a quorum and make progress (for a visual explanation of how this works and what these terms mean, see this page).
At least one API server can service requests
In a partially degraded state, the API server may be able to respond to requests that only read data.
However, in any case, life for applications will continue as normal unless nodes are rebooted, or there is a dramatic failure of some sort during this time, because TCP/UDP services, load balancers, DNS, the dashboard, etc. should all continue to function for at least some time. Eventually, these things will all fail on different timescales. In single-master setups or complete API failure, DNS failure will probably happen first as caches expire (on the order of minutes, though the exact timing is configurable; see the coredns cache plugin documentation). This is a good reason to consider a multi-master setup: DNS and service routing can continue to function indefinitely in a degraded state, even if etcd can no longer make progress.
There are actions that you could take as an operator which would accelerate failures, especially in a fully degraded state. For instance, rebooting a node would cause DNS queries and in fact probably all pod and service networking functionality until at least one master comes back online. Restarting DNS pods or kube-proxy would also be bad.
If you'd like to test this out yourself, I recommend kubeadm-dind-cluster, kind or, for more exotic setups, kubeadm on VMs or bare metal. Note: kubectl proxy will not work during API failure, as that routes traffic through the master(s).
| Kubernetes | 39,172,131 | 65 |
Why do I need 3 different kinds of probes in Kubernetes:
startupProbe
readinessProbe
livenessProbe
There are some questions (k8s - livenessProbe vs readinessProbe, Setting up a readiness, liveness or startup probe) and articles about this topic. But this is not so clear:
Why do I need 3 different kind of probes?
What are the use cases?
What are the best practises?
| These 3 kinds of probes have 3 different use cases. That's why we need 3 kinds of probes.
Liveness Probe
If the Liveness Probe fails, the pod will be restarted (read more about failureThreshold).
Use case: Restart pod, if the pod is dead.
Best practices: Only include basic checks in the liveness probe. Never include checks on connections to other services (e.g. database). The check shouldn't take too long to complete.
Always specify a light Liveness Probe to make sure that the pod will be restarted, if the pod is really dead.
Startup Probe
Startup Probes check, when the pod is available after startup.
Use case: Send traffic to the pod as soon as the pod is available after startup. Startup Probes might take longer to complete, because they are only called on initialization. They might call a warmup task (but also consider init containers for initialization). After the Startup Probe succeeds, the Liveness Probe is called.
Best practices: Specify a Startup Probe, if the pod takes a long time to start. The Startup and Liveness Probe can use the same endpoint, but the Startup Probe can have a less strict failure threshhold which prevents a failure on startup (s. Kubernetes in Action).
Readiness Probe
In contrast to Startup Probes Readiness Probes check, if the pod is available during the complete lifecycle.
In contrast to Liveness Probes only the traffic to the pod is stopped, if the Readiness probe fails, but there will be no restart.
Use case: Stop sending traffic to the pod, if the pod can not temporarily serve because a connection to another service (e.g. database) fails and the pod will recover later.
Best practices: Include all necessary checks including connections to vital services. Nevertheless the check shouldn't take too long to complete.
Always specify a Readiness Probe to make sure that the pod only gets traffic, if the pod can properly handle incoming requests.
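Putting the three together, a sketch of a container spec fragment (paths, ports and timings are assumptions to be tuned per application):
```yaml
containers:
- name: my-app
  image: my-app:1.0
  ports:
  - containerPort: 8080
  startupProbe:            # gives the app up to 30 x 10s = 300s to start
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:           # restart the container if this fails repeatedly
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
    failureThreshold: 3
  readinessProbe:          # stop routing traffic while this fails
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
    failureThreshold: 3
```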
Documentation
This article explains very well the differences between the 3 kind of probes.
The Official kubernetes documentation gives a good overview about all configuration options.
Best practises for probes.
The book Kubernetes in Action gives most detailed insights about the best practises.
| Kubernetes | 65,858,309 | 64 |
I am trying to deploy my microservices into a Kubernetes cluster. My cluster has one master and one worker node. I created this cluster for my R&D of Kubernetes deployments. When I try to deploy, I am getting an event error message like the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate
My attempt
While exploring the error, I found some comments in forums about restarting Docker on the node, etc. So after that I restarted Docker, but the error is still the same.
When I tried the command kubectl get nodes, it shows that both nodes are masters and both are in the Ready state.
NAME STATUS ROLES AGE VERSION
mildevkub020 Ready master 6d19h v1.17.0
mildevkub040 Ready master 6d19h v1.17.0
I did not find a worker node here. I created one master (mildevkub020) and one worker node (mildevkub040) with one load balancer. And I followed the official documentation of Kubernetes from the following link:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
My question
Is this error because of a cluster problem? I am not finding the cluster's worker node, only master nodes.
| You can run below command to remove the taint from master node and then you should be able to deploy your pod on that node
kubectl taint nodes mildevkub020 node-role.kubernetes.io/master-
kubectl taint nodes mildevkub040 node-role.kubernetes.io/master-
Now, regarding why it's showing as a master node: check the command you ran to join the node with kubeadm. There are separate commands for joining master and worker nodes.
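To confirm which taints are present before and after removing them, a quick check could be:
```
kubectl describe node mildevkub040 | grep -i taint
```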
| Kubernetes | 59,484,509 | 64 |
I have a helm repo:
helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
and I want to list all the charts available or search the charts under stable helm repo.
How do I do this?
No command so far to list available charts under a helm repo or just verify that a chart exists.
| First, always update your local cache:
helm repo update
Then, you can list all charts by doing:
helm search repo
Or, you can do a case insensitive match on any part of chart name using the following:
helm search repo [your_search_string]
Lastly, if you want to list all the versions you can use the -l/--version argument:
# Lists all versions of all charts
helm search repo -l
# Lists all versions of all chart names that contain search string
helm search repo -l [your_search_string]
| Kubernetes | 55,973,901 | 64 |
Recently, prometheus-operator has been promoted to stable helm chart (https://github.com/helm/charts/tree/master/stable/prometheus-operator).
I'd like to understand how to add a custom application to monitoring by prometheus-operator in a k8s cluster. An example for say gitlab runner which by default provides metrics on 9252 would be appreciated (https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server).
I have a rudimentary yaml that obviously doesn't work but also not provides any feedback on what isn't working:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: gitlab-monitor
# Change this to the namespace the Prometheus instance is running in
namespace: default
labels:
app: gitlab-runner-gitlab-runner
release: prometheus
spec:
selector:
matchLabels:
app: gitlab-runner-gitlab-runner
namespaceSelector:
# matchNames:
# - default
any: true
endpoints:
- port: http-metrics
interval: 15s
This is the prometheus configuration:
> kubectl get prometheus -o yaml
...
serviceMonitorNamespaceSelector: {}
serviceMonitorSelector:
matchLabels:
release: prometheus
...
So the selectors should match. By "not working" I mean that the endpoints do not appear in the prometheus UI.
| Thanks to Peter, who showed me that the idea in principle wasn't entirely incorrect, I've found the missing link. As a ServiceMonitor does monitor services (haha), I missed the part about creating a service, which isn't part of the gitlab helm chart. Finally this yaml did the trick for me and the metrics appear in Prometheus:
# Service targeting gitlab instances
apiVersion: v1
kind: Service
metadata:
name: gitlab-metrics
labels:
app: gitlab-runner-gitlab-runner
spec:
ports:
- name: metrics # expose metrics port
port: 9252 # defined in gitlab chart
targetPort: metrics
protocol: TCP
selector:
app: gitlab-runner-gitlab-runner # target gitlab pods
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: gitlab-metrics-servicemonitor
# Change this to the namespace the Prometheus instance is running in
# namespace: default
labels:
app: gitlab-runner-gitlab-runner
release: prometheus
spec:
selector:
matchLabels:
app: gitlab-runner-gitlab-runner # target gitlab service
endpoints:
- port: metrics
interval: 15s
Nice to know: the metrics targetPort is defined in the gitlab runner chart.
| Kubernetes | 52,991,038 | 64 |
My understanding of this doc page is that I can configure service accounts with Pods and hopefully also Deployments, so I can access the k8s API in Kubernetes 1.6+. In order not to alter or use the default one, I want to create a service account and mount its certificate into the pods of a deployment.
How do I achieve something similar to this example for a deployment?
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
| As you will need to specify 'podSpec' in Deployment as well, you should be able to configure the service account in the same way. Something like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-deployment
spec:
template:
# Below is the podSpec.
metadata:
name: ...
spec:
serviceAccountName: build-robot
automountServiceAccountToken: false
...
| Kubernetes | 44,505,461 | 64 |
I want to give my application limited access to get the replicas of different statefulsets (and maybe deployment) and if necessary scale them up or down.
I have created a ServiceAccount, RoleBinding and Role for this, but I can't find the complete list of rule verbs ("get", "watch", "list", "update") and what their limitations are; for example, can I use update for scaling or do I need another verb? And where can I find a list or table that describes these verbs?
My yaml file:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: scaler-role
namespace: {{ .Release.Namespace | quote }}
rules:
- apiGroups: ["apps"]
resources: ["statefulset"]
verbs: ["get", "watch", "list", "update"]
| You can get quite a bit of info via this:
kubectl api-resources --sort-by name -o wide
The above api-resources command is explicit and easy to grep. The complete list of possible verbs can be obtained thus:
$ kubectl api-resources --no-headers --sort-by name -o wide | sed 's/.*\[//g' | tr -d "]" | tr " " "\n" | sort | uniq
create
delete
deletecollection
get
list
patch
update
watch
The Resource Operations section of API reference docs (eg https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/) talks a little bit about them but doesn't mention deletecollection (btw: see interesting info about deletecollection; suggests that whenever you give delete, you should give deletecollection permission too, if the resource supports it).
The Determine the Request Verb section of Authorization Overview does briefly mention deletecollection, as well as a half a dozen more verbs (such as escalate as pointed out rightfully by @RoryMcCune) which, unfortunately, do not show up in output of kubectl api-resources -o wide command.
BTW the api-resources command also lists the short names of commands, such as svc for services.
Update May 2023:
Another less user-friendly but more complete way of getting the verbs is by directly querying the API server:
in one terminal, start a proxy for the API server; eg kubectl proxy --port=8080
in another terminal, use curl on /api/v1 and /apis
For core resources (configmaps, etc):
Use curl -s lo calhost:8080 /api/v1 to get json with the verbs for each core resource type name. Eg (if you have jq)
$ curl -s http://localhost:8080/api/v1 | jq '.resources[] | [.name, (.verbs | join(" "))] | join(" = ")' -r
bindings = create
componentstatuses = get list
configmaps = create delete deletecollection get list patch update watch
endpoints = create delete deletecollection get list patch update watch
...
For the non-core resources (deployments, CRD, etc):
Say you want the verbs for deployments, you know that the API group for deployments is apps. First get the versioned group name for that API using curl -s http://localhost:8080/apis. Eg (if you have jq)
```
$ curl -s http://localhost:8080/apis | jq '.groups[].preferredVersion.groupVersion' -r | grep ^apps
apps/v1
```
Use this to query the API of that group for verbs by using curl -s http://localhost:8080/apis/VERSIONED_API ie in the above example curl -s http://localhost:8080/apis/apps/v1. Eg (if you have jq, the jq is the same),
```
$ curl -s http://localhost:8080/apis/apps/v1 | jq '.resources[] | [.name, (.verbs | join(" "))] | join(" = ")' -r
controllerrevisions = create delete deletecollection get list patch update watch
daemonsets = create delete deletecollection get list patch update watch
daemonsets/status = get patch update
deployments = create delete deletecollection get list patch update watch
deployments/scale = get patch update
deployments/status = get patch update
...
```
BTW the page https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/ documents how to use Python, Java etc instead of curl.
I created a kubectl plugin, for the use case where one wants to get the verbs for a specific resource type: https://github.com/schollii/my-devops-lab/blob/main/kubernetes/kubectl-verbs. Eg
$ kubectl verbs configmaps
configmaps = create delete deletecollection get list patch update watch
$ kubectl verbs deployments apps
deployments = create delete deletecollection get list patch update watch
deployments/scale = get patch update
deployments/status = get patch update
The file has instructions to install it as a plugin. It is a simple bash script.
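Applied to the scaling use case from the original question, a sketch of the rules (note the plural resource name and the scale subresource; exact verbs needed may vary with how you scale):
```yaml
rules:
- apiGroups: ["apps"]
  resources: ["statefulsets", "statefulsets/scale"]
  verbs: ["get", "list", "watch", "patch", "update"]
```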
| Kubernetes | 57,661,494 | 63 |
Is it possible to set the working directory when launching a container with Kubernetes?
| Yes, through the workingDir field of the container spec. Here's an example replication controller with an nginx container that has workingDir set to /workdir:
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx
spec:
replicas: 1
template:
metadata:
labels:
name: nginx
spec:
containers:
- name: nginx
image: mynginximage
workingDir: /workdir
| Kubernetes | 36,634,614 | 63 |
Using fleet I can specify a command to be run inside the container when it is started. It seems like this should be easily possible with Kubernetes as well, but I can't seem to find anything that says how. It seems like you have to create the container specifically to launch with a certain command.
Having a general purpose container and launching it with different arguments is far simpler than creating many different containers for specific cases, or setting and getting environment variables.
Is it possible to specify the command a kubernetes pod runs within the Docker image at startup?
| I spent 45 minutes looking for this. Then I posted a question about it and found the solution 9 minutes later.
There is a hint at what I wanted inside the Cassandra example, namely the command line below the image:
id: cassandra
kind: Pod
apiVersion: v1beta1
desiredState:
manifest:
version: v1beta1
id: cassandra
containers:
- name: cassandra
image: kubernetes/cassandra
command:
- /run.sh
cpu: 1000
ports:
- name: cql
containerPort: 9042
- name: thrift
containerPort: 9160
env:
- key: MAX_HEAP_SIZE
value: 512M
- key: HEAP_NEWSIZE
value: 100M
labels:
name: cassandra
Despite finding the solution, it would be nice if there were somewhere obvious in the Kubernetes project where I could see all of the possible options for the various configuration files (pod, service, replication controller).
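For reference, the same idea on current API versions uses command (which overrides the image ENTRYPOINT) and args (which overrides CMD) in the pod spec; a minimal sketch:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  containers:
  - name: demo
    image: debian
    command: ["printenv"]                   # overrides the image ENTRYPOINT
    args: ["HOSTNAME", "KUBERNETES_PORT"]   # overrides the image CMD
  restartPolicy: OnFailure
```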
| Kubernetes | 28,976,455 | 63 |
I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I do not want it to restart, but it still does.
I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then kill all the pods it creates manually.
This is bad, for obvious reasons.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
creationTimestamp: 2018-06-21T14:48:46Z
name: dailytasks
namespace: default
resourceVersion: "20390223"
selfLink: [redacted]
uid: [redacted]
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
template:
metadata:
creationTimestamp: null
spec:
containers:
- command:
- kubernetes/daily_tasks.sh
env:
- name: DB_HOST
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
envFrom:
- secretRef:
name: my-secrets
image: [redacted]
imagePullPolicy: IfNotPresent
name: dailytasks
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
schedule: 0 14 * * *
successfulJobsHistoryLimit: 3
suspend: true
status:
active:
- apiVersion: batch
kind: Job
name: dailytasks-1533218400
namespace: default
resourceVersion: "20383182"
uid: [redacted]
lastScheduleTime: 2018-08-02T14:00:00Z
| It turns out that you have to set a backoffLimit: 0 in combination with restartPolicy: Never in combination with concurrencyPolicy: Forbid.
backoffLimit means the number of times it will retry before it is considered failed. The default is 6.
concurrencyPolicy set to Forbid means it will run 0 or 1 times, but not more.
restartPolicy set to Never means it won't restart on failure.
You need to do all 3 of these things, or your cronjob may run more than once.
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 1
jobTemplate:
metadata:
creationTimestamp: null
spec:
[ADD THIS -->]backoffLimit: 0
template:
... MORE STUFF ...
| Kubernetes | 51,657,105 | 62 |
I have two applications - app1 and app2, where app1 is a config server that holds configs for app2. I have defined /readiness endpoint in app1 and need to wait till it returns OK status to start up pods of app2.
It's crucial that deployment of app2 wait till kubernetes receives Http Status OK from /readiness endpoint in app1 as it's a configuration server and holds crucial configs for app2.
Is it possible to do this kind of deployment dependency?
| You can use initContainers. Following is an example of how you can do in your YAML file
initContainers:
- name: wait-for-other-pod
image: docker.some.image
args:
- /bin/sh
- -c
- >
set -x;
while [ $(curl -sw '%{http_code}' "http://www.<your_pod_health_check_end_point>.com" -o /dev/null) -ne 200 ]; do
sleep 15;
done
I have used curl to hit the health check endpoint, you can use any other UNIX command to check if the other pod is ready.
If you have a dependency on k8s resources, you can make use of stackanetes/kubernetes-entrypoint example:
initContainers:
- command:
- kubernetes-entrypoint
name: init-dependency-check
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: DEPENDENCY_SERVICE
- name: DEPENDENCY_DAEMONSET
- name: DEPENDENCY_CONTAINER
- name: DEPENDENCY_POD_JSON
value: '[{"labels":{"app.kubernetes.io/name":"postgres"}}]'
- name: COMMAND
value: echo done
image: projects.registry.vmware.com/tcx/snapshot/stackanetes/kubernetes-entrypoint:latest
securityContext:
privileged: true
runAsUser: 0
In the above example, the pod with initContainer init-dependency-check will wait until pod with label "app.kubernetes.io/name":"postgres" is in the Running state. Likewise you can make use of DEPENDENCY_SERVICE, DEPENDENCY_DAEMONSET, DEPENDENCY_CONTAINER
| Kubernetes | 51,079,849 | 62 |
I want to clear cache in all the pods in my Kubernetes namespace. I want to send one request to the end-point which will then send a HTTP call to all the pods in the namespace to clear cache. Currently, I can hit only one pod using Kubernetes and I do not have control over which pod would get hit.
Even though the load balancer is set to RR, continuously hitting the pods (n times, where n is the total number of pods) doesn't help, as other requests can creep in.
The same issue was discussed here, but I couldn't find a solution for the implementation:
https://github.com/kubernetes/kubernetes/issues/18755
I'm trying to implement the clearing cache part using Hazelcast, wherein I will store all the cache and Hazelcast automatically takes care of the cache update.
If there is an alternative approach for this problem, or a way to configure kubernetes to hit all end-points for some specific requests, sharing here would be a great help.
| Provided you got kubectl in your pod and have access to the api-server, you can get all endpoint adressess and pass them to curl:
kubectl get endpoints <servicename> \
-o jsonpath="{.subsets[*].addresses[*].ip}" | xargs curl
Alternative without kubectl in pod:
the recommended way to access the api server from a pod is by using kubectl proxy: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#accessing-the-api-from-a-pod this would of course add at least the same overhead. alternatively you could directly call the REST api, you'd have to provide the token manually.
APISERVER=$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")
TOKEN=$(kubectl describe secret $(kubectl get secrets \
| grep ^default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d " ")
if you provide the APISERVER and TOKEN variables, you don't need kubectl in your pod, this way you only need curl to access the api server and "jq" to parse the json output:
curl $APISERVER/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
UPDATE (final version)
APISERVER usually can be set to kubernetes.default.svc and the token should be available at /var/run/secrets/kubernetes.io/serviceaccount/token in the pod, so no need to provide anything manually:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token); \
curl https://kubernetes.default.svc/api/v1/namespaces/default/endpoints --silent \
--header "Authorization: Bearer $TOKEN" --insecure \
| jq -rM ".items[].subsets[].addresses[].ip" | xargs curl
jq is available here: https://stedolan.github.io/jq/download/ (< 4 MiB, but worth it for easily parsing JSON)
| Kubernetes | 49,612,412 | 62 |
I am trying to debug a pod with the status "ImagePullBackOff".
The pod is in the namespace minio-operator, but when I try to describe the pod, it is apparently not found.
Why does that happen?
[psr-admin@zon-psr-2-u001 ~]$ kubectl get all -n minio-operator
NAME READY STATUS RESTARTS AGE
pod/minio-operator-5dd99dd858-n6fdj   0/1    ImagePullBackOff   0         7d
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator 0 1 0 7d
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-operator-5dd99dd858 1 1 0 7d
[psr-admin@zon-psr-2-u001 ~]$ kubectl describe pod minio-operator-5dd99dd858-n6fdj
Error from server (NotFound): pods "minio-operator-5dd99dd858-n6fdj" not found
Error from server (NotFound): pods "minio-operator-5dd99dd858-n6fdj" not found
| You've not specified the namespace in your describe pod command.
You did kubectl get all -n minio-operator, which gets all resources in the minio-operator namespace, but your kubectl describe has no namespace, so it's looking in the default namespace for a pod that isn't there.
kubectl describe pod -n minio-operator <pod name>
Should work OK.
Most resources in kubernetes are namespaced, so will require the -n <namespace> argument unless you switch namespaces.
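If you work in one namespace most of the time, you can also make it the default for your current context instead of passing -n every time. A sketch (the --current flag needs a reasonably recent kubectl):
# Make minio-operator the default namespace for the current context
kubectl config set-context --current --namespace=minio-operator

# Now the pod is found without -n
kubectl describe pod minio-operator-5dd99dd858-n6fdj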
| Kubernetes | 61,270,313 | 61 |
I want to export already templated Helm Charts as YAML files. I can not use Tiller on my Kubernetes Cluster at the moment, but still want to make use of Helm Charts. Basically, I want Helm to export the YAML that gets send to the Kubernetes API with values that have been templated by Helm. After that, I will upload the YAML files to my Kubernetes cluster.
I tried to run .\helm.exe install --debug --dry-run incubator\kafka but I get the error Error: Unauthorized.
Note that I run Helm on Windows (version helm-v2.9.1-windows-amd64).
| We need logs to check the Unauthorized issue.
But you can easily generate templates locally:
helm template mychart
Render chart templates locally and display the output.
This does not require Tiller. However, any values that would normally
be looked up or retrieved in-cluster will be faked locally.
Additionally, none of the server-side testing of chart validity (e.g.
whether an API is supported) is done.
More info: https://helm.sh/docs/helm/helm_template/
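A rough sketch of the whole workflow for the chart above (assumes the incubator repo is already added, Helm 2.8+ for --output-dir, and placeholder values/paths):
# Fetch the chart locally (Helm 2's template works on a local chart path)
helm fetch incubator/kafka --untar

# Render it to plain YAML files, optionally overriding values
helm template ./kafka -f my-values.yaml --output-dir ./manifests

# Apply the rendered manifests directly, no Tiller involved
kubectl apply -R -f ./manifests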
| Kubernetes | 50,584,091 | 61 |
I want to calculate the cpu usage of all pods in a kubernetes cluster. I found two metrics in prometheus may be useful:
container_cpu_usage_seconds_total: Cumulative cpu time consumed per cpu in seconds.
process_cpu_seconds_total: Total user and system CPU time spent in seconds.
Cpu Usage of all pods = increment per second of sum(container_cpu_usage_seconds_total{id="/"})/increment per second of sum(process_cpu_seconds_total)
However, I found every second's increment of container_cpu_usage{id="/"} larger than the increment of sum(process_cpu_seconds_total). So the usage may be larger than 1...
| This I'm using to get CPU usage at cluster level:
sum (rate (container_cpu_usage_seconds_total{id="/"}[1m])) / sum (machine_cpu_cores) * 100
I also track the CPU usage for each pod.
sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)
I have a complete kubernetes-prometheus solution on GitHub, maybe can help you with more metrics: https://github.com/camilb/prometheus-kubernetes
| Kubernetes | 40,327,062 | 61 |
With the rise of containers, Kubernetes, 12 Factor, etc., it has become easier to replicate an identical environment across dev, staging and production. However, there appears to be no common standard for domain name conventions.
Use subdomains:
*.dev.foobar.tld
*.staging.foobar.tld
*.foobar.tld
Use separate domains:
*.foobar-dev.tld
*.foobar-staging.tld
*.foobar.tld
I can see ups and downs with both approaches, but I'm curious what the common practice is.
As a side-note, Cloudflare will not issue you certificates for sub-sub domains (e.g. *.stage.foobar.tld).
|
There are only two hard things in Computer Science: cache invalidation
and naming things.
-- Phil Karlton
Depends on the company size.
Small businesses usually go for dashes and get the wildcard certificate.
So they would have dev.example.com, test.example.com
In larger enterprises they usually have a DNS infrastructure rolled out and the provisioning processes takes care of the assignment. It usually looks like
aws-eu-central-1.appName.staging.[teamName].example.com
They would either use their own self-signed certs with the CA on all servers or have the money for the SANs.
For more inspiration:
https://blog.serverdensity.com/server-naming-conventions-and-best-practices/
https://mnx.io/blog/a-proper-server-naming-scheme/
https://namingschemes.com/
| Kubernetes | 39,336,130 | 61 |
I have been digging through the Kubernetes documentation for hours. I understand the core design, and the notion of services, controllers, pods, etc.
What I don't understand, however, is the process in which I can declaratively configure the cluster. That is, a way for me to write a config file (or a set thereof) to define the makeup, and scaling options of the cloud deployment. I want to be able to declare which containers I want in which pods, how they will communicate, how they will scale, etc. without running a ton of cli commands.
Is there docker-compose functionality for Kubernetes?
I want my application to be defined in git and version controlled, without relying on manual CLI interactions.
Is this possible to do in a concise way? Is there a reference that is more clear than the official documentation?
| If you're still looking, maybe this tool can help: https://github.com/kelseyhightower/compose2kube
You can create a compose file:
# sample compose file with 3 services
web:
image: nginx
ports:
- "80"
- "443"
database:
image: postgres
ports:
- "5432"
cache:
image: memcached
ports:
- "11211"
Then use the tool to convert it to kubernetes objects:
compose2kube -compose-file docker-compose.yml -output-dir output
Which will create these files:
output/cache-rc.yaml
output/database-rc.yaml
output/web-rc.yaml
Then you can use kubectl to apply them to kubernetes.
| Kubernetes | 37,845,715 | 61 |
I have a kubernetes cluster running on azure. What is the way to access the cluster from the local kubectl command? I referred to here but on the kubernetes master node there is no kube config file. Also, kubectl config view results in
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
| Found a way to access a remote kubernetes cluster without ssh'ing to one of the nodes in the cluster. You need to edit the ~/.kube/config file as below:
apiVersion: v1
clusters:
- cluster:
server: http://<master-ip>:<port>
name: test
contexts:
- context:
cluster: test
user: test
name: test
Then set context by executing:
kubectl config use-context test
After this you should be able to interact with the cluster.
Note : To add certification and key use following link : http://kubernetes.io/docs/user-guide/kubeconfig-file/
Alternately, you can also try following command
kubectl config set-cluster test-cluster --server=http://<master-ip>:<port> --api-version=v1
kubectl config use-context test-cluster
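To add the certificate and key mentioned in the note above, a sketch (paths and names are placeholders for your own credential files):
kubectl config set-cluster test --server=https://<master-ip>:<port> --certificate-authority=/path/to/ca.crt
kubectl config set-credentials test --client-certificate=/path/to/client.crt --client-key=/path/to/client.key
kubectl config set-context test --cluster=test --user=test
kubectl config use-context test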
| Kubernetes | 36,306,904 | 60 |
When I am using below it deletes the running POD after matching the pattern from commandline:
kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}' | xargs kubectl delete -n bi-dev pod
However, when I am using this command as an alias in .bash_profile, it doesn't execute.
This is how I defined it :
alias kdpgroup="kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}'| kubectl delete -n bi-dev pod"
when execute this as below I get below error in commandline:
~ $ kdpgroup
error: resource(s) were provided, but no name, label selector, or --all flag specified
When I define this in .bash_profile I get this :
~ $ . ./.bash_profile
-bash: alias: }| xargs kubectl delete -n bi-dev pod: not found
~ $
Am I missing something to delete PODs using a pattern match or with a wildcard?
thanks
|
Am I missing something to delete PODs using a pattern match or with a wildcard?
When using Kubernetes it is more common to use labels and selectors. E.g. if you deployed an application, you usually set a label on the pods e.g. app=my-app and you can then get the pods with e.g. kubectl get pods -l app=my-app.
Using this aproach, it is easier to delete the pods you are interested in, with e.g.
kubectl delete pods -l app=my-app
or with namespaces
kubectl delete pods -l app=my-app -n default
See more on Kubernetes Labels and Selectors
Set-based selector
I have some pods running named "superset-react" and "superset-graphql", and I want to search with my wildcard superset and delete both of them in one command
I suggest that those pods has labels app=something-react and app=something-graphql. If you want to classify those apps, e.g. if your "superset" varies, you could add a label app-type=react and app-type=graphql to all those type of apps.
Then you can delete pods for both app types with this command:
kubectl delete pods -l 'app-type in (react, graphql)'
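If the pods don't carry such a label yet, you can attach it to running pods ad hoc, or better, add it to each Deployment's pod template so future pods inherit it. A sketch with placeholder pod names:
# Label existing pods directly (names are placeholders)
kubectl label pod superset-react-xxxxx -n bi-dev app-type=react
kubectl label pod superset-graphql-yyyyy -n bi-dev app-type=graphql

# Or bake it into each Deployment's pod template so new pods get it automatically:
#   spec.template.metadata.labels:
#     app-type: react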
| Kubernetes | 59,473,707 | 59 |
I am using the following command to create a configMap.
kubectl create configmap test --from-file=./application.properties --from-file=./mongo.properties --from-file=./logback.xml
Now, I have modified a value for a key from mongo.properties which I need to update in Kubernetes.
Option 1:
kubectl edit test
Here, it opens the entire file. But, I want to just update mongo.properties and hence want to see only the mongo.properties. Is there any other way?
Note: I don't want to have mongo.properties in a separate configMap.
| Now you can. Just throw: kubectl edit configmap <name of the configmap> on your command line. Then you can edit your configuration.
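If you prefer a non-interactive update that simply picks up the file you changed on disk, you can regenerate the same ConfigMap and apply it. A sketch (newer kubectl uses --dry-run=client, older versions plain --dry-run):
kubectl create configmap test \
  --from-file=./application.properties \
  --from-file=./mongo.properties \
  --from-file=./logback.xml \
  --dry-run=client -o yaml | kubectl apply -f -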
| Kubernetes | 49,989,943 | 59 |
None of the systemd commands are working inside WSL (Ubuntu 18.04). When I ran sudo systemctl is-active kubelet, the error output was: System has not been booted with systemd as init system (PID 1). Can't operate.
: running command: sudo systemctl is-active kubelet
How do I enable systemd in WSL? What's the way to get rid of "System has not been booted with systemd"?
| When using WSL2 you can use:
sudo service docker start
This command basically execute the script /etc/init.d/docker.
Some customization, like specifying HTTP proxy, is possible via the script /etc/default/docker.
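For example, /etc/default/docker is sourced by that init script, so proxy and daemon options can look roughly like this (all values are placeholders):
# /etc/default/docker
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1"

# Extra daemon options, e.g. a custom DNS server
DOCKER_OPTS="--dns 8.8.8.8"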
| Kubernetes | 55,579,342 | 58 |
I'm trying to change the client_max_body_size value, so my NGINX ingress will not return the HTTP 413 Content Too Large error (as seen in the logs).
I've tested a few solutions.
Here is my config map:
kind: ConfigMap
apiVersion: v1
data:
proxy-connect-timeout: "15"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
proxy-body-size: "8m"
hsts-include-subdomains: "false"
body-size: "64m"
server-name-hash-bucket-size: "256"
client-max-body-size: "50m"
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app: ingress-nginx
These changes had no effect at all: in NGINX controller's log I can see the information about reloading the config map, but the values in nginx.conf are the same:
$ cat /etc/nginx/nginx.conf | grep client_max
client_max_body_size "8m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
My nginx-controller config uses this image:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
How can I force NGINX to change this setting? I need to change it globally, for all my ingresses.
| You can use the annotation nginx.ingress.kubernetes.io/proxy-body-size to set the max-body-size option right in your Ingress object instead of changing a base ConfigMap.
Here is the example of usage:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
...
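If you do want one global default for all Ingresses instead, the documented ConfigMap key is proxy-body-size (keys such as client-max-body-size or body-size are not recognized by recent controllers), and it has to be set in the ConfigMap the controller was started with via its --configmap flag. A sketch, assuming the controller points at this object:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"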
| Kubernetes | 49,918,313 | 58 |
I have a job definition based on example from kubernetes website.
apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-timeout-6
spec:
activeDeadlineSeconds: 30
completions: 1
parallelism: 1
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["exit", "1"]
restartPolicy: Never
I would like to run this job once and not restart it if it fails. With the command exit 1, Kubernetes keeps trying to run a new pod to get exit code 0 until it reaches the activeDeadlineSeconds timeout. How can I avoid that? I would like to run build commands in Kubernetes to check compilation; if compilation fails I'll get an exit code different from 0, and I don't want to run the compilation again.
Is it possible? How?
| By now this is possible for Jobs by setting backoffLimit: 0 which tells the controller to do 0 retries. default is 6
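Applied to the job from the question, a rough sketch (the failing command is illustrative, since exit on its own is a shell builtin rather than an executable):
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout-6
spec:
  backoffLimit: 0              # no retries on failure
  activeDeadlineSeconds: 30
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-e", "exit 1"]
      restartPolicy: Never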
| Kubernetes | 39,893,238 | 58 |
I want to store files in Kubernetes Secrets but I haven't found how to do it using a yaml file.
I've been able to do it using the CLI with kubectl:
kubectl create secret generic some-secret --from-file=secret1.txt=secrets/secret1.txt
But when I try something similar in a yaml:
apiVersion: v1
kind: Secret
metadata:
name: some-secret
type: Opaque
data:
secret1.txt: secrets/secret1.txt
I´ve got this error:
[pos 73]: json: error decoding base64 binary 'assets/elasticsearch.yml': illegal base64 data at input byte 20
I'm following this guide http://kubernetes.io/docs/user-guide/secrets/. It explains how to create a secret using a yaml but not how to create a secret from a file using yaml.
Is it possible? If so, how can I do it?
| As answered in a previous post, we need to provide the certificate/key encoded as base64 in the file.
Here is a generic example for a certificate (in this case SSL):
The secret.yml.tmpl:
apiVersion: v1
kind: Secret
metadata:
name: test-secret
namespace: default
type: Opaque
data:
server.crt: SERVER_CRT
server.key: SERVER_KEY
Pre-process the file to include the certificate/key:
sed "s/SERVER_CRT/`cat server.crt|base64 -w0`/g" secret.yml.tmpl | \
sed "s/SERVER_KEY/`cat server.key|base64 -w0`/g" | \
kubectl apply -f -
Note that the certificate/key are encoded using base64 without whitespaces (-w0).
For the TLS can be simply:
kubectl create secret tls test-secret-tls --cert=server.crt --key=server.key
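Another option, if the goal is simply to end up with a YAML manifest built from a file without hand-encoding base64, is to let kubectl generate it (newer kubectl uses --dry-run=client, older versions plain --dry-run):
kubectl create secret generic some-secret \
  --from-file=secret1.txt=secrets/secret1.txt \
  --dry-run=client -o yaml > some-secret.yaml

# Review/commit the file, then apply it
kubectl apply -f some-secret.yaml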
| Kubernetes | 36,887,946 | 58 |
As per this official document, Kubernetes Persistent Volumes support three types of access modes.
ReadOnlyMany
ReadWriteOnce
ReadWriteMany
The given definitions of them in the document is very high-level. It would be great if someone can explain them in little more detail along with some examples of different use cases where we should use one vs other.
| You should use ReadWriteX when you plan to have Pods that will need to write to the volume, and not only read data from the volume.
You should use XMany when you want the ability for Pods to access the given volume while those workloads are running on different nodes in the Kubernetes cluster. These Pods may be multiple replicas belonging to a Deployment, or may be completely different Pods. There are many cases where it's desirable to have Pods running on different nodes, for instance if you have multiple Pod replicas for a single Deployment, then having them run on different nodes can help ensure some amount of continued availability even if one of the nodes fails or is being updated.
If you don't use XMany, but you do have multiple Pods that need access to the given volume, that will force Kubernetes to schedule all those Pods to run on whatever node the volume gets mounted to first, which could overload that node if there are too many such pods, and can impact the availability of Deployments whose Pods need access to that volume as explained in the previous paragraph.
So putting all that together:
If you need to write to the volume, and you may have multiple Pods needing to write to the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and ReadWriteMany is an option given the volume plugin for your K8s cluster, use ReadWriteMany.
If you need to write to the volume but either you don't have the requirement that multiple pods should be able to write to it, or ReadWriteMany simply isn't an available option for you, use ReadWriteOnce.
If you only need to read from the volume, and you may have multiple Pods needing to read from the volume where you'd prefer the flexibility of those Pods being scheduled to different nodes, and ReadOnlyMany is an option given the volume plugin for your K8s cluster, use ReadOnlyMany.
If you only need to read from the volume but either you don't have the requirement that multiple pods should be able to read from it, or ReadOnlyMany simply isn't an available option for you, use ReadWriteOnce. In this case, you want the volume to be read-only but the limitations of your volume plugin have forced you to choose ReadWriteOnce (there's no ReadOnlyOnce option). As a good practice, consider the containers.volumeMounts.readOnly setting to true in your Pod specs for volume mounts corresponding to volumes that are intended to be read-only.
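A sketch tying this together: a PVC requesting ReadWriteMany plus a read-only mount in a Pod (names, storage size and image are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany              # pods on many nodes may mount it
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true           # this container only needs to read
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: shared-data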
| Kubernetes | 57,798,267 | 57 |
I came to the realization that Windows 10 Docker has the Kubernetes options in it now, so I want to completely uninstall minikube and use the Kubernetes version that comes with docker windows instead.
How can I completely uninstall minikube in windows 10?
| This as simple as running:
minikube stop & REM stops the VM
minikube delete & REM deleted the VM
Then delete the .minikube and .kube directories usually under:
C:\users\{user}\.minikube
and
C:\users\{user}\.kube
Or if you are using chocolatey:
C:\ProgramData\chocolatey\bin\minikube stop
C:\ProgramData\chocolatey\bin\minikube delete
choco uninstall minikube
choco uninstall kubectl
| Kubernetes | 53,263,586 | 57 |
I am using Docker for Mac with Kubernetes support and I'm struggling to create a Kubernetes Deployment that references a locally built image.
Output of docker images:
REPOSITORY TAG IMAGE
test latest 2c3bdb36a5ed
My deployment.yaml :
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-deployment
spec:
selector:
matchLabels:
app: helloworld
replicas: 1
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: aaa
image: test:latest
ports:
- containerPort: 8080
When i run kubectl apply -f deplyment.yaml pods are created but:
helloworld-deployment-764b8b85d8-2c4kl 0/1 ImagePullBackOff 0
helloworld-deployment-764b8b85d8-rzq7l 0/1 ImagePullBackOff 0
kubectl describe of one of these pods gives:
Normal Scheduled 20s default-scheduler Successfully assigned helloworld-deployment-79f66d97c6-7tj2x to docker-for-desktop
Normal SuccessfulMountVolume 19s kubelet, docker-for-desktop MountVolume.SetUp succeeded for volume "default-token-72f44"
Normal BackOff 16s kubelet, docker-for-desktop Back-off pulling image "test:latest"
Warning Failed 16s kubelet, docker-for-desktop Error: ImagePullBackOff
Normal Pulling 4s (x2 over 19s) kubelet, docker-for-desktop pulling image "test:latest"
Warning Failed 2s (x2 over 17s) kubelet, docker-for-desktop Failed to pull image "test:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for test, repository does not exist or may require 'docker login'
Warning Failed 2s (x2 over 17s) kubelet, docker-for-desktop Error: ErrImagePull
What is interesting is that if I try to run some image hosted on Docker Hub, then everything is fine.
I also tried to use skaffold and it also works like a charm...
I see some similar issues regarding minikube where the solution is to use the minikube docker daemon to build images so that they can be referenced from the Kubernetes cluster.
I would like to avoid setting up a local repo, so how can I make it work with Docker's Kubernetes?
| I was able to run a local image by setting the imagePullPolicy to Never.
For example:
apiVersion: v1
kind: Pod
metadata:
name: local-image-test
spec:
containers:
- name: local-image-container
image: local-image:latest
imagePullPolicy: Never
(Credit to https://github.com/kubernetes/kubernetes/issues/1293#issuecomment-357326426 for this solution)
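The same field goes into the container spec of the Deployment from the question, roughly:
    spec:
      containers:
      - name: aaa
        image: test:latest
        imagePullPolicy: Never    # use the locally built image, never pull
        ports:
        - containerPort: 8080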
| Kubernetes | 50,739,405 | 57 |
I am trying to create an S3 bucket using
aws s3api create-bucket --bucket kubernetes-aws-wthamira-io
It gives this error:
An error occurred (IllegalLocationConstraintException) when calling
the CreateBucket operation: The unspecified location constraint is
incompatible for the region specific endpoint this request was sent
to.
I set the region using aws configure to eu-west-1
Default region name [eu-west-1]:
but it gives the same error. How do I solve this?
I use osx terminal to connect aws
| try this:
aws s3api create-bucket --bucket kubernetes-aws-wthamira-io --create-bucket-configuration LocationConstraint=eu-west-1
Regions outside of us-east-1 require the appropriate LocationConstraint to be specified in order to create the bucket in the desired region.
https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html
| Kubernetes | 49,174,673 | 57 |
I know this is some kind of syntax/yaml structure related error but the message is so cryptic I have no idea what the issue is:
Error: render error in "mychart/templates/ingress.yaml": template: mychart/templates/ingress.yaml:35:37: executing "mychart/templates/ingress.yaml" at <.Values.network.appP...>: can't evaluate field Values in type interface {}
This is in my values.yaml:
network:
appPort: 4141
This is the ingress.yaml:
{{- $fullName := include "mychart.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
app.kubernetes.io/name: {{ include "mychart.name" . }}
helm.sh/chart: {{ include "mychart.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ . }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ .Values.network.appPort }}
{{- end }}
{{- end }}
Why doesn't {{ .Values.network.appPort }} work? I have used values with this same structure in other places
| In the range block, . refers to the current value at execution time. Instead of ., you can use $ to access the root data object inside the range block, instead of declaring top-level variables.
Example:
{{- range $host := .Values.ingress.hosts }}
- host: {{ $host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: {{ $.Values.frontend.service.port }}
{{- end}}
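Applied to the ingress template from the question, the inner range over .paths would then look roughly like this:
      paths:
      {{- range .paths }}
      - path: {{ . }}
        backend:
          serviceName: {{ $fullName }}
          servicePort: {{ $.Values.network.appPort }}
      {{- end }}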
| Kubernetes | 56,671,452 | 56 |
I am getting the following error
ERROR 2013 (HY000): Lost connection to MySQL server at
'reading authorization packet', system error: 0
when trying to connect to my MySQL server.
What I am doing:
I have Master - Slave replication in MySQL that is working and just added load balance capabilities using F5.
I have configured the F5 according to their site.
But when I am trying to connect to my MySQL server using the IP that the F5 was configured with I get
ERROR 2013 (HY000): Lost connection to MySQL server at
'reading authorization packet', system error: 0
Any ideas?
Update on my progress : ZERO
- I am getting the same error
I get no entries in /var/log/secure that would show somebody trying to authenticate from the IP where I had created my load-balance server.
No entries in the MySQL error log.
The command - returns nothing
mysql> SHOW GLOBAL STATUS LIKE 'Aborted_connections';
Empty set (0.00 sec)
I've already altered my my.cnf file and added the
[mysqld]
skip-name-resolve
Altered the connect_timeout to 10.
So it seems I get no response from the server I have created on my F5.
I finally convinced the F5 admin to pass me the log for the F5 server, and I have extracted all I need from it.
Here is the output :
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <CLIENT_ACCEPTED>: BIG-IP MySQL Proxy -- clientside initial connection
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <CLIENT_ACCEPTED>: BIG-IP MySQL Proxy -- clientside responding with server WELCOME packet
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <CLIENT_DATA>: BIG-IP MySQL Proxy -- clientside authenticated flag not set
Jan 28 15:46:39 tmm err tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <CLIENT_DATA>: BIG-IP MySQL Proxy -- mysql client: attempting to do something before authentication
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <LB_SELECTED>: BIG-IP MySQL Proxy -- serverside selected pool /Common/foss-mysql-slave_pool node SLAVE-IP
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <CLIENT_CLOSED>: BIG-IP MySQL Proxy -- clientside connection closed from MASTER-IP(XXXXXXX)
Jan 28 15:46:39 tmm debug tmm[6459]: Rule /Common/iRule-f5_mysql_proxy <SERVER_CLOSED>: BIG-IP MySQL Proxy -- serverside connection closed from node SLAVE-IP(XXXXXXXX)
I've replaced the IPs for security's sake!
Just as an extra - and I think the problem is here - my MySQL version is 5.1.69-log.
Thx All
| From documentation:
More rarely, it can happen when the client is attempting the initial
connection to the server. In this case, if your connect_timeout value
is set to only a few seconds, you may be able to resolve the problem
by increasing it to ten seconds, perhaps more if you have a very long
distance or slow connection. You can determine whether you are
experiencing this more uncommon cause by using SHOW STATUS LIKE
'aborted_connections'. It will increase by one for each initial
connection attempt that the server aborts. You may see “reading
authorization packet” as part of the error message; if so, that also
suggests that this is the solution that you need.
Try increasing connect_timeout in your my.cnf file
Another style:
MySQL: Lost connection to MySQL server at 'reading initial communication packet'
At some point, it was impossible for remote clients to connect to
the MySQL server.
The client (some application on a Windows platform) gave a vague
description like Connection unexpectedly terminated.
When remotely logging in with the MySQL client the following error
appeared:
ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 0
On FreeBSD this happens because there was no match found in /etc/hosts.allow. Adding the following line before the line saying ALL:ALL fixes this:
mysqld: ALL: allow
On non-FreeBSD Unix systems, it is worth to check the files /etc/hosts.allow and /etc/hosts.deny. If you are restricting connections, make sure this line is in /etc/hosts.allow:
mysqld: ALL
or check if the host is listed in /etc/hosts.deny.
In Arch Linux, a similar line can be added to /etc/hosts.allow:
mysqld: ALL
| F5 | 21,091,850 | 49 |
We are currently using TeamCity for CI builds and we are trying to set up automated deployments as well.
The project I'm currently trying to deploy is a Windows service that sits under an F5 load balancer. In the future we would also like to automate the deployment of our IIS websites which also sit under the F5.
From TeamCity we can execute PowerShell scripts to unistall the windows service on the desired server, push our files to it, then reinstall the service.
However, I'm having trouble figuring out how to deal with the load balancer. We would want to disable 1 node at a time, watch for all the connections to drop, then deploy our code and bring the node back up.
This seems like it would be a very common issue, but I'm finding surprisingly little information about how to do it.
Thanks!
Answered
Thanks Jonathon Rossi for the iControl Powershell cmdlets!
For other users' sakes, here is a sample of shutting down, monitoring for connections to drop, pushing code, and then turning back on the F5 load balancer through a powershell script
For these scripts to work you will first have to install the F5 iControl cmdlets from the links provided in the Answer below
#PULL IN OUR F5 UTILITY FUNCTIONS
. .\F5Functions.ps1
#DEFINE LOGIC TO DEPLOY CODE TO A NODE THAT HAS ALREADY BEEN REMOVED FROM THE LOAD BALANCER
function Deploy(
[F5Node]$Node
)
{
Write-Host "Deploying To: "$Node.Name
#TODO: Remotely shut down services, push code, start back up services
}
#DEFINE NODES
$nodes = @()
$nodes += New-Object F5Node -ArgumentList @("TestNode1", "1.1.1.1")
$nodes += New-Object F5Node -ArgumentList @("TestNode2", "1.1.1.2")
#DEPLOY
DeployToNodes -Nodes $nodes -F5Host $F5Host -F5UserName $F5UserName -F5Password $F5Password
And here is the reusable F5Functions script
#Load the F5 powershell iControl snapin
Add-PSSnapin iControlSnapin;
Write-Host "Imported F5 function!!!"
Add-Type @'
public class F5Node
{
public F5Node(string name, string address){
Address = address;
Name = name;
}
public string Address {get;set;}
public string Name {get;set;}
public string QualifiedName {get{return "/Common/" + Name;}}
}
'@
function DeployToNodes(
[string]$F5Host = $(throw "Missing Required Parameter"),
[string]$F5UserName = $(throw "Missing Required Parameter"),
[string]$F5Password = $(throw "Missing Required Parameter"),
[F5Node[]]$Nodes = $(throw "Missing Required Parameter"),
[int]$MaxWaitTime = 300 #seconds... defaults to 5 minutes
){
Authenticate -F5Host $F5Host -F5UserName $F5UserName -F5Password $F5Password
foreach($node in $Nodes){
DisableNode -Node $node
WaitForConnectionsToDrop -Node $node -MaxWaitTime $MaxWaitTime
#Assume the Script that included this script defined a Deploy Method with a Node param
Deploy -Node $node
EnableNode -Node $node
}
}
function Authenticate(
[string]$F5Host = $(throw "Missing Required Parameter"),
[string]$F5UserName = $(throw "Missing Required Parameter"),
[string]$F5Password = $(throw "Missing Required Parameter")
)
{
Write-Host "Authenticating to F5..."
Initialize-F5.iControl -HostName $F5Host -Username $F5UserName -Password $F5Password
Write-Host "Authentication Success!!!"
}
function ParseStatistic(
[iControl.CommonStatistic[]]$StatsCollection = $(throw "Missing Required Parameter"),
[string]$StatName = $(throw "Missing Required Parameter")
)
{
for($i=0; $i -lt $StatsCollection.Count; $i++){
if($StatsCollection[$i].type.ToString() -eq $StatName){
return $StatsCollection[$i].value.low
break
}
}
}
function GetStats(
[F5Node]$Node = $(throw "Missing Required Parameter")
)
{
$arr = @($Node.QualifiedName)
$nodeStats = (Get-F5.iControl).LocalLBNodeAddressV2.get_statistics($arr)
return $nodeStats.statistics.statistics
#foreach($memberStats in $poolStats.statistics){
# if($memberStats.member.address.ToString() -eq $Node -and $memberStats.member.port -eq $Port){
# return $memberStats.statistics
# }
#}
}
function GetStatistic(
[F5Node]$Node = $(throw "Missing Required Parameter"),
[string]$StatName = $(throw "Missing Required Parameter")
)
{
$stats = GetStats -Node $Node
$stat = ParseStatistic -StatsCollection $stats -StatName $StatName
return $stat
}
function DisableNode(
[F5Node]$Node = $(throw "Missing Required Parameter")
)
{
Disable-F5.LTMNodeAddress -Node $Node.Address
Write-Host "Disabled Node '$Node'"
}
function EnableNode(
[F5Node]$Node = $(throw "Missing Required Parameter")
)
{
Enable-F5.LTMNodeAddress -Node $Node.Address
Write-Host "Enabled Node '$Node'"
}
function WaitForConnectionsToDrop(
[F5Node]$Node = $(throw "Missing Required Parameter"),
[int]$MaxWaitTime = 300
)
{
$connections = GetCurrentConnections -Node $Node
$elapsed = [System.Diagnostics.Stopwatch]::StartNew();
while($connections -gt 0 -and $elapsed.ElapsedMilliseconds -lt ($MaxWaitTime * 1000)){
Start-Sleep -Seconds 10
$connections = GetCurrentConnections -Node $Node
}
}
function GetCurrentConnections(
[F5Node]$Node = $(throw "Missing Required Parameter")
)
{
$connections = GetStatistic -Node $Node -StatName "STATISTIC_SERVER_SIDE_CURRENT_CONNECTIONS"
$name = $Node.Name + ":" + $Node.Address
Write-Host "$connections connections remaining on '$name'"
return $connections
}
| I haven't used it, but have you looked at the F5 iControl web service API and the F5 iControl PowerShell cmdlets provided by F5?
It looks like there are Enable-Member and Disable-Member cmdlets that you'll be able to use.
| F5 | 17,244,218 | 12 |
Sources such as this Okta-sponsored site (see the "Per-Request Customization" section) mention that the redirect_uri parameter of an authorization request SHOULD NEVER have a dynamic query part (ex: for session matching uses).
Quote:
The server should reject any authorization requests with redirect URLs
that are not an exact match of a registered URL.
Our OAuth AZ provider is BIG-IP F5. We are setting it up, and they seem to comply with the above view.
Our client is a web application built elsewhere, and they seem to not follow the above rule.
Here is a complete representation of the Authorization Endpoint (redacted):
https://ourownF5host.ca/f5-oauth2/v1/authorize?client_id=theIDofOurClient&redirect_uri=https%3A%2F%2FourClientAppHostname%2FClientRessource%2FRessource%3FSessionId%3D76eab448-52d1-4adb-8eba-e9ec1b9432a3&state=2HY-MLB0ST34wQUPCyHM-A&scope=RessourceData&response_type=code
They use a redirect_uri with a format similar to (I don't urlencode here, for simplicity's sake) : redirect_uri=https://ourClientAppHostname/ClientRessource/Ressource?SessionId=SOMELONGSESSIONID, with the SOMELONGSESSIONID value being DIFFERENT for each call.
We dug DEEP into RFC6749 (OAuth2), and found this in section 3.1.2.2:
The authorization server SHOULD require the client to provide the
complete redirection URI (the client MAY use the "state" request
parameter to achieve per-request customization). If requiring the
registration of the complete redirection URI is not possible, the
authorization server SHOULD require the registration of the URI
scheme, authority, and path (allowing the client to dynamically vary
only the query component of the redirection URI when requesting
authorization).
What I understand, and would like to validate here, is that the first source (Okta) and F5 accept ONLY the first part of the rule above, and require the redirection uri to be COMPLETELY registered without any dynamic part.
Am I right to affirm that they (Okta and F5) DO NOT comply with the second part of the excerpt, citing that they should "allow(ing) the client to dynamically vary
only the query component of the redirection URI when requesting
authorization" ?
OR, is there any kind of official correction/evolution of the RFC6749, that warrants both companies design position ?
| TL;DR:
No, the redirect uri must be static for security reasons. If the client needs to keep a state between the authorization request and its asynchronous response, use the OAuth 2.0 state parameter.
Long version :
RFC6749 (the initial OAuth 2.0 specification) has been published in 2012 and OAuth security landscape has evolved a lot since then.
RFC6819, an OAuth 2.0 security review from 2013 already mentioned that refusing dynamically crafted redirect uris was a good way to protect against XSS and client impersonation attacks.
OpenID Connect, from 2014, a commonly used extension of OAuth 2.0 with authentication capabilities, already takes that recommendation into account and mandates exact string matching for all redirect uris.
The current draft recommendation for OAuth 2.0 Best Security Practice confirms that by enforcing redirect_uris preregistration and mandating the use simple string comparison by the AS when validating the redirect_uri passed in the request. So a dynamic redirect_uri must not be used.
Your client definitely makes a wrong move by using the redirect_uri as a "state keeper" between the Authorization request and response, by using a dynamically crafted SessionID attribute inside the redirect_uri. OAuth2.0 has a dedicated authorization request parameter for that purpose, which is "state". The client should use it. The AS will append that state in the parameters of the redirect_uri when it issues the response, so the client will be able to find back this state inside the response.
The proper authorization request would be:
https://youras/authorize?client_id=your_client_id&response_type=code&state=SOMELONGSTATE&redirect_uri=https%3A%2F%2Fsomehost%2Fauthcallback
The response will look like:
https://somehost/authcallback?state=SOMELONGSTATE&code=anazcode
This way the redirect_uri is static, so a simple string comparison is enough to validate that uri on AS side. Any algorithm more complex than simple string comparison would be subject to security flaws.
| F5 | 55,524,480 | 10 |
I can't access the public IP assigned by the MetalLB load balancer.
I created a Kubernetes cluster in Contabo. It's 1 master and 2 workers. Each one has its own public IP.
I did it with kubeadm + flannel. Later I installed MetalLB to use load balancing.
I used this manifest for installing nginx:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
ports:
- name: http
port: 8080
protocol: TCP
targetPort: 80
selector:
app: nginx
type: LoadBalancer
It works, pods are running. I see the external IP address after:
kubectl get services
From each node/host I can curl to that ip and port and I can get nginx's:
<h1>Welcome to nginx!</h1>
So far, so good. BUT:
What I still miss is to access to that service (nginx) from my computer.
I can try to access to each node (master + 2 slaves) by their IP:PORT and nothing happens. The final goal is to have a domain that access to that service but I can't guess witch IP should I use.
What I'm missing?
Should MetalLB just expose my 3 possible IPs?
Should I add something else on each server as a reverse proxy?
I'm asking this here because all articles/tutorials on baremetal/VPS (non aws,GKE, etc...) do this on a kube on localhost and miss this basic issue.
Thanks.
| What you are missing is a routing policy
Your external IP addresses must belong to the same network as your nodes. Alternatively, you can add a route to your external address range at your default gateway level and use a static NAT for each address.
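A rough sketch of both options (addresses are placeholders; the ConfigMap uses the format MetalLB supported at the time of this question):
# Option 1: give MetalLB a layer2 pool taken from the nodes' own network
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.240-203.0.113.250

# Option 2: on your gateway/router, route a dedicated range towards a node
#   ip route add 203.0.113.240/28 via <node-public-ip>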
| MetalLB | 57,099,703 | 10 |
I have a question about my haproxy config:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 syslog emerg
maxconn 4000
quiet
user haproxy
group haproxy
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option abortonclose
option dontlognull
option httpclose
option httplog
option forwardfor
option redispatch
timeout connect 10000 # default 10 second time out if a backend is not found
timeout client 300000 # 5 min timeout for client
timeout server 300000 # 5 min timeout for server
stats enable
listen http_proxy localhost:81
balance roundrobin
option httpchk GET /empty.html
server server1 myip:80 maxconn 15 check inter 10000
server server2 myip:80 maxconn 15 check inter 10000
As you can see it is straight forward, but I am a bit confused about how the maxconn properties work.
There is the global one and the maxconn on the server, in the listen block. My thinking is this: the global one manages the total number of connections that haproxy, as a service, will queue or process at one time. If the number gets above that, it either kills the connection, or pools in some linux socket? I have no idea what happens if the number gets higher than 4000.
Then you have the server maxconn property set at 15. First off, I set that at 15 because my php-fpm, this is forwarding to on a separate server, only has so many child processes it can use, so I make sure I am pooling the requests here, instead of in php-fpm. Which I think is faster.
But back on the subject, my theory about this number is each server in this block will only be sent 15 connections at a time. And then the connections will wait for an open server. If I had cookies on, the connections would wait for the CORRECT open server. But I don't.
So questions are:
What happens if the global connections get above 4000? Do they die? Or pool in Linux somehow?
Are the global connection related to the server connections, other than the fact you can't have a total number of server connections greater than global?
When figuring out the global connections, shouldn't it be the amount of connections added up in the server section, plus a certain percentage for pooling? And obviously you have other restrains on the connections, but really it is how many you want to send to the proxies?
Thank you in advance.
| Willy got me an answer by email. I thought I would share it. His answers are in bold.
I have a question about my haproxy config:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 syslog emerg
maxconn 4000
quiet
user haproxy
group haproxy
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option abortonclose
option dontlognull
option httpclose
option httplog
option forwardfor
option redispatch
timeout connect 10000 # default 10 second time out if a backend is not found
timeout client 300000 # 5 min timeout for client
timeout server 300000 # 5 min timeout for server
stats enable
listen http_proxy localhost:81
balance roundrobin
option httpchk GET /empty.html
server server1 myip:80 maxconn 15 check inter 10000
server server2 myip:80 maxconn 15 check inter 10000
As you can see it is straight forward, but I am a bit confused about how the
maxconn properties work.
There is the global one and the maxconn on the server, in the listen block.
And there is also another one in the listen block which defaults to something
like 2000.
My thinking is this: the global one manages the total number of connections
that haproxy, as a service, will que or process at one time.
Correct. It's the per-process max number of concurrent connections.
If the number
gets above that, it either kills the connection, or pools in some linux
socket?
The later, it simply stops accepting new connections and they remain in the
socket queue in the kernel. The number of queuable sockets is determined
by the min of (net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and the
listen block's maxconn).
I have no idea what happens if the number gets higher than 4000.
The excess connections wait for another one to complete before being
accepted. However, as long as the kernel's queue is not saturated, the
client does not even notice this, as the connection is accepted at the
TCP level but is not processed. So the client only notices some delay
to process the request.
But in practice, the listen block's maxconn is much more important,
since by default it's smaller than the global one. The listen's maxconn
limits the number of connections per listener. In general it's wise to
configure it for the number of connections you want for the service,
and to configure the global maxconn to the max number of connections
you let the haproxy process handle. When you have only one service,
both can be set to the same value. But when you have many services,
you can easily understand it makes a huge difference, as you don't
want a single service to take all the connections and prevent the
other ones from working.
Then you have the server maxconn property set at 15. First off, I set that at
15 because my php-fpm, this is forwarding to on a separate server, only has
so many child processes it can use, so I make sure I am pooling the requests
here, instead of in php-fpm. Which I think is faster.
Yes, not only it should be faster, but it allows haproxy to find another
available server whenever possible, and also it allows it to kill the
request in the queue if the client hits "stop" before the connection is
forwarded to the server.
But back on the subject, my theory about this number is each server in this
block will only be sent 15 connections at a time. And then the connections
will wait for an open server. If I had cookies on, the connections would wait
for the CORRECT open server. But I don't.
That's exactly the principle. There is a per-proxy queue and a per-server
queue. Connections with a persistence cookie go to the server queue and
other connections go to the proxy queue. However since in your case no
cookie is configured, all connections go to the proxy queue. You can look
at the diagram doc/queuing.fig in haproxy sources if you want, it explains
how/where decisions are taken.
So questions are:
What happens if the global connections get above 4000? Do they die? Or
pool in Linux somehow?
They're queued in linux. Once you overwhelm the kernel's queue, then they're
dropped in the kernel.
Are the global connection related to the server connections, other than
the fact you can't have a total number of server connections greater than
global?
No, global and server connection settings are independant.
When figuring out the global connections, shouldn't it be the amount of
connections added up in the server section, plus a certain percentage for
pooling? And obviously you have other restrains on the connections, but
really it is how many you want to send to the proxies?
You got it right. If your server's response time is short, there is nothing
wrong with queueing thousands of connections to serve only a few at a time,
because it substantially reduces the request processing time. Practically,
establishing a connection nowadays takes about 5 microseconds on a gigabit
LAN. So it makes a lot of sense to let haproxy distribute the connections
as fast as possible from its queue to a server with a very small maxconn.
I remember one gaming site queuing more than 30000 concurrent connections
and running with a queue of 30 per server ! It was an apache server, and
apache is much faster with small numbers of connections than with large
numbers. But for this you really need a fast server, because you don't
want to have all your clients queued waiting for a connection slot because
the server is waiting for a database for instance.
Also something which works very well is to dedicate servers. If your site
has many statics, you can direct the static requests to a pool of servers
(or caches) so that you don't queue static requests on them and that the
static requests don't eat expensive connection slots.
Hoping this helps,
Willy
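A minimal sketch tying the three maxconn levels together (numbers are illustrative):
global
    maxconn 4000              # per-process ceiling

listen http_proxy
    bind localhost:81
    maxconn 2000              # per-listener ceiling (defaults to about 2000 if omitted)
    balance roundrobin
    server server1 myip:80 maxconn 15 check inter 10000
    server server2 myip:80 maxconn 15 check inter 10000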
| HAProxy | 8,750,518 | 101 |
So according to the haproxy author, who knows a thing or two about http:
Keep-alive was invented to reduce CPU
usage on servers when CPUs were 100
times slower. But what is not said is
that persistent connections consume a
lot of memory while not being usable
by anybody except the client who
openned them. Today in 2009, CPUs are
very cheap and memory is still limited
to a few gigabytes by the architecture
or the price. If a site needs
keep-alive, there is a real problem.
Highly loaded sites often disable
keep-alive to support the maximum
number of simultaneous clients. The
real downside of not having keep-alive
is a slightly increased latency to
fetch objects. Browsers double the
number of concurrent connections on
non-keepalive sites to compensate for
this.
(from http://haproxy.1wt.eu/)
Is this in line with other people's experience? i.e. without keep-alive, is the result barely noticeable now? (It's probably worth noting that with websockets etc. a connection is kept "open" regardless of keep-alive status anyway - for very responsive apps.)
Is the effect greater for people who are remote from the server, or if there are many artifacts to load from the same host when loading a page? (I would think things like CSS, images and JS are increasingly coming from cache-friendly CDNs.)
Thoughts?
(not sure if this is a serverfault.com thing, but I won't cross post until someone tells me to move it there).
| Hey since I'm the author of this citation, I'll respond :-)
There are two big issues on large sites : concurrent connections and latency. Concurrent connection are caused by slow clients which take ages to download contents, and by idle connection states. Those idle connection states are caused by connection reuse to fetch multiple objects, known as keep-alive, which is further increased by latency. When the client is very close to the server, it can make an intensive use of the connection and ensure it is almost never idle. However when the sequence ends, nobody cares to quickly close the channel and the connection remains open and unused for a long time. That's the reason why many people suggest using a very low keep-alive timeout. On some servers like Apache, the lowest timeout you can set is one second, and it is often far too much to sustain high loads : if you have 20000 clients in front of you and they fetch on average one object every second, you'll have those 20000 connections permanently established. 20000 concurrent connections on a general purpose server like Apache is huge, will require between 32 and 64 GB of RAM depending on what modules are loaded, and you can probably not hope to go much higher even by adding RAM. In practice, for 20000 clients you may even see 40000 to 60000 concurrent connections on the server because browsers will try to set up 2 to 3 connections if they have many objects to fetch.
If you close the connection after each object, the number of concurrent connections will dramatically drop. Indeed, it will drop by a factor corresponding to the average time to download an object by the time between objects. If you need 50 ms to download an object (a miniature photo, a button, etc...), and you download on average 1 object per second as above, then you'll only have 0.05 connection per client, which is only 1000 concurrent connections for 20000 clients.
Now the time to establish new connections is going to count. Far remote clients will experience an unpleasant latency. In the past, browsers used to use large amounts of concurrent connections when keep-alive was disabled. I remember figures of 4 on MSIE and 8 on Netscape. This would really have divided the average per-object latency by that much. Now that keep-alive is present everywhere, we're not seeing that high numbers anymore, because doing so further increases the load on remote servers, and browsers take care of protecting the Internet's infrastructure.
This means that with todays browsers, it's harder to get the non-keep-alive services as much responsive as the keep-alive ones. Also, some browsers (eg: Opera) use heuristics to try to use pipelinining. Pipelining is an efficient way of using keep-alive, because it almost eliminates latency by sending multiple requests without waiting for a response. I have tried it on a page with 100 small photos, and the first access is about twice as fast as without keep-alive, but the next access is about 8 times as fast, because the responses are so small that only latency counts (only "304" responses).
I'd say that ideally we should have some tunables in the browsers to make them keep the connections alive between fetched objects, and immediately drop it when the page is complete. But we're not seeing that unfortunately.
For this reason, some sites which need to install general purpose servers such as Apache on the front side and which have to support large amounts of clients generally have to disable keep-alive. And to force browsers to increase the number of connections, they use multiple domain names so that downloads can be parallelized. It's particularly problematic on sites making intensive use of SSL because the connection setup cost is even higher as there is one additional round trip.
What is more commonly observed nowadays is that such sites prefer to install light frontends such as haproxy or nginx, which have no problem handling tens to hundreds of thousands of concurrent connections, they enable keep-alive on the client side, and disable it on the Apache side. On this side, the cost of establishing a connection is almost null in terms of CPU, and not noticeable at all in terms of time. That way this provides the best of both worlds : low latency due to keep-alive with very low timeouts on the client side, and low number of connections on the server side. Everyone is happy :-)
Some commercial products further improve this by reusing connections between the front load balancer and the server and multiplexing all client connections over them. When the servers are close to the LB, the gain is not much higher than with the previous solution, but it will often require adaptations on the application to ensure there is no risk of session crossing between users due to the unexpected sharing of a connection between multiple users. In theory this should never happen. Reality is much different :-)
| HAProxy | 4,139,379 | 98 |
Is there any way to validate the HAProxy haproxy.cfg file before restarting the HAProxy service? For example: There might be a small spelling/syntax error in a larger haproxy.cfg file. I searched through several forums, but was unable to find anything in relation to validating the haproxy.cfg files for syntax errors.
As of now, I rely on trial and error on a developer machine before I upload the changes to a production server.
| The official HAProxy configuration file check is buried in the help output.
/usr/local/sbin/haproxy --help
There are two ways to check the haproxy.cfg syntax.
One way is /usr/local/sbin/haproxy -c -V -f /etc/haproxy/haproxy.cfg
which validates the file syntax. The -c switch stands for "check", while the other two denote "verbose" and "file".
Another way is to run sudo service haproxy configtest
I hope this helps anyone looking to check the syntax of the haproxy.cfg file before restarting the service.
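As a usage note, you can chain the check with a reload so a broken config never gets pushed. A minimal sketch, assuming the default config path and that the service is named haproxy:
haproxy -c -f /etc/haproxy/haproxy.cfg && sudo service haproxy reload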
| HAProxy | 39,609,178 | 82 |
haproxy does not start anymore, it shows the error
bind <ip>:443' : unable to load SSL private key from PEM file ...
We did not change anything on the certificates or configuration. Since the last start we only made normal updates to the system.
To find the error, I generated a completely new certificate (self signed) but the error still exists.
This is the structure of the PEM file:
-----BEGIN CERTIFICATE-----
MIIDXjCCAkY...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEpgIBAAKC....
-----END RSA PRIVATE KEY-----
I also tried to convert the private key with
openssl pkcs8 -topk8 -inform pem -in server.key -outform pem -nocrypt -out server_new.key
but haproxy still shows the same error.
I've been trying for hours now but I cannot find the reason. Please help! Thank you!
Update:
The problem has something to do with file access. The PEM file was stored at /data/ssl/domainname/domainname.pem. File rights are ok. When I move the PEM file to /etc/haproxy then everything is ok.
| The order in which the cert and key files appear in the pem is important. Use the following to create the pem file.
cat example.com.crt example.com.key > example.com.pem
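To sanity-check the order afterwards (a small sketch), note that openssl reads the first PEM block in the file, so this should print your server certificate's subject rather than an intermediate's:
openssl x509 -noout -subject -in example.com.pem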
| HAProxy | 27,947,982 | 62 |
We've been fighting with HAProxy for a few days now in Amazon EC2; the experience has so far been great, but we're stuck on squeezing more performance out of the software load balancer. We're not exactly Linux networking whizzes (we're a .NET shop, normally), but we've so far held our own, attempting to set proper ulimits, inspecting kernel messages and tcpdumps for any irregularities.
So far though, we've reached a plateau of about 1,700 requests/sec, at which point client timeouts abound (we've been using and tweaking httperf for this purpose). A coworker and I were listening to the most recent Stack Overflow podcast, in which the Reddit founders note that their entire site runs off one HAProxy node, and that it so far hasn't become a bottleneck. Ack! Either they're somehow not seeing that many concurrent requests, we're doing something horribly wrong, or the shared nature of EC2 is limiting the network stack of the EC2 instance (we're using a large instance type). Considering the fact that both Joel and the Reddit founders agree that network will likely be the limiting factor, is it possible that's the limitation we're seeing?
Any thoughts are greatly appreciated!
Edit It looks like the actual issue was not, in fact, with the load balancer node! The culprit was actually the nodes running httperf, in this instance. As httperf builds and tears down a socket for each request, it spends a good amount of CPU time in the kernel. As we bumped the request rate higher, the TCP FIN TTL (being 60s by default) was keeping sockets around too long, and the ip_local_port_range's default was too low for this usage scenario. Basically, after a few minutes of the client (httperf) node constantly creating and destroying new sockets, the number of unused ports ran out, and subsequent 'requests' errored-out at this stage, yielding low request/sec numbers and a large amount of errors.
We had also looked at nginx, but we've been working with RightScale, and they've got drop-in scripts for HAProxy. Oh, and we've got too tight a deadline [of course] to switch out components unless it proves absolutely necessary. Mercifully, being on AWS allows us to test out another setup using nginx in parallel (if warranted), and make the switch overnight later on.
This page describes each of the sysctl variables fairly well (ip_local_port_range and tcp_fin_timeout were tuned, in this case).
| Not answering the question directly, but EC2 now supports load balancing through Elastic Load Balancing rather than running your own load balancer in an EC2 instance.
EDIT: Amazon's Route 53 DNS service now offers a way to point a top-level domain at an ELB with an "alias" record. Since Amazon knows the current IP address of the ELB, it can return an A record for that current IP rather than having to use a CNAME record, while still being free to change the IP from time to time.
| HAProxy | 260,413 | 47 |
I am using HAProxy to send requests, on a subdomain, to a node.js app.
I am unable to get WebSockets to work. So far I have only been able to get the client to establish a WebSocket connection but then there is a disconnection which follows very soon after.
I am on ubuntu. I have been using various versions of socket.io and node-websocket-server. The client is either the latest versions of Safari or Chrome. HAProxy version is 1.4.8
Here is my HAProxy.cfg
global
maxconn 4096
pidfile /var/run/haproxy.pid
daemon
defaults
mode http
maxconn 2000
option http-server-close
option http-pretend-keepalive
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend HTTP_PROXY
bind *:80
timeout client 86400000
#default server
default_backend NGINX_SERVERS
#node server
acl host_node_sockettest hdr_beg(host) -i mysubdomain.mydomain
use_backend NODE_SOCKETTEST_SERVERS if host_node_sockettest
backend NGINX_SERVERS
server THIS_NGINX_SERVER 127.0.0.1:8081
backend NODE_SOCKETTEST_SERVERS
timeout queue 5000
timeout server 86400000
server THIS_NODE_SERVER localhost:8180 maxconn 200 check
I've trawled the web and mailing list but can not get any of the suggested solutions to work.
(P.S. this could be a serverfault.com thing, but there are other HAProxy questions on S.O., so I have chosen to post here.)
| Upgrade to the latest version of socket.io (0.6.8 -> npm install socket.io@0.6.8, which is patched to work with HAProxy)
and download the latest version of HAProxy.
Here is an example config file:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
defaults
mode http
frontend all 0.0.0.0:80
timeout client 5000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 5000
timeout connect 4000
server server1 localhost:8081 weight 1 maxconn 1024 check
server server2 localhost:8082 weight 1 maxconn 1024 check
server server3 localhost:8083 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout queue 5000
timeout server 5000
timeout connect 5000
server server1 localhost:8081 weight 1 maxconn 1024 check
server server2 localhost:8082 weight 1 maxconn 1024 check
server server3 localhost:8083 weight 1 maxconn 1024 check
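As a side note for readers on newer releases: HAProxy 1.5+ has a dedicated timeout tunnel setting that applies once a connection has been upgraded to a WebSocket, which avoids having to set huge client/server timeouts globally. A minimal sketch (the values are illustrative assumptions, not tuned figures):
defaults
  mode http
  timeout client 30s
  timeout server 30s
  timeout tunnel 1h   # used instead of the client/server timeouts once the connection is upgraded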
| HAProxy | 4,360,221 | 44 |
I'm building an haproxy config file that has multiple front and backends. It's going to be several hundred lines long and I'd rather split it up into separate files for each of the different websites that I want to loadbalance.
Does HAProxy offer the ability to link to partial config files from the main haproxy.cfg file?
| Configuration files can't be linked together from a configuration directive.
However HAProxy can load multiple configuration files from its command line, using the -f switch multiple times:
haproxy -f conf/http-defaults -f conf/http-listeners -f conf/tcp-defaults -f conf/tcp-listeners
If you want to be flexible with the number of config files you can even specify a directory like this: -f /etc/haproxy. The files will then be used in their lexical order, with files loaded later overriding those loaded earlier.
See the mailing list for an example; it provides links to the documentation. This information can be found in the management guide, not the regular docs.
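As an illustration of that lexical ordering, a hypothetical layout (the numeric prefixes are just a naming convention, not something HAProxy requires):
/etc/haproxy/conf.d/00-global.cfg
/etc/haproxy/conf.d/10-defaults.cfg
/etc/haproxy/conf.d/20-frontends.cfg
/etc/haproxy/conf.d/30-backends.cfg
haproxy -f /etc/haproxy/conf.d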
| HAProxy | 25,775,682 | 37 |
I'm trying to start haproxy (version 1.5.8 2014/10/31) with an "empty" config file and I get:
user@server:~$ sudo service haproxy start
[....] Starting haproxy: haproxy[ALERT] 126/120540 (7363) : Starting frontend GLOBAL: cannot bind UNIX socket [/run/haproxy/admin.sock]
although it's enabled:
user@server:~$ cat /etc/default/haproxy
# Set ENABLED to 1 if you want the init script to start haproxy.
ENABLED=1
Configuration file:
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL).
ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
ssl-default-bind-options no-sslv3
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
Does anyone have an idea why it can't start?
| Haproxy needs to write to /run/haproxy/admin.sock but it wont create the directory for you. Create the directory /run/haproxy/ first or set stats socket to a different path.
| HAProxy | 30,101,075 | 37 |
Google Cloud Network load balancer is a pass-through load balancer and not a proxy load balancer. ( https://cloud.google.com/compute/docs/load-balancing/network/ ).
I cannot find any resources in general on a pass-through LB. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would be redirecting the clients directly to the servers. In what scenarios would it be beneficial?
Are there any other type of load balancers except pass-through and proxy?
| It's hard to find resources for pass-through load balancing because everyone came up with a different way of calling it: pass-through, direct server return (DSR), direct routing,...
We'll call it pass-through here.
Let me try to explain the thing:
The IP packets are forwarded unmodified to the VM, there is no address or port translation.
The VM thinks that the load balancer IP is one of its own IPs.
In the specific case of Compute Engine Network Load Balancing https://cloud.google.com/compute/docs/load-balancing/network/: on Linux this is done by adding a route for this IP to the "local" routing table (a sketch follows below), and on Windows by adding a secondary IP on the network interface.
The routing logic has to make sure that packets for a TCP connection or UDP "connection" are always sent to the same VM.
For GCE network LB see here https://cloud.google.com/compute/docs/load-balancing/network/target-pools#sessionaffinity
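For the Linux side mentioned above, a minimal sketch, assuming the load balancer's forwarded IP (VIP) is 203.0.113.10:
ip addr add 203.0.113.10/32 dev lo   # the kernel now treats the VIP as a local address (an entry appears in the "local" routing table)
ip route show table local            # verify that the implicit local route for the VIP exists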
Regarding other load balancer types there can't be a definitive list, here are a few examples:
NAT. An example with iptables is here https://tipstricks.itmatrix.eu/use-iptables-to-load-balance-web-trafic/.
TCP Proxy. In Google Cloud Platform you can use TCP Proxy Load Balancing https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy
HTTP Proxy. In Google Cloud Platform you can use HTTP(s) Load Balancing https://cloud.google.com/compute/docs/load-balancing/http/
DNS, called "DNS forwarder". For example: dnsmasq http://www.thekelleys.org.uk/dnsmasq/doc.html, or bind in "forwarding" mode https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04
Database communication protocols. For example the MySQL Protocol with https://github.com/mysql/mysql-proxy
SIP protocol. Big list of implementations here https://www.voip-info.org/wiki/view/Open+Source+VOIP+Software#SIPProxies
As for the advantages of pass-through over other methods:
Some applications won't work or need to be adapted if the addresses on the IP packets is changing, for example the SIP protocol. See the Wikipedia for more on applications that don't play along well with NAT https://en.wikipedia.org/wiki/Network_address_translation#NAT_and_TCP/UDP.
Here the advantage pass-through is that it does not change the source and destination IPs.
Note that there is a trick for a load balancer working at a higher layer to keep the IPs: the load balancer spoofs the IP of the client when connecting to the backends. As of this writing no load balancing product uses this method in Compute Engine.
If you need more control over the TCP connection from the client, for example to tune the TCP parameters. This is an advantage of pass-through or NAT over TCP (or higher layer) proxy.
| HAProxy | 43,205,917 | 31 |
I am really new to sysadmin stuff, and have only provisioned a VPS with nginx (serving the static files) and gunicorn as the web server.
I have lately been reading about different other stuff. I came to know about other tools:
nginx : high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server
haproxy : high performance load balancer
varnish : caching HTTP reverse proxy
gunicorn : python WSGI http server
uwsgi : another python WSGI server
I have been reading about all of the above 5 tools and I have confused myself as to which one is used for what purpose. Could someone please explain to me in layman's terms what each tool is used for, how they are used together, and which specific concern each of them addresses?
| Let's say you plan to host a few websites on your new VPS. Let's look at the tools you might need for each site.
HTTP Servers
Website 'Alpha' just consists of a some pure HTML, CSS and Javascript. The content is static.
When someone visits website Alpha, their browser will issue an HTTP request. You have configured (via DNS and name server configuration) that request to be directed to the IP address of your VPS. Now you need your VPS to be able to accept that HTTP request, decide what to do with it, and issue a response that the visitor's browser can understand. You need an HTTP server, such as Apache httpd or NGINX, and let's say you do some research and eventually decide on NGINX.
Application Servers
Website 'Beta' is dynamic, written using the Django Web Framework.
WSGI is a protocol that describes the interface between a Python application (the django app) and an application server. So what you need now is a WSGI app server, which will be able to understand web requests, make appropriate 'calls' to the application's various objects, and return the results. You have many options here, including gunicorn and uWSGI. Let's say you do some research and eventually decide on uWSGI.
uWSGI can accept and handle HTTPS requests for static content as well, so if you wanted to you could have website Alpha served entirely by NGINX and website Beta served entirely by uWSGI. And that would be that.
Reverse Proxy Servers
But uWSGI has poor performance in dealing with static content, so you would rather use NGINX for static content like images, even on website Beta. But then something would have to distinguish between requests and send them to the right place. Is that possible?
It turns out NGINX is not just an HTTP server but also a reverse proxy server: it is capable of redirecting incoming requests to another place, like your uWSGI application server, or many other places, collecting the response(s) and sending them back to the original requester. Awesome! So you configure all incoming requests to go to NGINX, which will serve up static content or, when required, redirect it to the app server.
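A minimal sketch of what such an NGINX front configuration might look like for website Beta (the server name, paths and uWSGI port are assumptions, not values from the text):
server {
    listen 80;
    server_name beta.example.com;

    location /static/ {
        root /var/www/beta;        # static assets served directly by NGINX
    }

    location / {
        include uwsgi_params;      # standard uWSGI parameter file shipped with NGINX
        uwsgi_pass 127.0.0.1:8001; # dynamic requests handed to the uWSGI app server
    }
}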
Load Balancing with multiple web servers
You are also hosting Website Gamma, which is a blog that is popular internationally and receives a ton of traffic.
For Gamma you decide to set up multiple web servers. All incoming requests are going to your original VPS with NGINX, and you configure NGINX to redirect the request to one of several other web servers based in round-robin fashion, and return the response to the original requester.
HAProxy is web server that specializes in balancing loads for high traffic sites. In this case, you were able to use NGINX to handle traffic for site Gamma. In other scenarios, one may choose to set up a high-availability cluster: e.g., send all requests to a server like HAProxy, which intelligently redirects traffic to a cluster of nginx servers similar to your original VPS.
Cache Server
Website Gamma exceeded the capacity of your VPS due to the sheer volume of traffic. Let's say you instead hosted website Delta, and the reason your web server is unable to handle Delta is due to a popular feature that is very content-heavy.
A cache server is able to understand what media content is being frequently requested and store this content differently, such that it can be served more quickly. This is achieved by reducing disk IO operations; the popular content can be stored in memory or virtual memory instead. You might decide to combine your existing NGINX stack with a technology like Varnish or Memcached to achieve this type of optimization and serve website Delta more effectively.
| HAProxy | 13,210,636 | 30 |
We have recently shifted from HTTP to HTTPS. As we have already moved to HTTPS, we are thinking of moving to HTTP/2 to get performance benefits.
As explained above, requests between the browser and the LB are secured (HTTPS) while communication between the LB and the app servers still uses HTTP.
What is the possibility of enabling HTTP/2 with the current setup? Can we enable HTTP/2 between the browser and the LB while communication between the LB and the app servers remains on HTTP?
| 2017 update: HAProxy 1.8 supports HTTP/2
From the 1.8 announcement:
HAProxy 1.8 now supports HTTP/2 on the client side (in the frontend sections) and can act as a gateway between HTTP/2 clients and your HTTP/1.1 and HTTP/1.0 applications.
You'll need the h2 directive in your haproxy.conf. From CertSimple's HAProxy HTTP/2 and dynamic load balancing guide:
frontend myapp
bind :443 ssl crt /path/to/cert.crt alpn h2,http/1.1
mode http
Older versions of HAProxy
Older versions of HAProxy like 1.6 and 1.7 only support pass-through HTTP/2 - i.e., directing traffic onto a separate app server that supports HTTP/2. This is significantly more complicated - see other answers on how to do this. To terminate HTTP/2 and read the traffic on HAProxy, you'll need HAProxy 1.8.
| HAProxy | 40,656,406 | 30 |
The server is receiving thousands of OPTIONS requests due to CORS (Cross-Origin Resource Sharing). Right now, every options request is being sent to one of the servers, which is a bit wasteful, knowing that HAProxy can add the CORS headers itself without the help of a web server.
frontend https-in
...
use_backend cors_headers if METH_OPTIONS
...
backend cors_headers
rspadd Access-Control-Allow-Origin:\ https://www.example.com
rspadd Access-Control-Max-Age:\ 31536000
However for this to work I need to specify at least one live server in cors_headers backend and that server will still receive the requests.
How can I handle the request in the backend without specifying any servers? How can I stop the propagation of the request to servers, while sending the response to the browser and keeping the connection alive?
| Good news, HAProxy 2.2 just introduced the "Native Response Generator" feature. It works with the http-request return directive, and can be used for serving static files or text strings, including dynamic parameters.
The goal is to avoid the usual hacks with errorfile.
Taking advantage of another directive introduced in version 2.2 (http-after-response), the OP goal could be achieved with the following:
backend cors_headers
# http-response won't work here as the response is generated by HAP
http-after-response set-header Access-Control-Allow-Origin \
"%[req.hdr(Origin)]"
http-after-response set-header Access-Control-Max-Age "31536000"
http-request return status 200 content-type "text/plain" string "" if TRUE
The set-header and http-request return can be made conditional with an if clause based on the request headers or origin, depending on your needs (see the doc for examples).
With this technique the headers and response can use variables:
http-request return status 200 content-type "text/plain" \
lf-string "Hello, you are: %[src]"
| HAProxy | 26,556,838 | 26 |
I installed HAProxy for load balancing before. How can I check which HAProxy version is installed on my Ubuntu machine?
| haproxy --version
If you want to see the package info on Ubuntu, you can use 'aptitude show haproxy' to get version and other details about what's installed.
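The short flags print the same information, and -vv additionally shows the build options (useful for checking whether, say, OpenSSL support was compiled in):
haproxy -v
haproxy -vv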
| HAProxy | 10,908,929 | 25 |
I need to redirect www.foo.com and foo.com to www.bar.com in HAProxy. This is my configuration:
frontend http-in
bind *:80
acl bar.com hdr(host) -i www.bar.com
...
use_backend bar.com_cluster if bar.com
...
redirect prefix http://foo.com code 301 if { hdr(host) -i www.bar.com }
redirect prefix http://www.foo.com code 301 if { hdr(host) -i www.bar.com }
...
backend bar.com_cluster
balance roundrobin
option httpclose
option forwardfor
server bar 10.0.0.1:80 check
I have tried with redirect prefix but it doesn't work. Any idea?
| Change order of the hostname:
redirect prefix http://www.bar.com code 301 if { hdr(host) -i foo.com }
redirect prefix http://www.bar.com code 301 if { hdr(host) -i www.foo.com }
instead of
redirect prefix http://foo.com code 301 if { hdr(host) -i www.bar.com }
redirect prefix http://www.foo.com code 301 if { hdr(host) -i www.bar.com }
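On newer HAProxy versions the same redirect is usually written with http-request redirect, and both hostnames can be matched by a single anonymous ACL. A sketch (not part of the original answer):
http-request redirect prefix http://www.bar.com code 301 if { hdr(host) -i foo.com www.foo.com }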
| HAProxy | 28,530,087 | 25 |
I built HAProxy on CentOS 7 and enabled the statistics page on port 8080. It seems to work properly.
When I set the port to 8888, HAProxy does not work and gives me the feedback below.
After that, I tried many ways to solve this problem, but the problem is still there.
Can anyone help me deal with this issue?
Here is the system information
haprxoy.cfg
/etc/haproxy/haproxy.cfg
Port 8080 is fine, 8888 is not working.
# [HAPROXY DASHBOARD]
listen stats :8888
mode http
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth haproxy:haproxy
stats refresh 10s
Service Status
service haproxy status
systemd[1]: Started HAProxy Load Balancer.
haproxy-systemd-wrapper[2358]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cf...id -Ds
haproxy-systemd-wrapper[2358]: [ALERT] 012/095413 (2359) : Starting proxy stats: cannot bind socket [0.0.0.0:8888]
haproxy-systemd-wrapper[2358]: haproxy-systemd-wrapper: exit, haproxy RC=256
/etc/sysctl.conf
Someone said it could be a virtual IP problem, so I followed the instructions, added the setting below, and then ran sysctl -p
net.ipv4.ip_nonlocal_bind=1
Network Confgiuration
ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
link/ether 00:15:5d:0a:09:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.4.117/24 brd 192.168.4.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::215:5dff:fe0a:905/64 scope link
valid_lft forever preferred_lft forever
Listening Ports
ss --listening
[root@localhost ~]# ss --listening
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
nl UNCONN 0 0 rtnl:NetworkManager/792 *
nl UNCONN 0 0 rtnl:kernel *
nl UNCONN 0 0 rtnl:avahi-daemon/671 *
nl UNCONN 0 0 rtnl:4195096 *
nl UNCONN 4352 0 tcpdiag:ss/3772 *
nl UNCONN 768 0 tcpdiag:kernel *
nl UNCONN 0 0 6:kernel *
nl UNCONN 0 0 7:kernel *
nl UNCONN 0 0 7:systemd/1 *
nl UNCONN 0 0 7:dbus-daemon/680 *
nl UNCONN 0 0 9:auditd/640 *
nl UNCONN 0 0 9:kernel *
nl UNCONN 0 0 9:systemd/1 *
nl UNCONN 0 0 10:kernel *
nl UNCONN 0 0 11:kernel *
nl UNCONN 0 0 15:iprdump/723 *
nl UNCONN 0 0 15:systemd/1 *
nl UNCONN 0 0 15:-4124 *
nl UNCONN 0 0 15:systemd-logind/679 *
nl UNCONN 0 0 15:NetworkManager/792 *
nl UNCONN 0 0 15:iprinit/713 *
nl UNCONN 0 0 15:-4107 *
nl UNCONN 0 0 15:-4125 *
nl UNCONN 0 0 15:-4119 *
nl UNCONN 0 0 15:iprupdate/710 *
nl UNCONN 0 0 15:-4118 *
nl UNCONN 0 0 15:kernel *
nl UNCONN 0 0 15:-4117 *
nl UNCONN 0 0 15:tuned/676 *
nl UNCONN 0 0 16:kernel *
nl UNCONN 0 0 18:kernel *
u_str LISTEN 0 128 /run/lvm/lvmetad.socket 11542 * 0
u_str LISTEN 0 128 /run/systemd/journal/stdout 6697 * 0
u_dgr UNCONN 0 0 /run/systemd/journal/socket 6700 * 0
u_dgr UNCONN 0 0 /dev/log 6702 * 0
u_dgr UNCONN 0 0 /run/systemd/shutdownd 11321 * 0
u_dgr LISTEN 0 128 /run/udev/control 11338 * 0
u_str LISTEN 0 100 public/flush 18726 * 0
u_str LISTEN 0 100 public/showq 18741 * 0
u_str LISTEN 0 30 /var/run/NetworkManager/private-dhcp 17003 * 0
u_dgr UNCONN 0 0 @/org/freedesktop/systemd1/notify 11259 * 0
u_str LISTEN 0 100 private/tlsmgr 18708 * 0
u_str LISTEN 0 30 /var/run/NetworkManager/private 16518 * 0
u_str LISTEN 0 128 /var/run/avahi-daemon/socket 13986 * 0
u_str LISTEN 0 128 /var/run/dbus/system_bus_socket 13998 * 0
u_str LISTEN 0 100 private/rewrite 18711 * 0
u_str LISTEN 0 100 private/bounce 18714 * 0
u_str LISTEN 0 100 private/defer 18717 * 0
u_str LISTEN 0 100 private/trace 18720 * 0
u_str LISTEN 0 100 private/verify 18723 * 0
u_str LISTEN 0 100 private/proxymap 18729 * 0
u_str LISTEN 0 100 private/proxywrite 18732 * 0
u_str LISTEN 0 100 private/smtp 18735 * 0
u_str LISTEN 0 100 private/relay 18738 * 0
u_str LISTEN 0 100 private/error 18744 * 0
u_str LISTEN 0 100 private/retry 18747 * 0
u_str LISTEN 0 100 private/discard 18750 * 0
u_str LISTEN 0 100 private/local 18753 * 0
u_str LISTEN 0 100 private/virtual 18756 * 0
u_str LISTEN 0 100 private/lmtp 18759 * 0
u_str LISTEN 0 100 private/anvil 18762 * 0
u_str LISTEN 0 100 private/scache 18765 * 0
u_str LISTEN 0 100 public/pickup 18697 * 0
u_str LISTEN 0 100 public/cleanup 18701 * 0
u_str LISTEN 0 100 public/qmgr 18704 * 0
u_str LISTEN 0 30 /run/systemd/private 11261 * 0
u_dgr UNCONN 0 0 * 14733 * 6700
u_dgr UNCONN 0 0 * 15011 * 6702
u_dgr UNCONN 0 0 * 12659 * 12658
u_dgr UNCONN 0 0 * 18818 * 6702
u_dgr UNCONN 0 0 * 15244 * 6702
u_dgr UNCONN 0 0 * 16991 * 6702
u_dgr UNCONN 0 0 * 12644 * 6700
u_dgr UNCONN 0 0 * 12658 * 12659
u_dgr UNCONN 0 0 * 19513 * 6700
u_dgr UNCONN 0 0 * 29994 * 6702
u_dgr UNCONN 0 0 * 13899 * 6702
u_dgr UNCONN 0 0 * 16528 * 6702
u_dgr UNCONN 0 0 * 30457 * 6702
u_dgr UNCONN 0 0 * 18632 * 6702
u_dgr UNCONN 0 0 * 16504 * 6702
raw UNCONN 0 0 :::ipv6-icmp :::*
tcp UNCONN 0 0 *:ipproto-5353 *:*
tcp UNCONN 0 0 *:ipproto-50900 *:*
tcp LISTEN 0 100 127.0.0.1:smtp *:*
tcp LISTEN 0 128 *:ssh *:*
tcp LISTEN 0 100 ::1:smtp :::*
tcp LISTEN 0 128 :::ssh :::*
| Thanks to you guys first.
I solved this issue with the following command, which adjusts the SELinux policy so that HAProxy is allowed to use non-standard ports such as 8888 (the -P flag makes the boolean change persistent across reboots):
setsebool -P haproxy_connect_any=1
It works for me!
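If you want to confirm SELinux is the culprit before (or after) flipping the boolean, these standard commands help, assuming the audit and policycoreutils tooling is installed:
getsebool haproxy_connect_any              # shows whether the boolean is currently on
ausearch -m avc -ts recent | grep haproxy  # shows the AVC denial logged for the failed bind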
| HAProxy | 34,793,885 | 25 |
It's been on the cards for a while, but now that Amazon have released Elastic Load balancing (ELB), what are your thoughts on deploying this solution for a high-traffic web application?
Should we replace HAProxy or consider ELB as a complementary service in front of HAProxy?
| I've been running an ELB instead of HAProxy for about a month now on a site that gets about 100,000 visits per day, and I've been pretty pleased with the results.
A gotcha though (UPDATE, this issue has been fixed by Amazon AWS, see comments below):
You can't load balance the root of a domain as you have to create a CNAME alias to your load balancer. One solution is to redirect all traffic from http://mysite.com to http://www.mysite.com.
Apart from that I really can't speak highly enough of the AWS ELB offerings. I'm also using the Cloudwatch monitoring and autoscaling. Oh and don't forget it's cheaper than running a small EC2 instance ($0.025 per hour instead of $0.10).
| HAProxy | 877,468 | 23 |
I'm currently using Haproxy to balance several express.js nodes. I know that it's possible to redirect using express.js, but I was hoping to do so with Haproxy.
I was wondering how I can do a permanent redirect from www.mysite.com to mysite.com?
| redirect prefix http://example.com code 301 if { hdr(host) -i www.example.com }
Please see the documentation of the redirect prefix rule for more information.
If you are using a newer version of HAProxy, i.e. at least 1.6, you can use a more generic syntax which allows to redirect any host, not just explicitly named
http-request redirect prefix http://%[hdr(host),regsub(^www\.,,i)] code 301 if { hdr_beg(host) -i www. }
Here, we are using the regsub filter to dynamically generate the correct hostname without the www. prefix.
In case you want to perform a redirect the other way around, i.e. to add a www if there is none already, the rule becomes simpler:
http-request redirect prefix http://www.%[hdr(host)] code 301 unless { hdr_beg(host) -i www. }
| HAProxy | 10,401,621 | 22 |
I'm trying to match various conditions inside one backend, like this:
acl rule1 hdr_dom(host) -i ext1
acl rule2 url_beg /img
default_backend back-server-http if rule1 and rule2
but, how can I put this "and" between the two rules?
| Yes, this is the solution:
acl rule1 hdr_dom(host) -i www.uno.es hdr_dom(host) -i www.one.com
use_backend uno.com if rule1
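For completeness, if you really need to AND two separate ACLs (as in the original question), listing them space-separated after if does exactly that. A sketch reusing the question's names (path_beg is assumed here for the path match):
acl rule1 hdr_dom(host) -i ext1
acl rule2 path_beg /img
use_backend back-server-http if rule1 rule2   # ACLs listed on one condition are ANDed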
| HAProxy | 11,140,021 | 21 |
After too much googling, I finally made my HAProxy SSL setup work. But now I have a problem: because the root and intermediate certificates are not installed, my SSL doesn't show the green bar.
My haproxy config
global
maxconn 4096
nbproc 1
#debug
daemon
log 127.0.0.1 local0
defaults
mode http
option httplog
log global
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
frontend unsecured
bind 192.168.0.1:80
timeout client 86400000
reqadd X-Forwarded-Proto:\ http
default_backend www_backend
frontend secured
mode http
bind 192.168.0.1:443 ssl crt /etc/haproxy/cert.pem
reqadd X-Forwarded-Proto:\ https
default_backend www_backend
backend www_backend
mode http
balance roundrobin
#cookie SERVERID insert indirect nocache
#option forwardfor
server server1 192.168.0.2:80 weight 1 maxconn 1024 check
server server2 192.168.0.2:80 weight 1 maxconn 1024 check
192.168.0.1 is my load balancer IP. /etc/haproxy/cert.pem contains the private key and the domain certificate, e.g. www.domain.com.
There is another question with an SSL configuration which includes bundle.crt. When I contacted my SSL support, they told me I need to install the root and intermediate certificates.
According to the Comodo documentation, creating the bundle is as simple as merging their crt files, which I did.
But when I try to change my haproxy config to
bind 192.168.0.1:443 ssl crt /etc/haproxy/cert.pem ca-file /path/to/bundle.crt
I'm getting an error that I can't use that config parameter on bind.
P.S. I'm using version 1.5-dev12. With the latest dev17 version I had problems even starting haproxy, as described in this post.
| It looks like you'll need to recompile like so:
make clean
make \
TARGET="linux26" \
USE_STATIC_PCRE=1 \
USE_OPENSSL=1
make install PREFIX="/opt/haproxy"
After that, bind should recognise your crt option.
In my case, I used:
bind 0.0.0.0:443 ssl crt /envs/production/ssl/haproxy.pem
I concatenated all ssl files into 1 big file in the order certificate chain, private key. e.g.:
-----BEGIN MY CERTIFICATE-----
-----END MY CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN INTERMEDIATE CERTIFICATE-----
-----END INTERMEDIATE CERTIFICATE-----
-----BEGIN ROOT CERTIFICATE-----
-----END ROOT CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
Restart, and test with openssl s_client -connect 127.0.0.1:443 -servername www.transloadit.com |head.
It should return the correct certificate information.
Edit: I just found this tutorial via HackerNews: https://serversforhackers.com/c/using-ssl-certificates-with-haproxy. Thought it would be useful to add as it goes into more detail.
| HAProxy | 15,048,346 | 18 |
I am trying to use a certificate signed for another server. I have both the private key and the certificate.
My PEM file order is :
subject=/C=***/L=*****/O=**********/CN=*********
issuer=/C=***/O=*****Inc/CN=********Secure Server CA
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
subject=/C=US/O=******** Inc/CN=********* SHA2 Secure Server CA
issuer=/C=US/O=********* Inc/OU=*********/CN=******** Global Root CA
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
subject=/C=US/O=********* Inc/OU=***********/CN=*********** Global Root CA
issuer=/C=US/O=********* Inc/OU=************/CN=******** Global Root CA
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
When I tried to deploy it to my haproxy, I got this error.
[ALERT] 188/141626 (2322) : parsing [/etc/haproxy/haproxy.cfg:32] : 'bind *:443' : inconsistencies between private key and certificate loaded from PEM file ................
[ALERT] 188/141626 (2322) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 188/141626 (2322) : Proxy 'www-https': no SSL certificate specified for bind '*:443' at [/etc/haproxy/haproxy.cfg:32] (use 'crt').
[ALERT] 188/141626 (2322) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.
And my haproxy version is:
HA-Proxy version 1.5.2 2014/07/12
Copyright 2000-2014 Willy Tarreau <[email protected]>
I can start my haproxy with a self-signed cert. Why does this inconsistency occur? I am sure that the private key belongs to the certificate.
I've been trying for hours now but I cannot find the reason.
Please help! Thank you!
| The order of the certificates in your file is wrong. It seems you are putting the intermediate certificate (i.e. Secure Server CA) first which is thus expected to be the server certificate. The order of the certificates needs to be:
server certificate
server private key (without any password)
intermediate certificate 1
intermediate certificate 2
It's actually not that important where you put the private key. However, the certificates strictly need to be ordered from leaf to root, i.e. first the server certificate, then the intermediate, then its parent. Basically, you put the server certificate first, then its signer, then its signer, ...
For more information, please refer to the documentation.
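A quick way to check that the key and certificate actually belong together is to compare their moduli (filenames are assumptions):
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
The two digests must be identical.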
| HAProxy | 31,291,243 | 18 |
I have an HAProxy HTTP Frontend in my HAProxy config like so:
frontend myaddress.net :10098
bind :80,:8080
mode http
log global
option http-server-close
timeout client 14400000
timeout connect 60000
timeout tunnel 14400000
timeout http-request 14400000
capture request header User-Agent len 64
capture request header Accept-language len 64
capture request header x-forward len 15
capture request header host len 64
capture request header X-Orig-Base len 64
capture request header X-Orig-Host len 64
capture request header X-Orig-Proto len 64
reqadd X-Original-host:\ myaddress.net
acl is-ssl hdr(X-Orig-Proto) https
acl is-http hdr(X-Orig-Proto) http
redirect code 301 prefix https://myaddress.net if is-http
default_backend BACKEND_myaddress.net:catchall
It points to a backend defined like so:
backend BACKEND_myaddress.net:catchall
timeout server 4h
balance leastconn
server myserver myserver:8080 check inter 5s rise 3 fall 1
I've got it working to listen on port 80, then forward to 8080 on the backend server, but now I'm trying to make it also listen on port 8080 on the frontend (don't ask me why, it's a lame requirement).
As you can see, I've got a line that says bind :80,:8080. I thought that would make the frontend also listen on port 8080, but it's not appearing to listen on port 8080.
Is there something I'm missing in this configuration? How can I make a frontend listen on port 8080 and 80, which then forwards to the backend server on port 8080?
| Try this in your frontend section:
bind :80
bind :8080
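For context, the resulting frontend would look roughly like this (names taken from the question, everything else unchanged):
frontend myaddress.net
    bind :80
    bind :8080
    mode http
    default_backend BACKEND_myaddress.net:catchall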
| HAProxy | 20,082,761 | 17 |
I am trying to set up HAProxy to load balance requests across a few backends identified by the URI path. For example:
https://www.example.com/v1/catalog/foo/bar
Should lead to the "catalog-v1" backends.
The thing is, each app responds on a different path, so I must not only identify the app but also rewrite the URL path. E.g.
https://www.example.com/v1/catalog/product
https://www.example.com/v2-2/checkout/cart/123
https://www.example.com/v3.1.2/checkout/cart
TO
https://www.example.com/catalog-v1/product
https://www.example.com/checkout-v2-2/cart/123
https://www.example.com/checkout-v3.1.2/cart
I know I shouldn't use HAProxy for rewriting purposes, but for now this is unavoidable.
Tried the following regex which worked on regex101:
([a-z.]*)\/([a-z0-9\-\.]*)\/([a-z\-]*)\/(.*)
Substitution:
\1/\3-\2/\4
And finally here's the haproxy.config:
global
daemon
user root
group root
maxconn 256000
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
stats socket /run/haproxy/stats.sock mode 777 level admin
defaults
log global
option dontlognull
maxconn 4000
retries 3
timeout connect 5s
timeout client 1m
timeout server 1m
option redispatch
balance roundrobin
listen stats :8088
mode http
stats enable
stats uri /haproxy
stats refresh 5s
backend catalog-v1
mode http
option httpchk GET /catalog-v1/ping
http-check expect status 200
reqrep ([a-z.]*)\/([a-z0-9\-\.]*)\/([a-z\-]*)\/(.*) \1/\3-\2/\4
server 127.0.0.1:8280_catalog-v1-node01 127.0.0.1:8280 check inter 2s rise 3 fall 2
backend checkout-v1
mode http
option httpchk GET /checkout-v1/ping
http-check expect status 200
reqrep ([a-z.]*)\/([a-z0-9\-\.]*)\/([a-z\-]*)\/(.*) \1/\3-\2/\4
server 127.0.0.1:8180_checkout-v1-node01 127.0.0.1:8180 check inter 2s rise 3 fall 2
frontend shared-frontend
mode http
bind localhost:80
acl is-catalog-v1-path path_dir /v1/catalog
acl is-checkout-v1-path path_dir /v1/checkout
use_backend catalog-v1 if is-catalog-v1-path
use_backend checkout-v1 if is-checkout-v1-path
Am I missing something?
I've been struggling with this for quite some time without success. The backends show "UP" on the HAProxy stats page, but every time I call the non-rewritten URL I get a 400 Bad Request error.
Thanks in advance for your help!
| If I understand you correctly, replace your reqrep with the example below:
reqrep ^([^\ ]*)\ /([-.0-9A-Za-z]*)/([a-zA-Z]*)/(.*) \1\ /\3-\2/\4
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#reqrep
reqrep search string [{if | unless} cond]
You have no spaces.
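For readers on modern HAProxy versions (2.1+), where reqrep was removed: the usual replacement is http-request replace-path. A sketch of the same rewrite, adapted to operate on the path only, so treat it as an assumption to verify against your version's documentation:
http-request replace-path ^/([^/]+)/([^/]+)/(.*) /\2-\1/\3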
| HAProxy | 24,784,517 | 17 |
I have a simple condition in my HAproxy config (I tried this for frontend and backend):
acl no_index_url path_end .pdf .doc .xls .docx .xlsx
rspadd X-Robots-Tag:\ noindex if no_index_url
It should add the no-robots header to content that should not be indexed. However it gives me this WARNING when parsing the config:
acl 'no_index_url' will never match because it only involves keywords
that are incompatible with 'backend http-response header rule'
and
acl 'no_index_url' will never match because it only involves keywords
that are incompatible with 'frontend http-response header rule'
According to documentation, rspadd can be used in both frontend and backend. The path_end is used in examples within frontend. Why am I getting this error and what does it mean?
| Starting in HaProxy 1.6 you won't be able to just ignore the error message. To get this working use the temporary variable feature:
frontend main
http-request set-var(txn.path) path
backend local
http-response set-header X-Robots-Tag noindex if { var(txn.path) -m end .pdf .doc }
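The same line extends naturally to the full extension list from the question (a sketch):
http-response set-header X-Robots-Tag noindex if { var(txn.path) -m end .pdf .doc .xls .docx .xlsx }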
| HAProxy | 29,731,265 | 16 |
I'm looking for a cost-effective tool for managing a web app on EC2. RightScale seems to be the big dog and charges for it. Scalr looks like a more cost-effective solution, but it's hard to find any real customer experiences.
The key aspects I'm looking for are a load balancer (HTTP and HTTPS) and a way to automatically bring additional web server capacity online as load increases, as well as terminate the instances when load falls off.
From what I can tell, lots of people are rolling their own stuff here. We're trying to release an app and don't really want to have to fight too many heavy sysadmin battles. Given the importance of performance etc., I'd be grateful to hear advice and experiences from the field on this.
| I am a Scalr user, a Scalr.net subscriber, and have become a Scalr enthusiast. I cannot possibly afford Rightscale.
Scalr can do what you ask.
Scalr has three images (each with 32/64 bit versions), plus a base (generic) image:
1) A load balancer image, running nginx. A highly available setup requires two of these. Scalr will manage your nameservice, and round robin between them. If one goes down, Scalr will remove it from DNS and bring up another instance. It is possible to run other load balancers, but nginx is the default.
2) Several application server images are available, running Apache/Tomcat/Rails. You setup your application here, be it PHP/Perl/Python/Java/Ruby/whatever. nginx routes requests between these instances grouped by unique user (based on IP + browser). Scalr monitors these for upness too, and replaces broken instances.
3) A MySQL database image, with automatic master/slave replication. Just deploy your schema, and Scalr handles replication and replaces defunct servers. It will also backup your data periodically. Scalr's DNS provides master and slave hostnames, so you can have your app read from the slaves and write to the master.
All of these instance types will auto-scale based on load. You start with the base image closest to what you're doing, and then you customize them for your application. For instance, we deploy our Perl/Catalyst app on the apache server instances but we serve static content from the nginx front-end servers. We had to modify our application slightly to use read/write database handles.
All in all, it took about three weeks of working through bugs in Scalr to get our application to a reliable state where I am confident that it IS highly available with Scalr. Their support was phenomenal, so the bugs didn't bother me too much, and the system is really coming along. It is approaching serious reliability.
As a side note, the best feature of Scalr is the 'Synchronize to All' feature, which auto-bundles your AMI and re-deploys it on a new instance - all without a service interruption. This saves you the time of going through the lengthy EC2 image/AMI creation process, which can otherwise make very simple admin tasks take 20 minutes. You can use this whether you are scaling your server farm or not - it would be very handy even on a single instance.
I pay Scalr.net $50 a month to host the service for me because I think it saves me time and money. The bottom line so far is this: at my last gig, we had a systems guy working on our highly available Linux DB + app server setup for a year... and he failed to achieve the kind of reliability that I achieved in three weeks. The savings by using Scalr as compared to rolling my own are extreme.
All that being said, if I could afford Rightscale, I would be using Rightscale. But the up-front fee and $500 a month make that impossible. There has been talk of waiving the up-front fee in exchange for waiving the consulting that it includes, but the monthly service fee isn't going anywhere.
I should mention that at the moment, scalr.net's website is down, so if I wanted to manage any of my server farms (don't have them up atm), I simply couldn't right now. It is not clear whether scaling is working for scalr.net subscribers right now, or not. Which is to say... this is perhaps not a mature solution yet. This doesn't happen often; before tonight the only downtime I've experienced was in periods of a few minutes at a time. But yeah... it's down RIGHT NOW, so I must mention it :)
I would suggest a thorough reading of the support group at http://groups.google.com/group/scalr-discuss before making your decision. If you pick Scalr, be prepared to test your setup and work through any issues you have on the google group.
| HAProxy | 366,612 | 15 |
I have configured HAProxy (1.5.4, but I tried also 1.5.14) to balance in TCP mode two server exposing AMQP protocol (WSO2 Message Broker) on 5672 port.
The clients create and use permanent connections to the AMQP servers, via HAProxy.
I've changed the client and server TCP keepalive timeout, setting net.ipv4.tcp_keepalive_time=120 (CentOS 7).
In HAProxy I've set the client/server timeouts to 200 seconds (greater than the 120-second keepalive interval) and used the option clitcpka.
Then I started wireshark and sniffed all the TCP traffic: after the last request from the clients, the TCP keepalive packets are sent regularly every 120 seconds, but 200 seconds after the last request from the clients the connections are closed (thus ignoring the keepalive packets).
Below the configuration:
haproxy.conf
global
log 127.0.0.1 local3
maxconn 4096
user haproxy
group haproxy
daemon
debug
listen messagebroker_balancer 172.19.19.91:5672
mode tcp
log global
retries 3
timeout connect 5000ms
option redispatch
timeout client 200000ms
timeout server 200000ms
option tcplog
option clitcpka
balance leastconn
server s1 172.19.19.79:5672 check inter 5s rise 2 fall 3
server s2 172.19.19.80:5672 check inter 5s rise 2 fall 3
| TCP keep alive is at the transport layer and is only used to put some traffic on the connection, so that intermediate systems like packet filters don't lose any state and so that the end systems can notice if the connection to the other side broke (maybe because something crashed or a network cable broke).
TCP keep alive has nothing to do with the application level idle timeout which you have set explicitly to 200s:
timeout client 200000ms
timeout server 200000ms
These timeouts get triggered if the connection is idle, that is, if no data gets transferred. TCP keep-alive does not transport any data; the payload of these packets is empty.
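In other words, if idle AMQP connections are supposed to survive longer than 200 seconds, raise the application-level idle timeouts rather than tweaking TCP keep-alive. A minimal sketch (the one-hour value is an arbitrary assumption):
timeout client 1h
timeout server 1h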
| HAProxy | 32,634,980 | 15 |
Does HAProxy support domain-name-to-backend mapping for path-based routing?
Currently it does support maps for vhost:
frontend xyz
<other_lines>
use_backend backend1 if { hdr(Host) -i myapp.domain1.com }
use_backend backend2 if { hdr(Host) -i myapp.domain2.com }
Can be rewritten using maps as:
frontend xyz
<other_lines>
use_backend %[req.hdr(host),lower,map_dom(/path/to/map,default)]
With the contents of map file as:
#domainname backendname
myapp.domain1.com backend1
myapp.domain2.com backend2
But if the routing is based on paths as shown in the example below:
frontend xyz
acl host_server_myapp hdr(host) -i myapp.domain.com
acl path_path1 path_beg /path1
acl path_path2 path_beg /path2
use_backend backend1 if host_server_myapp path_path1
use_backend backend2 if host_server_myapp path_path2
Is it possible to have a mapping for this use case? Using base instead of hdr(host) might give the entire path, but it will not have the flexibility of domains since base is a string comparison. Is there another way to convert this to HAProxy maps?
| Start with the Layer 7 base fetch --
This returns the concatenation of the first Host header and the path part of
the request, which starts at the first slash and ends before the question
mark.
...then use map_beg() to match the beginning of the string to the map.
use_backend %[base,map_beg(/etc/haproxy/testmap.map,default)]
If the map file /etc/haproxy/testmap.map has a line matching the prefix, the backend in the map file is used. Otherwise, the backend called default will be used (that's the 2nd argument to map_beg() -- the value to be returned if the map doesn't match).
If the resulting backend doesn't actually exist, HAProxy continues processing the request as if this statement weren't configured at all.
So your map file would look something like this:
example.com/foo this-backend # note, also matches /foo/ba
example.com/foo/bar that-backend # note, matches /foo/bar
example.org/foo some-other-backend
To treat a subdomain as equivalent to the parent domain (e.g., treating example.com and www.example.com to be handled equivalently, without map duplication, as discussed in comments) the regsub() converter could be used to modify the value passed to the map:
use_backend %[base,regsub(^www\.,,i),map_beg(/etc/haproxy/testmap.map,default)]
| HAProxy | 37,858,336 | 15 |
As per the (verbose) topic, are there any advantages to using Keepalived & HAProxy together as an HA webserver load balancer vs a pure Keepalived solution?
| Keepalived works at layer 4, so it doesn't have any layer 7 knowledge. By using HAProxy and Keepalived together you get the benefit of the options HAProxy provides at layer 7, like stickiness, sampling and converting information, ACLs and conditions, content switching, stick-tables, formatted strings, HTTP rewriting and redirection, server protection, etc.
If you only need a load balancer without any manipulation or high-level (layer 7) decisions, you can use Keepalived alone, and it will be faster because it works at layer 4.
| HAProxy | 29,292,393 | 14 |
Using HAProxy, I'm trying to (TCP) load balance Rserve (a service listening on a TCP socket for calling R scripts) running on port 6311 on 2 nodes.
Below is my config file. When I run HAProxy, it starts without any issues. But when I connect to the balanced nodes, I get the error below. Is anything wrong with the config?
Handshake failed: expected 32 bytes header, got -1
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
log global
option httplog
option dontlognull
option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
listen haproxy_rserve
bind *:81
mode tcp
option tcplog
timeout client 10800s
timeout server 10800s
balance leastconn
server rserve1 rserveHostName1:6311
server rserve2 rserveHostName2:6311
listen stats proxyHostName:8080
mode http
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats hide-version
stats auth admin:password
I tried the frontend/backend way of balancing shown below as well. Same result.
frontend haproxy_rserve
bind *:81
mode tcp
option tcplog
timeout client 10800s
default_backend rserve
backend rserve
mode tcp
option tcplog
balance leastconn
timeout server 10800s
server rserve1 rserveHostName1:6311
server rserve2 rserveHostName2:6311
| After struggling for a week to find a solution to load balance R, the solution below (a fully free/open-source software stack) worked.
If more people refer to this, I'll post a detailed blog covering installation through configuration.
I was able to load balance R script requests coming to Rserve via the HAProxy TCP load balancer with the config below. It is pretty similar to the config in the question, but with the frontend and backend separated.
#Load balancer stats page access at hostname:8080/haproxy_stats
listen stats <load_balancer_hostname>:8080
mode http
log global
stats enable
stats realm Haproxy\ Statistics
stats uri /haproxy_stats
stats hide-version
stats auth admin:admin@rserve
frontend rserve_frontend
bind *:81
mode tcp
option tcplog
timeout client 1m
default_backend rserve_backend
backend rserve_backend
mode tcp
option tcplog
option log-health-checks
option redispatch
log global
balance roundrobin
timeout connect 10s
timeout server 1m
server rserve1 <rserve hostname1>:6311 check
server rserve2 <rserve hostname2>:6311 check
server rserve3 <rserve hostname3>:6311 check
If SELinux is enabled, the command below will enable remote connections for HAProxy:
/usr/sbin/setsebool -P haproxy_connect_any 1
The firewall ports might need opening too:
firewall-cmd --permanent --zone=public --add-port=81/tcp
firewall-cmd --permanent --zone=public --add-port=8080/tcp
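Since the rules above are added with --permanent, they only become active after reloading firewalld (assuming firewalld is the firewall in use here):
firewall-cmd --reload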
Also, enable remote connections in Rserve with remote enable in the Rserve config file.
| HAProxy | 39,016,291 | 14 |
I have an application deployed on OpenShift Container Platform v3.6. It consists of multiple services interconnected to each other.
The frontend service calls a time consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. Frontend runs over nginx, but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from it. I think it comes from the Service Proxy component of OpenShift Platform, but I can't find out where and how configure a kind of service proxy timeout. I know the existence of HAProxy timeout for external routes, but my services leave in the same cluster application and communicate each other via OpenShift Container Platform DNS.
Could be a Service Proxy timeout issue? How can it be configured?
Thanks!
| Your route timeout is the culprit. The haproxy ingress router is terminating the request. You can configure the timeout by following the docs below:
https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html
For example:
# Set the timeout on 'longrunningroute' to five minutes.
oc annotate route longrunningroute --overwrite haproxy.router.openshift.io/timeout=5m
| HAProxy | 47,812,807 | 14 |
I have set up HAProxy in front of my backend server application to enable HTTPS. I have read that I need to set X-Forwarded-Proto https.
In the haproxy.cfg file I have tried to do that in the frontend with:
frontend haproxy
bind :8443 ssl crt frontend/server.pem
reqadd X-Forwarded-Proto:\ https
default_backend my-backend
and that seems to make it work - e.g. I can both login to my backend server and navigate to the different pages. If I DON'T have the proto option I can only login but not navigate to any other pages.
Now, if I add the option in the backend instead (removing it from the frontend) with:
backend my-backend
http-request add-header X-Forwarded-Proto https if { ssl_fc }
server my-backend 127.0.0.1:9000
it also works, I can navigate the different pages in my backend server application.
So which is the correct way to do it? In the frontend or in the backend or does it not matter?
| It doesn't matter. When you have multiple backends, it usually makes sense to do this on the frontend.
You could also use http-request set-header X-Forwarded-Proto in the front-end, rather than using reqadd.
The req* directives are much older functionality than http-request, so the latter is generally preferred. There's also an important reason to prefer it here, and to use set-header instead of add-header: you don't want the client to be able to forge headers that only the proxy should be injecting. On non-HTTPS front-ends you should also use http-request set-header X-Forwarded-Proto http, so there is no possibility of an incorrect upstream header reaching the backend. The add-header option, just like reqadd, does not remove any existing headers of the same name, while set-header does.
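Putting that together, a minimal sketch of both front-ends might look like this (the plain-HTTP bind on port 8080 is an assumption for illustration; adjust ports and certificate paths to your setup):
frontend haproxy_https
    bind :8443 ssl crt frontend/server.pem
    http-request set-header X-Forwarded-Proto https
    default_backend my-backend

frontend haproxy_http
    bind :8080
    http-request set-header X-Forwarded-Proto http
    default_backend my-backend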
| HAProxy | 51,928,504 | 14 |
I'm very sure this problem has been solved, but I can't find any information anywhere about it...
How do sysadmins programmatically add a new node to an existing and running load balancer? Let's say I have a load balancer running and already balancing, say, my API server between two EC2 instances, and suddenly there's a traffic spike and I need a third node in the load balancer but I'm asleep... It would be wonderful if I had something monitoring RAM usage and some key performance indicators that tells me when I should add another node, and even better if it could add a new node to the load balancer on its own...
I'm confident that this is possible and even trivial to do with node-http-proxy and distribute, but I'd like to know if this is possible to do with HAProxy and/or Nginx... I know Amazon's Elastic Load Balancing is probably my best bet but I want to do it on my own (I want to spawn instances from Rackspace, EC2, Joyent and probably others as is convenient).
Once again, spawning a node is easy; I'd like to know how to add it to haproxy.cfg or something similar with Nginx without having to reload the whole proxy, and how to do that programmatically. Bash scripting is my best bet for this, but it still has to reload the whole proxy, which is bad because it loses connections...
| You have a few questions in there. For the "add nodes to haproxy without restarting it":
What I do for a similar problem is prepopulate the config file with server names, e.g. web01, web02 ... web20, even if I only have 5 web servers at the time. Then in my hosts file I map those to the actual IPs of the web servers.
To add a new server, you just create an entry for it in the hosts file and it will start passing health checks and get added.
For automated orchestration, it really depends on your environment and you'll probably have to write something custom that fits your needs. There are paid solutions (Scalr comes to mind) to handle orchestration too.
| HAProxy | 9,300,163 | 13 |
Our haproxy loadbalancer opens thousands of connections to its backends
even though its settings say to open no more than 10 connections per server instance (see below). When I uncomment "option http-server-close" the number of backend connections drops; however, I would like to have keep-alive backend connections.
Why is maxconn not respected with http-keep-alive? I verified with ss that the opened backend connections are in the ESTABLISHED state.
defaults
log global
mode http
option http-keep-alive
timeout http-keep-alive 60000
timeout connect 6000
timeout client 60000
timeout server 20000
frontend http_proxy
bind *:80
default_backend backends
backend backends
option prefer-last-server
# option http-server-close
timeout http-keep-alive 1000
server s1 10.0.0.21:8080 maxconn 10
server s2 10.0.0.7:8080 maxconn 10
server s3 10.0.0.22:8080 maxconn 10
server s4 10.0.0.16:8080 maxconn 10
| In keep-alive mode, idle connections are not counted against maxconn. As explained in this HAProxy mailing-list thread:
The thing is, you don't want
to leave requests waiting in a server's queue while the server has a ton
of idle connections.
This even makes more sense, knowing that browsers initiate preconnect to improve page performance. So in keep-alive mode only outstanding/active connections are taken into account.
You can still enforce maxconn limits regardless of connection state by using tcp mode, especially since I don't see a particular reason for using mode http in your current configuration (apart from having richer logs).
Or you can use http-reuse with http mode to achieve the lowest possible number of concurrent connections.
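As a rough sketch of the second option (http-reuse requires HAProxy 1.6 or newer; treat this as an illustration rather than a drop-in config):
backend backends
    mode http
    balance roundrobin
    http-reuse safe    # let requests share idle server-side connections
    server s1 10.0.0.21:8080 maxconn 10
    server s2 10.0.0.7:8080 maxconn 10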
| HAProxy | 44,110,899 | 13 |
I would like to ask how HAProxy can help in routing requests depending on parts of the URL.
To give you an overview of my setup, I have the HAProxy machine and the two backends:
IIS website (main site)
Wordpress blog on NGINX (a subsite)
The use-case:
I'm expecting to route requests depending on the URL:
www.website.com/lang/index.aspx -> main site
www.website.com/lang/blog/articlexx -> blog subsite
The blog access URL is "/server/blog/lang/articlexx" so I have to rewrite the original client request to that format--which is basically switching "blog" and "lang".
From how I understood the configuration documentation and some posts on the net, I could use reqrep/reqirep to change the request HTTP headers before it gets passed to a backend. And if that's right, then this configuration should work:
frontend vFrontLiner
bind x.x.x.x:x
mode http
option httpclose
default_backend iis_website
# the switch: x/lang/blog -? x/blog/lang
reqirep ^/(.*)/(blog)/(.*) /if\2/\1/\3
acl blog path_beg -i /lang/blog/
use_backend blog_website if blog
backend blog_website
mode http
option httpclose
cookie xxblogxx insert indirect nocache
server BLOG1 x.x.x.x:80 cookie s1 check inter 5s rise 2 fall 3
server BLOG2 x.x.x.x:80 cookie s2 check inter 5s rise 2 fall 3 backup
The problem: the requests received by the blog_website backend still have the original URL "x/lang/blog".
I might have missed something on the regex part, but my main concern is whether my understanding of using reqirep is correct in the first place. I would appreciate any help.
Thanks very much.
| Your regex is wrong: reqrep/reqirep matches against the whole HTTP request line (method, path, version), not just the path, so your pattern never matches. To rewrite the path inside the request line, use a regex like this one:
reqrep ^([^\ ]*)\ /lang/blog/(.*) \1\ /blog/lang/\2
You can use reqirep (the case-insensitive variant) as well, but that is only useful if your servers actually serve paths like /BLog/lAnG/ too.
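One way to avoid ordering surprises between the rewrite and the ACL is to keep the ACL on the original /lang/blog/ path in the frontend and do the rewrite in the blog backend. A minimal sketch, reusing the placeholders from the question (verify against your HAProxy version):
backend blog_website
    mode http
    # rewrite /lang/blog/... to /blog/lang/... before it reaches the blog servers
    reqrep ^([^\ ]*)\ /lang/blog/(.*) \1\ /blog/lang/\2
    cookie xxblogxx insert indirect nocache
    server BLOG1 x.x.x.x:80 cookie s1 check
Note that reqrep/reqirep were removed in HAProxy 2.1; on newer versions the rough equivalent is http-request replace-path ^/lang/blog/(.*) /blog/lang/\1.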
| HAProxy | 8,202,574 | 12 |
I'm reading up on SockJS node server. Documentation says:
Often WebSockets don't play nicely with proxies and load balancers. Deploying a SockJS server behind Nginx or Apache could be painful. Fortunately recent versions of an excellent load balancer HAProxy are able to proxy WebSocket connections. We propose to put HAProxy as a front line load balancer and use it to split SockJS traffic from normal HTTP data.
I'm curious if anyone can expand on the problem that is being solved by HAProxy in this case? Specifically:
Why websockets don't play nice with proxies and load balancers?
Why deploying Sockjs sever behind Apache is painful?
|
1. Why websockets don't play nice with proxies and load balancers?
I'd recommend you read this article on How HTML5 Web Sockets Interact With Proxy Servers by Peter Lubbers. It should cover everything you need to know about WebSocket and proxies - and thus, load balancers.
2. Why deploying Sockjs sever behind Apache is painful?
There is a module for handling WebSocket connections but at present Apache doesn't natively support WebSocket, nor does it look like it will any time soon based on this bug filed on apache - HTML5 Websocket implementation. The suggestion is that it actually fits the module pattern better.
So, it's "painful" simply because it's not easy - there's no official support and therefore it doesn't have the community use that it otherwise may have.
There may also be other pains, in that SockJS has HTTP-based fallback transports. So you need to proxy both the WebSocket connections (using the apache-websocket module) and also the HTTP requests when a fallback is used.
Related to this: Nginx v1.3 was released in February with WebSocket support.
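To illustrate the "split SockJS traffic from normal HTTP data" idea from the SockJS docs, a minimal HAProxy sketch might look like this (the /sockjs prefix, ports and addresses are assumptions; timeout tunnel requires HAProxy 1.5+):
frontend public
    bind *:80
    mode http
    acl is_sockjs path_beg /sockjs
    use_backend sockjs_servers if is_sockjs
    default_backend web_servers

backend sockjs_servers
    mode http
    timeout tunnel 1h    # keep long-lived WebSocket/streaming connections open
    server node1 127.0.0.1:9999 check

backend web_servers
    mode http
    server web1 127.0.0.1:8080 check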
| HAProxy | 15,817,784 | 12 |
Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node?
From what I read, it seems that all actions other than publishes go only to the master, and the master then broadcasts the effect of the actions to the slaves (this is from the documentation). From my understanding, it means a consumer will always consume messages from the master queue. Also, if I send a request to a slave for consuming a message, that slave will do an extra hop by going to the master to fetch that message.
But what happens when I publish to a slave node? Will this node do the same thing of sending first the message to the master?
It seems there are many extra hops when dealing with slaves, so it seems you could get better performance if you only ever talk to the master. But how do you handle master failure? One of the slaves will then be elected master, so you have to know where to connect?
I'm asking all of this because we are using a RabbitMQ cluster with HAProxy in front, so we can decouple the cluster structure from our apps. This way, whenever a node goes down, HAProxy will redirect to living nodes. But we have problems when we kill one of the rabbit nodes. The connection to rabbit is permanent, so if it fails, you have to recreate it. Also, you have to resend the messages in these cases, otherwise you will lose them.
Even with all of this, messages can still be lost, because they may be in transit when I kill a node (in some buffers, somewhere on the network, etc.). So you have to use transactions or publisher confirms, which guarantee delivery once all the mirrors have received the message. But here is another issue: you may have duplicate messages, because the broker might have sent a confirmation that never reached the producer (due to network failures, etc.). Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.
Is there a way of avoiding this? Or do I have to decide between losing a couple of messages and duplicating some messages?
|
Can someone please explain what is going on behind the scenes in a RabbitMQ cluster with multiple nodes and queues in mirrored fashion when publishing to a slave node?
This blog outlines exactly what happens.
But what happens when I publish to a slave node? Will this node do the same thing of sending first the message to the master?
The message will be redirected to the master Queue - that is, the node on which the Queue was created.
But how do you handle master failure? Then one of the slaves will be elected master, so you have to know where to connect to?
Again, this is covered here. Essentially, you need a separate service that polls RabbitMQ and determines whether nodes are alive or not. RabbitMQ provides a management API for this. Your publishing and consuming applications need to refer to this service, either directly or through a mutual data-store, in order to determine the correct node to publish to or consume from.
The connection to rabbit is permanent, so if it fails, you have to recreate it. Also, you have to resend the messages in this cases, otherwise you will lose them.
You need to subscribe to connection-interrupted events to react to severed connections. You will need to build in some level of redundancy on the client in order to ensure that messages are not lost. I suggest, as above, that you introduce a service specifically designed to interrogate RabbitMQ. Your client can attempt to publish a message to the last known active connection, and should this fail, the client might ask the monitor service for an up-to-date listing of the RabbitMQ cluster. Assuming that there is at least one active node, the client may then establish a connection to it and publish the message successfully.
Even with all of this, messages can still be lost, because they may be in transit when I kill a node
There are certain edge-cases that you can't cover with redundancy, and neither can RabbitMQ. For example, when a message lands in a Queue, and the HA policy invokes a background process to copy the message to a backup node. During this process there is potential for the message to be lost before it is persisted to the backup node. Should the active node immediately fail, the message will be lost for good. There is nothing that can be done about this. Unfortunately, when we get down to the level of actual bytes travelling across the wire, there's a limit to the amount of safeguards that we can build.
Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.
You can handle this a number of ways. For example, setting the message-ttl to a relatively low value will ensure that duplicated messages don't remain on the Queue for extended periods of time. You can also tag each message with a unique reference, and check that reference at the consumer level. Of course, this would require storing a cache of processed messages to compare incoming messages against; the idea being that if a previously processed message arrives, its tag will have been cached by the consumer, and the message can be ignored.
One thing that I'd stress with AMQP and Queue-based solutions in general is that your infrastructure provides the tools, but not the entire solution. You have to bridge those gaps based on your business needs. Often, the best solution is derived through trial and error. I hope my suggestions are of use. I blog about a number of RabbitMQ design solutions, including the issues you mentioned, here if you're interested.
| HAProxy | 27,104,726 | 12 |
My problem is that I have a docker-compose.yml file and an haproxy.cfg file, and I want docker-compose to copy the haproxy.cfg file into the Docker container. As per the post Docker composer copy files I can use volumes to do it, but in my case I'm getting the below error. Can anybody help me achieve this?
Below is the code and everything
docker-compose.yml
version: "3.3"
services:
###After all services are up, we are initializing the gateway
gateway:
container_name: gateway-haproxy
image: haproxy
volumes:
- .:/usr/local/etc/haproxy
ports:
- 80:80
network_mode: "host"
Folder Structure
Command output
root@ubuntu:/home/karunesh/Desktop/Stuff/SelfStudy/DevOps/docker# docker-compose up
Creating gateway-haproxy ...
Creating gateway-haproxy ... done
Attaching to gateway-haproxy
gateway-haproxy | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -Ds
gateway-haproxy | [ALERT] 219/163305 (6) : [/usr/local/sbin/haproxy.main()] No enabled listener found (check for 'bind' directives) ! Exiting.
gateway-haproxy | <5>haproxy-systemd-wrapper: exit, haproxy RC=1
gateway-haproxy exited with code 1
| Try this:
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
Instead of mounting the whole directory, this will only mount haproxy.cfg. The ro is an abbreviation for read-only, and its usage guarantees the container won't modify it after it gets mounted.
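In the context of the compose file from the question, the service would look roughly like this (only the volumes entry changes):
version: "3.3"
services:
  gateway:
    container_name: gateway-haproxy
    image: haproxy
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    ports:
      - 80:80
    network_mode: "host"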
| HAProxy | 45,573,831 | 12 |
What is the difference between Nginx ingress controller and HAProxy load balancer in kubernetes?
| First, let's have a quick overview of what an Ingress Controller is in Kubernetes.
Ingress Controller: controller that responds to changes in Ingress rules and changes its internal configuration accordingly
So, both the HAProxy ingress controller and the Nginx ingress controller will listen for these Ingress configuration changes and configure their own running server instances to route traffic as specified in the targeted Ingress rules. The main differences come down to the specific differences in use cases between Nginx and HAProxy themselves.
For the most part, Nginx comes with more batteries included for serving web content, such as configurable content caching, serving local files, etc. HAProxy is more stripped down, and better equipped for high-performance network workloads.
The available configurations for HAProxy can be found here and the available configuration methods for Nginx ingress controller are here.
I would add that Haproxy is capable of doing TLS / SSL offloading (SSL termination or TLS termination) for non-http protocols such as mqtt, redis and ftp type workloads.
The differences go deeper than this, however, and these issues go into more detail on them:
https://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
HAProxy vs. Nginx
| HAProxy | 55,166,389 | 12 |
here is my config file
listen solr 0.0.0.0:8983
mode http
balance roundrobin
option httpchk GET "/solr/select/?q=id:1234" HTTP/1.1
server solr_slave 1.1.1.1:8983 maxconn 5000 weight 256 check
server solr_master 2.2.2.2:8983 maxconn 5000 weight 1 check
The problem is that my Solr server is protected using basic HTTP authentication, and hence the health check always fails.
How do I tell HAProxy to use those credentials during the health checks?
| A little late, but I just came across the same problem, and wanted to share the solution with the world. First, you base64-encode the credentials:
$ echo -n "user:pass" | base64
dXNlcjpwYXNz
(Make sure you use the -n switch so you don't append a newline.)
The option httpchk allows you to add arbitrary HTTP headers to the request; this feature isn't documented very well. (According to this discussion, future versions of Haproxy might get a more user-friendly method.) To use basic authentication:
option httpchk GET /solr/ HTTP/1.0\r\nAuthorization:\ Basic\ dXNlcjpwYXNz
Note that I used HTTP 1.0; for 1.1, you also need a Host header.
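For example, an HTTP/1.1 variant might look like this (solr.example.com is a placeholder hostname; use whatever Host your Solr instance expects):
option httpchk GET /solr/ HTTP/1.1\r\nHost:\ solr.example.com\r\nAuthorization:\ Basic\ dXNlcjpwYXNz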
| HAProxy | 13,325,882 | 11 |
I am trying to understand the fundamental use cases for these two and when to prefer one over the other.
For example, HAProxy supports TCP load balancing. In practical cases, when would this be a deciding factor?
Also, what can an HTTP LB achieve that a TCP LB cannot, and vice versa?
Any reference architecture?
Any relevant link with a practical use-case example would be great.
|
For example, HAProxy supports TCP load balancing. In practical cases, when would this be a deciding factor?
You would probably use this if you are using sockets/comet/full-duplex
Also, what can an HTTP LB achieve that a TCP LB cannot, and vice versa?
Currently HAproxy is the most reliable LB on the market for TCP and websockets.
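To make the difference concrete, here is a minimal sketch (addresses, ports and backend names are made up for illustration). In tcp mode HAProxy only sees a byte stream, so it can balance any TCP protocol but cannot inspect requests; in http mode it parses each request, so it can route on paths or headers, insert cookies, and rewrite headers:
listen mysql_lb
    bind *:3306
    mode tcp
    balance leastconn
    server db1 10.0.0.11:3306 check

frontend web
    bind *:80
    mode http
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend api_servers
    mode http
    server api1 10.0.0.21:8080 check

backend web_servers
    mode http
    server web1 10.0.0.31:8080 check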
Any reference architecture
This would give you a start http://highscalability.com/blog/2014/8/11/the-easy-way-of-building-a-growing-startup-architecture-usin.html
| HAProxy | 25,743,419 | 11 |
When I open the HAProxy statistics report page of my HTTP proxy server, I see something like this:
Cum. connections: 280073
Cum. sessions : 3802
Cum. HTTP requests: 24245
I'm not using 'appsession' and any other cookie related command in the configuration. So what's 'session' means here?
I guess haproxy identify http session by this order:
Use cookie or query string if it's exists in the configuration.
Use SSL/TLS session.
Use ip address and TCP connection status.
Am I Right?
| I was asking myself the very same question this morning.
Searching through http://www.haproxy.org/download/1.5/doc/configuration.txt I came across this very short definition (hidden in a parameter description):
A session is a connection that was accepted by the layer 4 rules.
In your case, you're obviously using HAProxy as a layer-7/HTTP load balancer. Since a session is essentially a TCP connection, and client-side/frontend Keep-Alive lets several requests share one connection, it's normal to have more HTTP requests than sessions.
Then I guess the high connection number shows that a lot of incoming connections were rejected even before being considered by the HTTP layer. For instance via IP-based ACLs.
As far as I understand, the word 'session' was introduced to make sure two different concepts were not mixed:
a (TCP) connection: a discrete event
a (TCP) session: a state which tracks various metadata and has some duration; most importantly, HAProxy workload (CPU and memory) should be mostly related to the number of sessions (both arrival rate and concurrent number)
| HAProxy | 33,168,469 | 11 |
We're working on scaling out our EC2 architecture to a point where we'd like to manage our own load balancing. We currently have a series of machines configured on HAProxy to do basic load balancing, but we're looking for the 'best practice' means to have a new instance come online and automatically (or nearly automatically) join HAProxy.
Ideally, we'd monitor load on our systems or rely on a few years' worth of analytics data to work out a rough schedule, and when we reach a threshold or scheduled time, have a process fire up a new instance, have that new node connect to a system on our HAProxy machine to write its hostname into the config, and reload HAProxy so it becomes part of the pool.
We're considering Amazon's ELB once we grow big enough to need multiple zone coverage, but until then, we need a simple setup that can add/remove machines from HAProxy.
I know there are services out there that we can pay to manage this stuff, but Scalr seems to limit us to very specific instance types, and Rightscale is too expensive, so like many others, we're looking to roll our own solution.
Unfortunately, those who roll their own solution seem to be a little hush-hush on their process.
| You don't need to over-think this solution ;)
You can simply "pre-configure" servers in your HAProxy configuration file. They will appear "down" and will never receive requests until you actually bring them online.
Here's an example, assuming you only have 5 machines online, and expect to have 10 in the next 2 years:
listen web *:80
balance source
server web1 192.168.0.101:80 check inter 2000 fall 3
server web2 192.168.0.102:80 check inter 2000 fall 3
server web3 192.168.0.103:80 check inter 2000 fall 3
server web4 192.168.0.104:80 check inter 2000 fall 3
server web5 192.168.0.105:80 check inter 2000 fall 3
server web6 192.168.0.106:80 check inter 2000 fall 3
server web7 192.168.0.107:80 check inter 2000 fall 3
server web8 192.168.0.108:80 check inter 2000 fall 3
server web9 192.168.0.109:80 check inter 2000 fall 3
server web10 192.168.0.110:80 check inter 2000 fall 3
With this config, you won't need to restart HAProxy or do any kind of ugly hacks for at least a year (unless you need more than 10, then just add 100 and you'll be set).
You can also write a quick shell script to automatically generate this configuration; in fact, you SHOULD write a script for that if you're adding 100 servers to your pool.
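As a rough sketch of such a script (it assumes sequential private IPs starting at 192.168.0.101, which almost certainly needs adapting to your environment):
#!/bin/bash
# generate the listen block with pre-declared servers
{
  echo "listen web *:80"
  echo "    balance source"
  for i in $(seq 1 20); do
    printf "    server web%d 192.168.0.%d:80 check inter 2000 fall 3\n" "$i" $((100 + i))
  done
} > /etc/haproxy/haproxy.cfg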
| HAProxy | 6,604,096 | 10 |
This issue is driving me insane, so maybe someone could help me understand what the issue is. I have a tomcat web application being fronted by HAProxy. HAProxy is also doing SSL offloading, and is configured to use sticky sessions. I am using Tomcat's session replication feature which seems to be working just fine. The sessions appear on both appservers.
For some reason, Tomcat is generating a new JSESSIONID for every single web request, and then copying the contents of the old session into the new session. That is to say, my session contents are still there within the new session, but a new ID is generated and sent back to the client. But it only does this for my web application. It does not do this for the /manager application.
I have tried every trick in the book, such as setting this in my context.xml:
<Valve className="org.apache.catalina.authenticator.BasicAuthenticator" changeSessionIdOnAuthentication="false" />
And setting these attributes on my Context element:
<Context path="/myapp" reloadable="false" override="true" useNaming="false" allowLinking="true" useHttpOnly="false" sessionCookiePath="/" sessionCookiePathUsesTrailingSlash="false">
And still, the result is the same. Tomcat generates a new session id with every request and copies the contents of the old session into the new id.
I would suspect it had something to do with HAProxy, except that the /manager application is also behind HAProxy and it does not exhibit this behavior.
Why is Tomcat doing this, and what can I do to prevent it?
| It turns out that it was caused by Spring Security. We are using Spring Security 3.1.x, and by default it stores the authenticated credentials in the user's session. To counter session fixation attacks, it automatically copies the contents of the user's session to a new session ID and invalidates the old session.
The fix was to add the following to the http element in the security configuration, since we don't need to use the session in our application:
create-session="stateless"
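In the XML namespace configuration that might look roughly like this (the child elements are placeholders for whatever the application already declares):
<http create-session="stateless">
    <!-- existing intercept-url, http-basic, etc. go here -->
</http>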
Hopefully this helps someone else down the line.
| HAProxy | 14,466,595 | 10 |