Columns: question (string, 11 to 28.2k chars) · answer (string, 26 to 27.7k chars) · tag (130 classes) · question_id (int64, 935 to 78.4M) · score (int64, 10 to 5.49k)
Kubernetes seems to have a lot of objects. I can't seem to find the full list of objects anywhere. After briefly searching on Google, I can only find results that mention a subset of Kubernetes objects. Is the full list of objects documented somewhere, perhaps in the source code? Thank you.
The following command successfully displays all Kubernetes objects: kubectl api-resources Example [root@hsk-controller ~]# kubectl api-resources NAME SHORTNAMES KIND bindings Binding componentstatuses cs ComponentStatus configmaps cm ConfigMap endpoints ep Endpoints events ev Event limitranges limits LimitRange namespaces ns Namespace nodes no Node persistentvolumeclaims pvc PersistentVolumeClaim persistentvolumes pv PersistentVolume pods po Pod podtemplates PodTemplate replicationcontrollers rc ReplicationController resourcequotas quota ResourceQuota secrets Secret serviceaccounts sa ServiceAccount services svc Service initializerconfigurations InitializerConfiguration mutatingwebhookconfigurations MutatingWebhookConfiguration validatingwebhookconfigurations ValidatingWebhookConfiguration customresourcedefinitions crd,crds CustomResourceDefinition apiservices APIService controllerrevisions ControllerRevision daemonsets ds DaemonSet deployments deploy Deployment replicasets rs ReplicaSet statefulsets sts StatefulSet tokenreviews TokenReview localsubjectaccessreviews LocalSubjectAccessReview selfsubjectaccessreviews SelfSubjectAccessReview selfsubjectrulesreviews SelfSubjectRulesReview subjectaccessreviews SubjectAccessReview horizontalpodautoscalers hpa HorizontalPodAutoscaler cronjobs cj CronJob jobs Job brpolices br,bp BrPolicy clusters rcc Cluster filesystems rcfs Filesystem objectstores rco ObjectStore pools rcp Pool certificatesigningrequests csr CertificateSigningRequest leases Lease events ev Event daemonsets ds DaemonSet deployments deploy Deployment ingresses ing Ingress networkpolicies netpol NetworkPolicy podsecuritypolicies psp PodSecurityPolicy replicasets rs ReplicaSet nodes NodeMetrics pods PodMetrics networkpolicies netpol NetworkPolicy poddisruptionbudgets pdb PodDisruptionBudget podsecuritypolicies psp PodSecurityPolicy clusterrolebindings ClusterRoleBinding clusterroles ClusterRole rolebindings RoleBinding roles Role volumes rv Volume priorityclasses pc PriorityClass storageclasses sc StorageClass volumeattachments VolumeAttachment Note: the Kubernetes version is v1.12 (check with kubectl version).
Kubernetes
53,053,888
44
I daily find myself doing... $ kubectl --context=foo get pods < copy text manually > $ kubectl --context=foo logs dep1-12345678-10101 I would like to cycle through matching resources with $ kubectl --context=foo logs dep1<TAB> but this doesn't seem to do anything with my stock setup. Any ideas? osx 10.12.3 kubectl v1.4.5 zsh zsh 5.2 (x86_64-apple-darwin16.0)
Both bash and zsh support scripts that complete the typed command when you press <TAB>. The feature is called programmable completion, and you can find more details about it here: zsh completion. Fortunately, you don't need to write your own script - kubectl provides one for zsh > 5.2. Try running this command: source <(kubectl completion zsh). Another option is to use this tool: https://github.com/mkokho/kubemrr (disclaimer: I'm the author). The reason it exists is that the standard completion script is too slow - it might take seconds before the Kubernetes cluster replies with all pod names. kubemrr keeps the names locally, so the response comes back almost immediately.
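If you want the completion to load automatically in every new shell, one common approach (a sketch, assuming a standard zsh setup where ~/.zshrc is sourced and kubectl is on your PATH) is to add the source line there:

    # ~/.zshrc - load kubectl completion when kubectl is available
    if command -v kubectl >/dev/null 2>&1; then
      source <(kubectl completion zsh)
    fi

After opening a new terminal, kubectl logs dep1<TAB> should then complete matching resource names.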
Kubernetes
42,356,861
44
On my GCE Kubernetes cluster I can no longer create pods. Warning FailedScheduling pod (www.caveconditions.com-f1be467e31c7b00bc983fbe5efdbb8eb-438ef) failed to fit in any node fit failure on node (gke-prod-cluster-default-pool-b39c7f0c-c0ug): Insufficient CPU Looking at the allocated stats of that node Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default dev.caveconditions.com-n80z8 100m (10%) 0 (0%) 0 (0%) 0 (0%) default lamp-cnmrc 100m (10%) 0 (0%) 0 (0%) 0 (0%) default mongo-2-h59ly 200m (20%) 0 (0%) 0 (0%) 0 (0%) default www.caveconditions.com-tl7pa 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-cloud-logging-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 200Mi (5%) 200Mi (5%) kube-system kube-dns-v17-qp5la 110m (11%) 110m (11%) 120Mi (3%) 220Mi (5%) kube-system kube-proxy-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system kubernetes-dashboard-v1.1.0-orphh 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%) Allocated resources: (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 910m (91%) 210m (21%) 370Mi (9%) 470Mi (12%) Sure I have 91% allocated and can not fit another 10% into it. But is it not possible to over commit resources? The usage of the server is at about 10% CPU average What changes do I need to make for my Kubernetes cluster to be able to create more pods?
I recently had this same issue. After some research, I found that GKE has a default LimitRange with CPU requests limit set to 100m. You can validate this by running kubectl get limitrange -o=yaml. It's going to display something like this: apiVersion: v1 items: - apiVersion: v1 kind: LimitRange metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"limits","namespace":"default"},"spec":{"limits":[{"defaultRequest":{"cpu":"100m"},"type":"Container"}]}} creationTimestamp: 2017-11-16T12:15:40Z name: limits namespace: default resourceVersion: "18741722" selfLink: /api/v1/namespaces/default/limitranges/limits uid: dcb25a24-cac7-11e7-a3d5-42010a8001b6 spec: limits: - defaultRequest: cpu: 100m type: Container kind: List metadata: resourceVersion: "" selfLink: "" This limit is applied to every container. So, for instance, if you have a 4 cores node and each pod creates 2 containers, it will allow only for around ~20 pods to be created (4 cpus = 4000m -> / 100m = 40 -> / 2 = 20). The "fix" here is to change the default LimitRange to one that better fits your use-case and then remove old pods allowing them to be recreated with the updated values. Another (and probably better) option is to directly set the CPU limits on each deployment/pod definition you have. Some reading material: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits
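To make the second option concrete, here is a minimal sketch of how explicit CPU requests and limits look inside a container spec; the container name, image and values below are placeholders, not taken from the cluster above:

    # hypothetical fragment of a deployment/pod spec
    containers:
    - name: web                 # placeholder name
      image: nginx:latest       # placeholder image
      resources:
        requests:
          cpu: 50m              # request less than the 100m LimitRange default
        limits:
          cpu: 500m

With an explicit request like this, the LimitRange default no longer applies to that container, so more pods fit on the same node.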
Kubernetes
38,869,673
44
I used kubectl create serviceaccount sa1 to create a service account. Then I used the kubectl get serviceaccount sa1 -oyaml command to get the service account info, but it returns the following. apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: "2022-05-16T08:03:50Z" name: sa1 namespace: default resourceVersion: "19651" uid: fdddacba-be9d-4e77-a849-95ca243781cc I need to get the secrets: - name: <secret> part, but it doesn't return secrets. How can I fix it?
In Kubernetes 1.24, ServiceAccount token secrets are no longer automatically generated. See "Urgent Upgrade Notes" in the 1.24 changelog file: The LegacyServiceAccountTokenNoAutoGeneration feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the TokenRequest API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this guide. (#108309, @zshihang) This means, in Kubernetes 1.24, you need to manually create the Secret; the token key in the data field will be automatically set for you. apiVersion: v1 kind: Secret metadata: name: sa1-token annotations: kubernetes.io/service-account.name: sa1 type: kubernetes.io/service-account-token Since you're manually creating the Secret, you know its name: and don't need to look it up in the ServiceAccount object. This approach should work fine in earlier versions of Kubernetes too.
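As a rough illustration of the workflow (assuming the manifest above is saved as sa1-token.yaml; the file name is arbitrary), you can apply it and then read the token once the token controller has populated the Secret:

    kubectl apply -f sa1-token.yaml
    kubectl get secret sa1-token -o jsonpath='{.data.token}' | base64 --decode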
Kubernetes
72,256,006
43
I'm running Docker Desktop for MacOS and I don't know how to stop the Docker service. It runs all the time using up the MacBook battery. On a simple search, there are docs showing how to stop the containers but not the docker service itself. I might be missing something obvious, but is there a way to stop both Kubernetes and Docker service without having to kill the desktop app?
The Docker Desktop app starts a qemu VM, so the desktop app has no control over the PIDs. To overcome the situation, do the following: open the Terminal app, edit the file ~/.bash_profile and add the following lines: #macro to kill the docker desktop app and the VM (excluding vmnetd -> it's a service) function kdo() { ps ax|grep -i docker|egrep -iv 'grep|com.docker.vmnetd'|awk '{print $1}'|xargs kill } Save the file, quit the Terminal app and open it again. Type kdo to kill all the dependent apps (hypervisor, docker daemon, etc.).
Kubernetes
64,799,841
43
I'm looking for a way to export a YAML file from a deployed component, but without the cluster-specific information. kubectl get MYOBJECT --export -o yaml > my.yaml But since "export" is now deprecated (since 1.14) and should normally disappear in 1.18 (I didn't find it in the changelog), what would be an alternative? Thanks
Using JQ does the trick. kubectl get secret <secretname> -ojson | jq 'del(.metadata.namespace,.metadata.resourceVersion,.metadata.uid) | .metadata.creationTimestamp=null' produces exactly the same JSON as kubectl get secret <secretname> -ojson --export
Kubernetes
61,392,206
43
We are using one namespace for the development environment and one for the staging environment. Inside each of these namespaces we have several ConfigMaps and Secrets, but there are a lot of shared variables between the two environments, so we would like to have a common file for those. Is there a way to have a base ConfigMap in the default namespace and refer to it using something like: - envFrom: - configMapRef: name: default.base-config-map If this is not possible, is there any way other than duplicating the variables across namespaces?
Kubernetes 1.13 and earlier: They cannot be shared, because they cannot be accessed from pods outside of their namespace. Names of resources need to be unique within a namespace, but not across namespaces. The workaround is to copy them over. Copy secrets between namespaces: kubectl get secret <secret-name> --namespace=<source-namespace> --export -o yaml \ | kubectl apply --namespace=<destination-namespace> -f - Copy configmaps between namespaces: kubectl get configmap <configmap-name> --namespace=<source-namespace> --export -o yaml \ | kubectl apply --namespace=<destination-namespace> -f - Kubernetes 1.14+: The --export flag was deprecated in 1.14. Instead, the following command can be used: kubectl get secret <secret-name> --namespace=<source-namespace> -o yaml \ | sed 's/namespace: <from-namespace>/namespace: <to-namespace>/' \ | kubectl create -f - If someone still sees a need for the flag, there's an export script written by @zoidbergwill.
Kubernetes
55,515,594
43
How can I describe this command in YAML format? kubectl create configmap somename --from-file=./conf/nginx.conf I'd expect to do something like the following YAML, but it doesn't work: apiVersion: v1 kind: ConfigMap metadata: name: somename namespace: default fromfile: ./conf/nginx.conf Any idea?
That won't work, because kubernetes isn't aware of the local file's path. You can simulate it by doing something like this: kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml The --dry-run flag will simply show your changes on stdout, and not make the changes on the server. This will output a valid configmap, so if you pipe it to a file, you can use that: kubectl create configmap --dry-run=client somename --from-file=./conf/nginx.conf --output yaml | tee somename.yaml
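For illustration, the generated manifest will look roughly like the following; the key is the file name and the value is the file's contents, which of course depend on your actual nginx.conf:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: somename
    data:
      nginx.conf: |
        # ...contents of ./conf/nginx.conf...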
Kubernetes
51,268,488
43
I understand Ingress can be used when we want to expose multiple services/routes with a single load balancer / public IP. Now I want to expose my Nginx server to the public. I have two choices: (1) set the service type as LoadBalancer - voila, I get a public IP; (2) use the Nginx ingress controller. Since I can get my job done with option 1, when or why would I choose option 2? What's the advantage of using Nginx with Ingress versus without Ingress?
There is a difference between an ingress rule (ingress) and an ingress controller, so, technically, the nginx ingress controller and a LoadBalancer type service are not comparable. You can compare an ingress resource and a LoadBalancer type service, which is done below. Generally speaking: a LoadBalancer type service is an L4 (TCP) load balancer. You would use it to expose a single app or service to the outside world. It balances the load based on destination IP address and port. An Ingress resource creates an L7 (HTTP/S) load balancer. You would use this to expose several services at the same time, as an L7 LB is application aware, so it can determine where to send traffic depending on the application state. Ingress and ingress controller relation: ingress, or ingress rules, are the rules that the ingress controller follows to distribute the load. The ingress controller gets the packet, checks the ingress rules and determines to which service to deliver the packet. Nginx ingress controller: the nginx ingress controller actually uses a LoadBalancer type service as the entrypoint to the cluster. Then it checks the ingress rules and distributes the load. This can be very confusing: you create an ingress resource, it creates the HTTP/S load balancer and also gives you an external IP address (on GKE, for example), but when you try hitting that IP address, the connection is refused. Conclusions: you would use a LoadBalancer type service if you have a single app, say myapp.com, that you want mapped to an IP address. You would use an ingress resource if you have several apps, say myapp1.com, myapp1.com/mypath, myapp2.com, .., myappn.com, to be mapped to one IP address. As the ingress is L7, it is able to distinguish between myapp1.com and myapp1.com/mypath and route the traffic to the right service.
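To make the multi-app case concrete, here is a minimal sketch of an ingress that maps two hypothetical hosts to two hypothetical services behind a single IP (written with the current networking.k8s.io/v1 syntax; all names are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: apps-ingress
    spec:
      rules:
      - host: myapp1.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp1-svc
                port:
                  number: 80
      - host: myapp2.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp2-svc
                port:
                  number: 80

The ingress controller (nginx or otherwise) reads these rules and routes each Host header to the right service.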
Kubernetes
50,966,300
43
How can the OR expression be used with selectors and labels? selector: app: myapp tier: frontend The above matches pods where app==myapp AND tier==frontend. But can an OR expression be used, i.e. app==myapp OR tier==frontend?
Now you can do that: kubectl get pods -l 'environment in (production, qa)' See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#list-and-watch-filtering
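In manifests that accept set-based selectors (Deployments, ReplicaSets, Jobs and similar), the same kind of OR over the values of a single label key can be written with matchExpressions, for example:

    selector:
      matchExpressions:
      - key: environment
        operator: In
        values:
        - production
        - qa

Note that this is an OR over the values of one key; multiple matchExpressions entries are still ANDed together, so an OR across different keys (app==myapp OR tier==frontend) cannot be expressed by a single selector.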
Kubernetes
46,028,731
43
I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which is working fine when I use the Nginx image, when I try and use my own application image though the backend switches to unhealthy. My application redirects on / (returns a 302) but I added a livenessProbe in the pod definition: livenessProbe: httpGet: path: /ping port: 4001 httpHeaders: - name: X-health-check value: kubernetes-healthcheck - name: X-Forwarded-Proto value: https - name: Host value: foo.bar.com My ingress looks like: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foo spec: backend: serviceName: foo servicePort: 80 rules: - host: foo.bar.com Service configuration is: kind: Service apiVersion: v1 metadata: name: foo spec: type: NodePort selector: app: foo ports: - port: 80 targetPort: 4001 Backends health in ingress describe ing looks like: backends: {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"} and the rules on the ingress look like: Rules: Host Path Backends ---- ---- -------- * * foo:80 (10.0.0.7:4001,10.0.1.6:4001) Any pointers greatly received, I've been trying to work this out for hours with no luck. Update I have added the readinessProbe to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like: readinessProbe: httpGet: path: /ping port: 4001 httpHeaders: - name: X-health-check value: kubernetes-healthcheck - name: X-Forwarded-Proto value: https - name: Host value: foo.com I changed my service to: kind: Service apiVersion: v1 metadata: name: foo spec: type: NodePort selector: app: foo ports: - port: 4001 targetPort: 4001 Update2 After I removed the custom headers from the readinessProbe it started working! Many thanks.
You need to add a readinessProbe (just copy your livenessProbe). It's explained in the GCE L7 Ingress Docs. Health checks Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer: 1. Respond with a 200 on '/'. The content does not matter. 2. Expose an arbitrary url as a readiness probe on the pods backing the Service. Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port, if you add another one you may run into trouble.
Kubernetes
39,294,305
43
I'm new to Kubernetes. I try to scale my pods. First I started 3 pods: ./cluster/kubectl.sh run my-nginx --image=nginx --replicas=3 --port=80 There were starting 3 pods. First I tried to scale up/down by using a replicationcontroller but this did not exist. It seems to be a replicaSet now. ./cluster/kubectl.sh get rs NAME DESIRED CURRENT AGE my-nginx-2494149703 3 3 9h I tried to change the amount of replicas described in my replicaset: ./cluster/kubectl.sh scale --replicas=5 rs/my-nginx-2494149703 replicaset "my-nginx-2494149703" scaled But I still see my 3 original pods ./cluster/kubectl.sh get pods NAME READY STATUS RESTARTS AGE my-nginx-2494149703-04xrd 1/1 Running 0 9h my-nginx-2494149703-h3krk 1/1 Running 0 9h my-nginx-2494149703-hnayu 1/1 Running 0 9h I would expect to see 5 pods. ./cluster/kubectl.sh describe rs/my-nginx-2494149703 Name: my-nginx-2494149703 Namespace: default Image(s): nginx Selector: pod-template-hash=2494149703,run=my-nginx Labels: pod-template-hash=2494149703 run=my-nginx Replicas: 3 current / 3 desired Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed Why isn't it scaling up? Do I also have to change something in the deployment? I see something like this when I describe my rs after scaling up: (Here I try to scale from one running pod to 3 running pods). But it remains one running pod. The other 2 are started and killed immediatly 34s 34s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: my-nginx-1908062973-lylsz 34s 34s 1 {replicaset-controller } Normal SuccessfulCreate Created pod: my-nginx-1908062973-5rv8u 34s 34s 1 {replicaset-controller } Normal SuccessfulDelete Deleted pod: my-nginx-1908062973-lylsz 34s 34s 1 {replicaset-controller } Normal SuccessfulDelete Deleted pod: my-nginx-1908062973-5rv8u
This is working for me kubectl scale --replicas=<expected_replica_num> deployment <deployment_label_name> -n <namespace> Example # kubectl scale --replicas=3 deployment xyz -n my_namespace
Kubernetes
38,344,896
43
I have created a cluster of three nodes: one master, two minions. How to check the cluster IP in Kubernetes? Is it the IP of the master node?
ClusterIP can mean 2 things: a type of service which is only accessible within a Kubernetes cluster, or the internal ("virtual") IP of components within a Kubernetes cluster. Assuming you're asking about finding the internal IP of a cluster, it can be accessed in 3 ways (using the simple-nginx example): Via command line kubectl utility: $ kubectl describe service my-nginx Name: my-nginx Namespace: default Labels: run=my-nginx Selector: run=my-nginx Type: LoadBalancer IP: 10.123.253.27 LoadBalancer Ingress: 104.197.129.240 Port: <unnamed> 80/TCP NodePort: <unnamed> 30723/TCP Endpoints: 10.120.0.6:80 Session Affinity: None No events. Via the kubernetes API (here I've used kubectl proxy to route through localhost to my cluster): $ kubectl proxy & $ curl -G http://localhost:8001/api/v1/namespaces/default/services/my-nginx { "kind": "Service", "apiVersion": "v1", "metadata": <omitted>, "spec": { "ports": [ { "protocol": "TCP", "port": 80, "targetPort": 80, "nodePort": 30723 } ], "selector": { "run": "my-nginx" }, "clusterIP": "10.123.253.27", "type": "LoadBalancer", "sessionAffinity": "None" }, "status": { "loadBalancer": { "ingress": [ { "ip": "104.197.129.240" } ] } } } Via the $<NAME>_SERVICE_HOST environment variable within a Kubernetes container (in this example my-nginx-yczg9 is the name of a pod in the cluster): $ kubectl exec my-nginx-yczg9 -- sh -c 'echo $MY_NGINX_SERVICE_HOST' 10.123.253.27 More details on service IPs can be found in the Services in Kubernetes documentation, and the previously mentioned simple-nginx example is a good example of exposing a service outside your cluster with the LoadBalancer service type.
Kubernetes
33,407,638
43
I have a configmap where I have defined the following key-value mapping in the data section: apiVersion: v1 kind: ConfigMap metadata: namespace: test name: test-config data: TEST: "CONFIGMAP_VALUE" then in the definition of my container (in the deployment / statefulset manifest) I have the following: env: - name: TEST value: "ANOTHER_VALUE" envFrom: - configMapRef: name: test-config When doing this I was expecting that the value from the configmap (TEST="CONFIGMAP_VALUE") will override the (default) value specified in the container spec (TEST="ANOTHER_VALUE"), but this is not the case (TEST always gets the value from the container spec). I couldn't find any relevant documentation about this - is it possible to achieve such env variable value overriding?
From the Kubernetes API reference: envFrom : List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. So the above clearly states that env takes precedence over envFrom. When a key exists in multiple sources, the value associated with the last source will take precedence. So, for overriding, don't use envFrom, but define the value twice within env, see below: apiVersion: v1 kind: ConfigMap metadata: namespace: default name: test-config data: TEST: "CONFIGMAP_VALUE" --- apiVersion: v1 kind: Pod metadata: name: busy namespace: default spec: containers: - name: busybox image: busybox env: - name: TEST value: "DEFAULT_VAULT" - name: TEST valueFrom: configMapKeyRef: name: test-config key: TEST command: - "sh" - "-c" - > while true; do echo "$(TEST)"; sleep 3600; done Check: kubectl logs busy -n default CONFIGMAP_VALUE
Kubernetes
54,398,272
42
I'm trying to build a script that can follow (-f) kubectl get pods and see a realtime update when I make any changes to / delete pods on an Ubuntu server. What would be the easiest/most efficient way to do so?
You can just use kubectl get pod <your pod name> -w Whenever any update/change/delete happens to the pod, you will see the update. You can also use watch -n 1 kubectl get pod <your pod name> This will continuously run kubectl get pod ... at a 1-second interval, so you will see the latest state.
Kubernetes
53,485,346
42
I don't really understand this after reading through the documentation. Some places use the term "CPU", but some use "core". I am running Kubernetes on my laptop for testing purposes. My laptop has one CPU (2.2 GHz) and four cores. If I want to set the CPU request/limit for a pod, should the maximum resource that I have be 1000m or 4000m? What is the difference (CPU vs. core) here in a Kubernetes context?
To clarify what's described here in the Kubernetes context, 1 CPU is the same as a core (also more information here). 1000m (millicores) = 1 core = 1 vCPU = 1 AWS vCPU = 1 GCP core. 100m (millicores) = 0.1 core = 0.1 vCPU = 0.1 AWS vCPU = 0.1 GCP core. For example, an Intel Core i7-6700 has four cores, but it has Hyper-Threading, which doubles what the system sees in terms of cores. So in essence, it will show up in Kubernetes as: 8000m = 8 cores = 8 vCPUs Some extra information: these resources are managed by the kube-scheduler using the Completely Fair Scheduler (CFS), there are no guarantees in terms of overruns within the same machine, and your pod may be moved around. If you'd like stronger guarantees, you might consider the --cpu-manager-policy=static (CPU Manager) option in the kubelet. More information is here and here. For more details on what your system sees as a CPU (and the number of CPUs) on a Linux system, you can see how many vCPUs you have by running cat /proc/cpuinfo.
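As a small illustration of how these units appear in a container spec (the values are arbitrary):

    resources:
      requests:
        cpu: 250m      # a quarter of a core/vCPU
      limits:
        cpu: "1"       # one full core, equivalent to 1000m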
Kubernetes
53,255,956
42
The docs state that: To create a Secret from one or more files, use --from-file. You specify files in any plaintext format, such as .txt or .env, as long as the files contain key-value pairs. .test-secret NAME=martin GENDER=male I am trying to create a secret based on my .test-secret file. kubectl create secret generic person --from-file .test-secret -o yml $ kubectl get secret person -o yaml apiVersion: v1 data: .test-secret: TkFNRT1tYXJ0aW4KR0VOREVSPW1hbGUK kind: Secret metadata: creationTimestamp: 2018-07-19T09:23:05Z name: person namespace: default resourceVersion: "229992" selfLink: /api/v1/namespaces/default/secrets/person uid: 579198ab-8b35-11e8-8895-42010a840008 type: Opaque Is it possible to read a list of key/values like that? Is it even possible to do so from an .env file? kubectl get pods returns CreateContainerConfigError my-app.yml - name: NAME valueFrom: secretKeyRef: name: person key: NAME
Yes, use the option --from-env-file kubectl create secret generic person --from-env-file=.test-secret To consume the secrets from the initial .env file in a pod, you can use the following : apiVersion: v1 kind: Pod metadata: name: some-meta spec: containers: - name: xyz image: abc envFrom: - secretRef: name: person # <--
Kubernetes
51,419,102
42
I am trying to run a Factorio game server on Kubernetes (hosted on GKE). I have setup a Stateful Set with a Persistent Volume Claim and mounted it in the game server's save directory. I would like to upload a save file from my local computer to this Persistent Volume Claim so I can access the save on the game server. What would be the best way to upload a file to this Persistent Volume Claim? I have thought of 2 ways but I'm not sure which is best or if either are a good idea: Restore a disk snapshot with the files I want to the GCP disk which backs this Persistent Volume Claim Mount the Persistent Volume Claim on an FTP container, FTP the files up, and then mount it on the game container
It turns out there is a much simpler way: the kubectl cp command. This command lets you copy data from your computer to a container running on your cluster. In my case I ran: kubectl cp ~/.factorio/saves/k8s-test.zip factorio/factorio-0:/factorio/saves/ This copied the k8s-test.zip file on my computer to /factorio/saves/k8s-test.zip in a container running on my cluster. See kubectl cp -h for more detailed usage information and examples.
Kubernetes
50,703,727
42
In a Kubernetes deployment I specify a port like so: containers: - name: nginx image: nginx:latest ports: - name: nginx-port containerPort: 80 protocol: TCP Now in a service I can reference that port like so (this allows me to only specify the external port in the service): spec: type: ClusterIP ports: - name: nginx-port port: 80 targetPort: nginx-port protocol: TCP Now the question: can I reference the service and port elsewhere using the following syntax: nginx-service.default.svc.cluster.local:nginx-port? I know I can refer to services using these special names, but I find myself hardcoding the port number like so: nginx-service.default.svc.cluster.local:80.
Usually, you refer to a target port by its number, but you can give a specific name to each pod's port and refer to this name in your service specification. This will make your service clearer. Here is a small example: apiVersion: v1 kind: Pod metadata: name: named-port-pod labels: app: named-port-pod spec: containers: - name: echoserver image: gcr.io/google_containers/echoserver:1.4 ports: - name: pod-custom-port containerPort: 8080 --- apiVersion: v1 kind: Service metadata: name: named-port-svc spec: ports: - port: 80 targetPort: pod-custom-port selector: app: named-port-pod
Kubernetes
48,886,837
42
I can delete all jobs inside a cluster by running kubectl delete jobs --all However, jobs are deleted one after another, which is pretty slow (for ~200 jobs I had the time to write this question and it was not even done). Is there a faster approach?
It's a little easier to set up an alias for this bash command: kubectl delete jobs `kubectl get jobs -o custom-columns=:.metadata.name`
Kubernetes
43,675,231
42
When running helm install (helm 3.0.2) I got the following error: Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PodSecurityPolicy, namespace: , name: po-kube-state-metrics But I can't find that resource, and the error doesn't give me the namespace. How can I remove it? When running kubectl get all --all-namespaces I see all the resources but not po-kube-state-metrics... It also happens to other resources, any idea? I get the same error for the monitoring-grafana entity, and the result of kubectl get PodSecurityPolicy --all-namespaces is: monitoring-grafana false RunAsAny RunAsAny RunAsAny RunAsAny false configMap,emptyDir,projected,secret,do
First of all you need to make sure you've successfully uninstalled the helm release, before reinstalling. To list all the releases, use: $ helm list --all --all-namespaces To uninstall a release, use: $ helm uninstall <release-name> -n <namespace> You can also use --no-hooks to skip running hooks for the command: $ helm uninstall <release-name> -n <namespace> --no-hooks If uninstalling doesn't solve your problem, you can try the following command to cleanup: $ helm template <NAME> <CHART> --namespace <NAMESPACE> | kubectl delete -f - Sample: $ helm template happy-panda stable/mariadb --namespace kube-system | kubectl delete -f - Now, try installing again. Update: Let's consider that your chart name is mon and your release name is po. Since you are in the charts directory (.) like below: . ├── mon │   ├── Chart.yaml │   ├── README.md │   ├── templates │   │   ├── one.yaml │   │   ├── two.yaml │   │   ├── three.yaml │   │   ├── _helpers.tpl │   │   ├── NOTES.txt │   └── values.yaml Then you can skip the helm repo name (i.e. stable) in the helm template command. Helm will use your mon chart from the directory. $ helm template po mon --namespace mon | kubectl delete -f -
Kubernetes
59,443,834
41
I have a kubernetes setup with the configuration like below: #--- kind: Service apiVersion: v1 metadata: name: myservice spec: selector: app: my-service ports: - protocol: "TCP" # Port accessible inside cluster port: 8080 # Port to forward to inside the pod targetPort: 80 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: my-service spec: replicas: 1 template: metadata: labels: app: my-service spec: containers: - name: my-service image: my-custom-docker-regisry/my-service:latest imagePullPolicy: Always ports: - containerPort: 8080 imagePullSecrets: - name: regcred and my ingress: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: "false" spec: rules: - http: paths: - path: /myservice backend: serviceName: myservice servicePort: 80 What I tried to do is pulling the image from my docker registry and run it in the kubernetes. I have configured here one deployment and one service and expose the service to the outside with the ingress. My minikube is running under ip 192.168.99.100 and when I tried to access my application with address: curl 192.168.99.100:80/myservice, I got 502 Bad Gateway. Does anyone have an idea why it happended or did I do something wrong with the configuration? Thank you in advanced!
Your ingress targets this service: serviceName: myservice servicePort: 80 but the service named myservice exposes port 8080 rather than 80: ports: - protocol: "TCP" # Port accessible inside cluster port: 8080 # Port to forward to inside the pod targetPort: 80 Your ingress should point to one of the ports exposed by the service. Also, the service itself targets port 80, but the pods in your deployment seem to expose port 8080, rather than 80: containers: - name: my-service image: my-custom-docker-regisry/my-service:latest imagePullPolicy: Always ports: - containerPort: 8080 So, long story short, it looks like you should swap port with targetPort in your service, so that: the pods expose port 8080; the service exposes port 8080 of all the pods under the service name myservice, on port 80; and the ingress configures nginx to proxy your traffic to service myservice on port 80.
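A sketch of the service after swapping the two values, assuming the ingress and deployment from the question stay as they are:

    kind: Service
    apiVersion: v1
    metadata:
      name: myservice
    spec:
      selector:
        app: my-service
      ports:
        - protocol: "TCP"
          port: 80          # port exposed by the service; what the ingress targets
          targetPort: 8080  # containerPort of the pods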
Kubernetes
54,783,778
41
I have a Nginx running inside a docker container. I have a MySql running on the host system. I want to connect to the MySql from within my container. MySql is only binding to the localhost device. Is there any way to connect to this MySql or any other program on localhost from within this docker container? This question is different from "How to get the IP address of the docker host from inside a docker container" due to the fact that the IP address of the docker host could be the public IP or the private IP in the network which may or may not be reachable from within the docker container (I mean public IP if hosted at AWS or something). Even if you have the IP address of the docker host it does not mean you can connect to docker host from within the container given that IP address as your Docker network may be overlay, host, bridge, macvlan, none etc which restricts the reachability of that IP address.
Edit: If you are using Docker-for-mac or Docker-for-Windows 18.03+, connect to your mysql service using the host host.docker.internal (instead of the 127.0.0.1 in your connection string). If you are using Docker-for-Linux 20.10.0+, you can also use the host host.docker.internal if you started your Docker container with the --add-host host.docker.internal:host-gateway option, or added the following snippet in your docker-compose.yml file : extra_hosts: - "host.docker.internal:host-gateway" Otherwise, read below TLDR Use --network="host" in your docker run command, then 127.0.0.1 in your docker container will point to your docker host. Note: This mode only works on Docker for Linux, per the documentation. Note on docker container networking modes Docker offers different networking modes when running containers. Depending on the mode you choose you would connect to your MySQL database running on the docker host differently. docker run --network="bridge" (default) Docker creates a bridge named docker0 by default. Both the docker host and the docker containers have an IP address on that bridge. on the Docker host, type sudo ip addr show docker0 you will have an output looking like: [vagrant@docker:~] $ sudo ip addr show docker0 4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::5484:7aff:fefe:9799/64 scope link valid_lft forever preferred_lft forever So here my docker host has the IP address 172.17.42.1 on the docker0 network interface. Now start a new container and get a shell on it: docker run --rm -it ubuntu:trusty bash and within the container type ip addr show eth0 to discover how its main network interface is set up: root@e77f6a1b3740:/# ip addr show eth0 863: eth0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 66:32:13:f0:f1:e3 brd ff:ff:ff:ff:ff:ff inet 172.17.1.192/16 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::6432:13ff:fef0:f1e3/64 scope link valid_lft forever preferred_lft forever Here my container has the IP address 172.17.1.192. Now look at the routing table: root@e77f6a1b3740:/# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 172.17.42.1 0.0.0.0 UG 0 0 0 eth0 172.17.0.0 * 255.255.0.0 U 0 0 0 eth0 So the IP Address of the docker host 172.17.42.1 is set as the default route and is accessible from your container. root@e77f6a1b3740:/# ping 172.17.42.1 PING 172.17.42.1 (172.17.42.1) 56(84) bytes of data. 64 bytes from 172.17.42.1: icmp_seq=1 ttl=64 time=0.070 ms 64 bytes from 172.17.42.1: icmp_seq=2 ttl=64 time=0.201 ms 64 bytes from 172.17.42.1: icmp_seq=3 ttl=64 time=0.116 ms docker run --network="host" Alternatively you can run a docker container with network settings set to host. Such a container will share the network stack with the docker host and from the container point of view, localhost (or 127.0.0.1) will refer to the docker host. Be aware that any port opened in your docker container would be opened on the docker host. And this without requiring the -p or -P docker run option. 
IP config on my docker host: [vagrant@docker:~] $ ip addr show eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::a00:27ff:fe98:dcaa/64 scope link valid_lft forever preferred_lft forever and from a docker container in host mode: [vagrant@docker:~] $ docker run --rm -it --network=host ubuntu:trusty ip addr show eth0 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 08:00:27:98:dc:aa brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::a00:27ff:fe98:dcaa/64 scope link valid_lft forever preferred_lft forever As you can see both the docker host and docker container share the exact same network interface and as such have the same IP address. Connecting to MySQL from containers bridge mode To access MySQL running on the docker host from containers in bridge mode, you need to make sure the MySQL service is listening for connections on the 172.17.42.1 IP address. To do so, make sure you have either bind-address = 172.17.42.1 or bind-address = 0.0.0.0 in your MySQL config file (my.cnf). If you need to set an environment variable with the IP address of the gateway, you can run the following code in a container : export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}') then in your application, use the DOCKER_HOST_IP environment variable to open the connection to MySQL. Note: if you use bind-address = 0.0.0.0 your MySQL server will listen for connections on all network interfaces. That means your MySQL server could be reached from the Internet ; make sure to set up firewall rules accordingly. Note 2: if you use bind-address = 172.17.42.1 your MySQL server won't listen for connections made to 127.0.0.1. Processes running on the docker host that would want to connect to MySQL would have to use the 172.17.42.1 IP address. host mode To access MySQL running on the docker host from containers in host mode, you can keep bind-address = 127.0.0.1 in your MySQL configuration and connect to 127.0.0.1 from your containers: [vagrant@docker:~] $ docker run --rm -it --network=host mysql mysql -h 127.0.0.1 -uroot -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 36 Server version: 5.5.41-0ubuntu0.14.04.1 (Ubuntu) Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> note: Do use mysql -h 127.0.0.1 and not mysql -h localhost; otherwise the MySQL client would try to connect using a unix socket.
NGINX
24,319,662
3,415
I've set up Node.js and Nginx on my server. Now I want to use it, but, before I start there are 2 questions: How should they work together? How should I handle the requests? There are 2 concepts for a Node.js server, which one is better: a. Create a separate HTTP server for each website that needs it. Then load all JavaScript code at the start of the program, so the code is interpreted once. b. Create one single Node.js server which handles all Node.js requests. This reads the requested files and evals their contents. So the files are interpreted on each request, but the server logic is much simpler. It's not clear for me how to use Node.js correctly.
Nginx works as a front end server, which in this case proxies the requests to a node.js server. Therefore you need to set up an Nginx config file for node. This is what I have done in my Ubuntu box: Create the file yourdomain.example at /etc/nginx/sites-available/: vim /etc/nginx/sites-available/yourdomain.example In it you should have something like: # the IP(s) on which your node server is running. I chose port 3000. upstream app_yourdomain { server 127.0.0.1:3000; keepalive 8; } # the nginx server instance server { listen 80; listen [::]:80; server_name yourdomain.example www.yourdomain.example; access_log /var/log/nginx/yourdomain.example.log; # pass the request to the node.js server with the correct headers # and much more can be added, see nginx config options location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://app_yourdomain/; proxy_redirect off; } } If you want Nginx (>= 1.3.13) to handle websocket requests as well, add the following lines in the location / section: proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; Once you have this setup you must enable the site defined in the config file above: cd /etc/nginx/sites-enabled/ ln -s /etc/nginx/sites-available/yourdomain.example yourdomain.example Create your node server app at /var/www/yourdomain/app.js and run it at localhost:3000 var http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\n'); }).listen(3000, "127.0.0.1"); console.log('Server running at http://127.0.0.1:3000/'); Test for syntax mistakes: nginx -t Restart Nginx: sudo /etc/init.d/nginx restart Lastly start the node server: cd /var/www/yourdomain/ && node app.js Now you should see "Hello World" at yourdomain.example One last note with to starting the node server: you should use some kind of monitoring system for the node daemon. There is an awesome tutorial on node with upstart and monit.
NGINX
5,009,324
1,138
I need to serve my app through my app server at 8080, and my static files from a directory without touching the app server. # app server on port 8080 # nginx listens on port 8123 server { listen 8123; access_log off; location /static/ { # root /var/www/app/static/; alias /var/www/app/static/; autoindex off; } location / { proxy_pass http://127.0.0.1:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Now, with this config, everything is working fine. Note that the root directive is commented out. If I activate root and deactivate the alias, it stops working. However, when I remove the trailing /static/ from root, it starts working again. Can someone explain what's going on?
There is a very important difference between the root and the alias directives. This difference exists in the way the path specified in the root or the alias is processed. root the location part is appended to root part final path = root + location alias the location part is replaced by the alias part final path = alias To illustrate: Let's say we have the config location /static/ { root /var/www/app/static/; autoindex off; } In this case the final path that Nginx will derive will be /var/www/app/static/static This is going to return 404 since there is no static/ within static/ This is because the location part is appended to the path specified in the root. Hence, with root, the correct way is location /static/ { root /var/www/app/; autoindex off; } On the other hand, with alias, the location part gets dropped. So for the config location /static/ { alias /var/www/app/static/; autoindex off; ↑ } | pay attention to this trailing slash the final path will correctly be formed as /var/www/app/static In a way this makes sense. The alias just lets you define a new path to represent an existing "real" path. The location part is that new path, and so it gets replaced with the real path. Think of it as a symlink. Root, on the other hand is not a new path, it contains some information that has to be collated with some other info to make the final path. And so, the location part is used, not dropped. The case for trailing slash in alias There is no definitive guideline about whether a trailing slash is mandatory per Nginx documentation, but a common observation by people here and elsewhere seems to indicate that it is. A few more places have discussed this, not conclusively though. https://serverfault.com/questions/376162/how-can-i-create-a-location-in-nginx-that-works-with-and-without-a-trailing-slas https://serverfault.com/questions/375602/why-is-my-nginx-alias-not-working
NGINX
10,631,933
655
I am using nginx on Rackspace cloud following a tutorial and having searched the net and so far can't get this sorted. I want www.mysite.example to go to mysite.example as normal in .htaccess for SEO and other reasons. My /etc/nginx/sites-available/www.example.com.vhost config: server { listen 80; server_name www.example.com example.com; root /var/www/www.example.com/web; if ($http_host != "www.example.com") { rewrite ^ http://example.com$request_uri permanent; } I have also tried server { listen 80; server_name example.com; root /var/www/www.example.com/web; if ($http_host != "www.example.com") { rewrite ^ http://example.com$request_uri permanent; } I also tried. Both the second attempts give redirect loop errors. if ($host = 'www.example.com' ) { rewrite ^ http://example.com$uri permanent; } My DNS is setup as standard: site.example 192.192.6.8 A type at 300 seconds www.site.example 192.192.6.8 A type at 300 seconds (example IPs and folders have been used for examples and to help people in future). I use Ubuntu 11.
HTTP Solution From the documentation, "the right way is to define a separate server for example.org": server { listen 80; server_name example.com; return 301 http://www.example.com$request_uri; } server { listen 80; server_name www.example.com; ... } HTTPS Solution For those who want a solution including https://... server { listen 80; server_name www.domain.example; # $scheme will get the http protocol # and 301 is best practice for tablet, phone, desktop and seo return 301 $scheme://domain.example$request_uri; } server { listen 80; server_name domain.example; # here goes the rest of your config file # example location / { rewrite ^/cp/login?$ /cp/login.php last; # etc etc... } } Note: I have not originally included https:// in my solution since we use loadbalancers and our https:// server is a high-traffic SSL payment server: we do not mix https:// and http://. To check the Nginx version, use nginx -v. Strip www from URL with Nginx redirect server { server_name www.domain.example; rewrite ^(.*) http://domain.example$1 permanent; } server { server_name domain.example; #The rest of your configuration goes here# } So you need to have TWO server codes. Add the www to the URL with Nginx redirect If what you need is the opposite, to redirect from domain.example to www.domain.example, you can use this: server { server_name domain.example; rewrite ^(.*) http://www.domain.example$1 permanent; } server { server_name www.domain.example; #The rest of your configuration goes here# } As you can imagine, this is just the opposite and works the same way the first example. This way, you don't get SEO marks down, as it is complete perm redirect and move. The no WWW is forced and the directory shown! Some of my code shown below for a better view: server { server_name www.google.com; rewrite ^(.*) http://google.com$1 permanent; } server { listen 80; server_name google.com; index index.php index.html; #### # now pull the site from one directory # root /var/www/www.google.com/web; # done # location = /favicon.ico { log_not_found off; access_log off; } }
NGINX
7,947,030
585
I'm using Django with FastCGI + nginx. Where are the logs (errors) stored in this case?
Errors are stored in the nginx log file. You can specify it in the root of the nginx configuration file: error_log /var/log/nginx/nginx_error.log warn; On Mac OS X with Homebrew, the log file was found by default at the following location: /usr/local/var/log/nginx
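If you're not sure which path a given installation actually uses, a quick check (a sketch; assumes a reasonably recent nginx where the -T flag is available and the binary is on your PATH) is to ask nginx itself:

    # show the error_log directives in the effective configuration
    nginx -T 2>/dev/null | grep error_log
    # show the compiled-in default path
    nginx -V 2>&1 | grep -o 'error-log-path=[^ ]*'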
NGINX
1,706,111
458
I am configuring a Django project with Nginx and Gunicorn. While accessing my Gunicorn port (gunicorn mysite.wsgi:application --bind=127.0.0.1:8001) through the Nginx server, I am getting the following error in my error log file: 2014/05/30 11:59:42 [crit] 4075#0: *6 connect() to 127.0.0.1:8001 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "localhost:8080" Below is the content of my nginx.conf file: server { listen 8080; server_name localhost; access_log /var/log/nginx/example.log; error_log /var/log/nginx/example.error.log; location / { proxy_pass http://127.0.0.1:8001; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; } } In the HTML page I am getting 502 Bad Gateway. What am I doing wrong?
Disclaimer Make sure there are no security implications for your use-case before running this. Answer I had a similar issue getting Fedora 20, Nginx, Node.js, and Ghost (blog) to work. It turns out my issue was due to SELinux. This should solve the problem: setsebool -P httpd_can_network_connect 1 Details I checked for errors in the SELinux logs: sudo cat /var/log/audit/audit.log | grep nginx | grep denied And found that running the following commands fixed my issue: sudo cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx sudo semodule -i mynginx.pp Option #2 (probably more secure) setsebool -P httpd_can_network_relay 1 https://security.stackexchange.com/questions/152358/difference-between-selinux-booleans-httpd-can-network-relay-and-httpd-can-net References http://blog.frag-gustav.de/2013/07/21/nginx-selinux-me-mad/ https://wiki.gentoo.org/wiki/SELinux/Tutorials/Where_to_find_SELinux_permission_denial_details http://wiki.gentoo.org/wiki/SELinux/Tutorials/Managing_network_port_labels
NGINX
23,948,527
416
All of a sudden I am getting the below nginx error * Restarting nginx * Stopping nginx nginx ...done. * Starting nginx nginx nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] bind() to [::]:80 failed (98: Address already in use) nginx: [emerg] still could not bind() ...done. ...done. If I run lsof -i :80 or sudo fuser -k 80/tcp I get nothing. Nothing on port 80 Then I run the below: sudo netstat -pan | grep ":80" tcp 0 0 127.0.0.1:8070 0.0.0.0:* LISTEN 15056/uwsgi tcp 0 0 10.170.35.97:39567 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39564 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39584 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39566 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39571 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39580 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39562 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39582 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39586 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39575 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39579 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39560 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39587 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39591 10.158.58.13:8080 TIME_WAIT - tcp 0 0 10.170.35.97:39589 10.158.58.13:8080 TIME_WAIT - I am stumped. How do I debug this? I am using uwsgi with a proxy pass on port 8070. uwsgi is running. Nginx is not. I am using ubuntu 12.4 Below are the relevant portions of my nginx conf file upstream uwsgi_frontend { server 127.0.0.1:8070; } server { listen 80; server_name 127.0.0.1; location = /favicon.ico { log_not_found off; } location / { include uwsgi_params; uwsgi_buffering off; uwsgi_pass 127.0.0.1:8070; } } Here is how I install nginx on ubuntu 12.04 nginx=stable;add-apt-repository ppa:nginx/$nginx; apt-get update apt get install nginx-full
I fixed this by running: sudo apachectl stop It turns out apache was running in the background and prevented nginx from starting on the desired port. On Ubuntu, run: sudo /etc/init.d/apache2 stop
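More generally, when it isn't obvious what is holding the port, you can ask the kernel which process owns the listening socket; note that grep ":80" as used in the question also matches :8070 and :8080, so anchoring on the exact port helps:

    # list listeners on TCP port 80 with their PID/program name
    sudo ss -tlnp | grep ':80 '
    # or, with lsof, covering both IPv4 and IPv6 listeners
    sudo lsof -nP -iTCP:80 -sTCP:LISTEN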
NGINX
14,972,792
404
I have worked with Apache before, so I am aware that the default public web root is typically /var/www/. I recently started working with nginx, but I can't seem to find the default public web root. Where can I find the default public web root for nginx?
If you installed on Ubuntu using apt-get, try /usr/share/nginx/www. EDIT: On more recent versions the path has changed to /usr/share/nginx/html. 2019 EDIT: You might also try /var/www/html/index.nginx-debian.html.
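Rather than guessing, you can also read the root directive from whatever configuration your package installed; a rough sketch (directory layout varies by distro):

    grep -R "root" /etc/nginx/sites-enabled/ /etc/nginx/conf.d/ 2>/dev/null
    # or dump the full effective config (nginx >= 1.9.2)
    nginx -T 2>/dev/null | grep "root"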
NGINX
10,674,867
403
I am getting these kind of errors: 2014/05/24 11:49:06 [error] 8376#0: *54031 upstream sent too big header while reading response header from upstream, client: 107.21.193.210, server: aamjanata.com, request: "GET /the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https://aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20https:/aamjanata.com/the-brainwash-chronicles-sponsored-by-gujarat-government/,%20ht Always it is the same. A url repeated over and over with comma separating. Can't figure out what is causing this. Anyone have an idea? Update: Another error: http request count is zero while sending response to client Here is the config. There are other irrelevant things, but this part was added/edited fastcgi_cache_path /var/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m; fastcgi_cache_key "$scheme$request_method$host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; # Upstream to abstract backend connection(s) for PHP. 
upstream php { #this should match value of "listen" directive in php-fpm pool server unix:/var/run/php5-fpm.sock; } And then in the server block: set $skip_cache 0; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } # Don't cache uris containing the following segments if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } location / { # This is cool because no php is touched for static content. # include the "?$args" part so non-default permalinks doesn't break when using query string try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass php; fastcgi_read_timeout 3000; fastcgi_cache_bypass $skip_cache; fastcgi_no_cache $skip_cache; fastcgi_cache WORDPRESS; fastcgi_cache_valid 60m; } location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; }`
Add the following to your conf file fastcgi_buffers 16 16k; fastcgi_buffer_size 32k;
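A minimal sketch of where these directives usually go — either in the http block, or in the location that contains your fastcgi_pass; the sizes are only a starting point and should be tuned to your typical response header size:
http {
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    # ...
}
Then test and reload: sudo nginx -t && sudo nginx -s reload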
NGINX
23,844,761
348
What order do location directives fire in?
From the HTTP core module docs: Directives with the "=" prefix that match the query exactly. If found, searching stops. All remaining directives with conventional strings. If this match used the "^~" prefix, searching stops. Regular expressions, in the order they are defined in the configuration file. If #3 yielded a match, that result is used. Otherwise, the match from #2 is used. Example from the documentation: location = / { # matches the query / only. [ configuration A ] } location / { # matches any query, since all queries begin with /, but regular # expressions and any longer conventional blocks will be # matched first. [ configuration B ] } location /documents/ { # matches any query beginning with /documents/ and continues searching, # so regular expressions will be checked. This will be matched only if # regular expressions don't find a match. [ configuration C ] } location ^~ /images/ { # matches any query beginning with /images/ and halts searching, # so regular expressions will not be checked. [ configuration D ] } location ~* \.(gif|jpg|jpeg)$ { # matches any request ending in gif, jpg, or jpeg. However, all # requests to the /images/ directory will be handled by # Configuration D. [ configuration E ] } If it's still confusing, here's a longer explanation.
NGINX
5,238,377
331
I update nginx to 1.4.7 and php to 5.5.12, After that I got the 502 error. Before I update everything works fine. nginx-error.log 2014/05/03 13:27:41 [crit] 4202#0: *1 connect() to unix:/var/run/php5-fpm.sock failed (13: Permission denied) while connecting to upstream, client: xx.xxx.xx.xx, server: localhost, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "xx.xx.xx.xx" nginx.conf user www www; worker_processes 1; location / { root /usr/home/user/public_html; index index.php index.html index.htm; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/home/user/public_html$fastcgi_script_name; include fastcgi_params; }
I had a similar error after a php update. PHP fixed a security issue where the socket file was created with read/write permission for "other" users. Open /etc/php5/fpm/pool.d/www.conf or /etc/php/7.0/fpm/pool.d/www.conf, depending on your version. Uncomment all permission lines, like: listen.owner = www-data listen.group = www-data listen.mode = 0660 Restart fpm - sudo service php5-fpm restart or sudo service php7.0-fpm restart Note: if your webserver runs as a user other than www-data, you will need to update the www.conf file accordingly
NGINX
23,443,398
328
I am trying to deploy nginx on kubernetes, kubernetes version is v1.5.2, I have deployed nginx with 3 replica, YAML file is below, apiVersion: extensions/v1beta1 kind: Deployment metadata: name: deployment-example spec: replicas: 3 revisionHistoryLimit: 2 template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.10 ports: - containerPort: 80 and now I want to expose its port 80 on port 30062 of node, for that I created a service below, kind: Service apiVersion: v1 metadata: name: nginx-ils-service spec: ports: - name: http port: 80 nodePort: 30062 selector: app: nginx type: LoadBalancer this service is working good as it should be, but it is showing as pending not only on kubernetes dashboard also on terminal.
It looks like you are using a custom Kubernetes Cluster (using minikube, kubeadm or the like). In this case, there is no LoadBalancer integrated (unlike AWS or Google Cloud). With this default setup, you can only use NodePort or an Ingress Controller. With the Ingress Controller you can setup a domain name which maps to your pod; you don't need to give your Service the LoadBalancer type if you use an Ingress Controller.
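If you just need the pods reachable on a node port (no cloud load balancer), a minimal sketch of the same Service from the question with the type switched to NodePort (name, selector and ports taken from the question):
kind: Service
apiVersion: v1
metadata:
  name: nginx-ils-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      nodePort: 30062
With this, the external IP stays empty by design and the service is reached on <node-ip>:30062.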
NGINX
44,110,876
323
I use nginx to as the front server, I have modified the CSS files, but nginx is still serving the old ones. I have tried to restart nginx, to no success and I have Googled, but not found a valid way to clear it. Some articles say we can just delete the cache directory: var/cache/nginx, but there is no such directory on my server. What should I do now?
I had the exact same problem - I was running my nginx in Virtualbox. I did not have caching turned on. But looks like sendfile was set to on in nginx.conf and that was causing the problem. @kolbyjack mentioned it above in the comments. When I turned off sendfile - it worked fine. This is because: Sendfile is used to ‘copy data between one file descriptor and another‘ and apparently has some real trouble when run in a virtual machine environment, or at least when run through Virtualbox. Turning this config off in nginx causes the static file to be served via a different method and your changes will be reflected immediately and without question It is related to this bug: https://www.virtualbox.org/ticket/12597
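The change itself is one directive in the http block of nginx.conf — only worth doing in a VM/shared-folder development setup; leave sendfile on in production:
http {
    sendfile off;
    # ...
}
followed by sudo nginx -s reload (or a restart).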
NGINX
6,236,078
317
I have 3 domain names and am trying to host all 3 sites on one server (a Digital Ocean droplet) using Nginx. mysite1.name mysite2.name mysite3.name Only 1 of them works. The other two result in 403 errors (in the same way). In my nginx error log, I see: [error] 13108#0: *1 directory index of "/usr/share/nginx/mysite2.name/live/" is forbidden. My sites-enabled config is: server { server_name www.mysite2.name; return 301 $scheme://mysite2.name$request_uri; } server { server_name mysite2.name; root /usr/share/nginx/mysite2.name/live/; index index.html index.htm index.php; location / { try_files $uri $uri/ /index.html index.php; } location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } } All 3 sites have nearly identical config files. Each site's files are in folders like /usr/share/nginx/mysite1.name/someFolder, and then /usr/share/nginx/mysite1.name/live is a symlink to that. (Same for mysite2 and mysite3.) I've looked at Nginx 403 forbidden for all files but that didn't help. Any ideas on what might be wrong?
If you have directory indexing off, and are having this problem, it's probably because the try_files you are using has a directory option: location / { try_files $uri $uri/ /index.html index.php; } ^ that is the issue Remove it and it should work: location / { try_files $uri /index.html index.php; } Why this happens TL;DR: This is caused because nginx will try to index the directory and be blocked by itself, throwing the error mentioned by the OP. try_files $uri $uri/ means, from the root directory, try the file pointed to by the uri; if that does not exist, try a directory instead (hence the /). When nginx accesses a directory, it tries to index it and return the list of files inside it to the browser/client; however, by default directory indexing is disabled, and so it returns the error "Nginx 403 error: directory index of [folder] is forbidden". Directory indexing is controlled by the autoindex option: https://nginx.org/en/docs/http/ngx_http_autoindex_module.html
NGINX
19,285,355
307
How can I have the same rule for two locations in an NGINX config? I have tried the following server { location /first/location/ | /second/location/ { .. .. } } but nginx reload threw this error: nginx: [emerg] invalid number of arguments in "location" directive
Try location ~ ^/(first/location|second/location)/ { ... } The ~ means to use a regular expression for the url. The ^ means to check from the first character. This will look for a / followed by either of the locations and then another /. Quoting from the docs, A regular expression is preceded with the tilde (~) for case-sensitive matching, or the tilde-asterisk (~*) for case-insensitive matching.
NGINX
35,320,674
276
I installed Nginx on Centos 6 and I am trying to set up virtual hosts. The problem I am having is that I can't seem to find the /etc/nginx/sites-available directory. Is there something I need to do in order to create it? I know Nginx is up and running because I can browse to it.
Well, I think nginx by itself doesn't have that in its setup, because the Ubuntu-maintained package does it as a convention to imitate Debian's apache setup. You could create it yourself if you wanted to emulate the same setup. Create /etc/nginx/sites-available and /etc/nginx/sites-enabled and then edit the http block inside /etc/nginx/nginx.conf and add this line include /etc/nginx/sites-enabled/*; Of course, all the files will be inside sites-available, and you'd create a symlink for them inside sites-enabled for those you want enabled.
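A rough sketch of the whole setup, assuming the usual Debian-style paths and a hypothetical site called mysite:
sudo mkdir -p /etc/nginx/sites-available /etc/nginx/sites-enabled
# put the server block in /etc/nginx/sites-available/mysite, then enable it:
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
# add this inside the http { } block of /etc/nginx/nginx.conf:
#   include /etc/nginx/sites-enabled/*;
sudo nginx -t && sudo service nginx reload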
NGINX
17,413,526
255
Is there a way to have the master process log to STDOUT STDERR instead of to a file? It seems that you can only pass a filepath to the access_log directive: access_log /var/log/nginx/access.log And the same goes for error_log: error_log /var/log/nginx/error.log I understand that this simply may not be a feature of nginx, I'd be interested in a concise solution that uses tail, for example. It is preferable though that it comes from the master process though because I am running nginx in the foreground.
Edit: it seems nginx now supports error_log stderr; as mentioned in Anon's answer. You can send the logs to /dev/stdout. In nginx.conf: daemon off; error_log /dev/stdout info; http { access_log /dev/stdout; ... } edit: You may need to run ln -sf /proc/self/fd /dev/ when running certain docker containers, then use /dev/fd/1 or /dev/fd/2
NGINX
22,541,333
241
In Nginx, what's the difference between variables $host and $http_host.
$host is a variable of the Core module. $host This variable is equal to the Host line in the request header, or to the name of the server processing the request if the Host header is not available. This variable may have a different value from $http_host in such cases: 1) when the Host input header is absent or has an empty value, $host equals the value of the server_name directive; 2) when the value of Host contains a port number, $host doesn't include that port number. $host's value is always lowercase since 0.8.17. $http_host is also a variable of the same module but you won't find it with that name because it is defined generically as $http_HEADER (ref). $http_HEADER The value of the HTTP request header HEADER when converted to lowercase and with 'dashes' converted to 'underscores', e.g. $http_user_agent, $http_referer...; Summarizing: $http_host always equals the HTTP_HOST request header. $host equals $http_host, lowercase and without the port number (if present), except when HTTP_HOST is absent or is an empty value. In that case, $host equals the value of the server_name directive of the server which processed the request.
NGINX
15,414,810
235
I have nginx installed with PHP-FPM on a CentOS 5 box, but am struggling to get it to serve any of my files - whether PHP or not. Nginx is running as www-data:www-data, and the default "Welcome to nginx on EPEL" site (owned by root:root with 644 permissions) loads fine. The nginx configuration file has an include directive for /etc/nginx/sites-enabled/*.conf, and I have a configuration file example.com.conf, thus: server { listen 80; Virtual Host Name server_name www.example.com example.com; location / { root /home/demo/sites/example.com/public_html; index index.php index.htm index.html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_FILENAME /home/demo/sites/example.com/public_html$fastcgi_script_name; include fastcgi_params; } } Despite public_html being owned by www-data:www-data with 2777 file permissions, this site fails to serve any content - [error] 4167#0: *4 open() "/home/demo/sites/example.com/public_html/index.html" failed (13: Permission denied), client: XX.XXX.XXX.XX, server: www.example.com, request: "GET /index.html HTTP/1.1", host: "www.example.com" I've found numerous other posts with users getting 403s from nginx, but most that I have seen involve either more complex setups with Ruby/Passenger (which in the past I've actually succeeded with) or are only receiving errors when the upstream PHP-FPM is involved, so they seem to be of little help. Have I done something silly here?
One permission requirement that is often overlooked is a user needs x permissions in every parent directory of a file to access that file. Check the permissions on /, /home, /home/demo, etc. for www-data x access. My guess is that /home is probably 770 and www-data can't chdir through it to get to any subdir. If it is, try chmod o+x /home (or whatever dir is denying the request). EDIT: To easily display all the permissions on a path, you can use namei -om /path/to/check
NGINX
6,795,350
230
I have reconfigured nginx but I can't get it to restart using the following configuration: server { listen 80; server_name www.example.com; return 301 $scheme://example.com$request_uri; } server { listen 80; server_name example.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; location /robots.txt { alias /path/to/robots.txt; access_log off; log_not_found off; } location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 30; proxy_read_timeout 30; proxy_pass http://127.0.0.1:8000; } location /static { expires 1M; alias /path/to/staticfiles; } } After running sudo nginx -c conf -t to test the config, the following error is returned. nginx: [emerg] "server" directive is not allowed here in /etc/nginx/sites-available/config:1 nginx: configuration file /etc/nginx/sites-available/config test failed
That is not an nginx configuration file. It is part of an nginx configuration file. The nginx configuration file (usually called nginx.conf) will look like: events { ... } http { ... server { ... } } The server block is enclosed within an http block. Often the configuration is distributed across multiple files, by using the include directives to pull in additional fragments (for example from the sites-enabled directory). Use sudo nginx -t to test the complete configuration file, which starts at nginx.conf and pulls in additional fragments using the include directive. See this document for more information.
NGINX
41,766,195
228
I am transitioning my react app from webpack-dev-server to nginx. When I go to the root url "localhost:8080/login" I simply get a 404 and in my nginx log I see that it is trying to get: my-nginx-container | 2017/05/12 21:07:01 [error] 6#6: *11 open() "/wwwroot/login" failed (2: No such file or directory), client: 172.20.0.1, server: , request: "GET /login HTTP/1.1", host: "localhost:8080" my-nginx-container | 172.20.0.1 - - [12/May/2017:21:07:01 +0000] "GET /login HTTP/1.1" 404 169 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:53.0) Gecko/20100101 Firefox/53.0" "-" Where should I look for a fix ? My router bit in react looks like this: render( <Provider store={store}> <MuiThemeProvider> <BrowserRouter history={history}> <div> Hello there p <Route path="/login" component={Login} /> <App> <Route path="/albums" component={Albums}/> <Photos> <Route path="/photos" component={SearchPhotos}/> </Photos> <div></div> <Catalogs> <Route path="/catalogs/list" component={CatalogList}/> <Route path="/catalogs/new" component={NewCatalog}/> <Route path="/catalogs/:id/photos/" component={CatalogPhotos}/> <Route path="/catalogs/:id/photos/:photoId/card" component={PhotoCard}/> </Catalogs> </App> </div> </BrowserRouter> </MuiThemeProvider> </Provider>, app); And my nginx file like this: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; server { listen 8080; root /wwwroot; location / { root /wwwroot; index index.html; try_files $uri $uri/ /wwwroot/index.html; } } } EDIT: I know that most of the setup works because when I go to localhost:8080 without being logged in I get the login page as well. this is not through a redirect to localhost:8080/login - it is some react code.
The location block in your nginx config should be: location / { try_files $uri /index.html; } The problem is that requests to the index.html file work, but you're not currently telling nginx to forward other requests to the index.html file too.
NGINX
43,951,720
224
How can I redirect mydomain.example and any subdomain *.mydomain.example to www.adifferentdomain.example using Nginx?
server_name supports suffix matches using .mydomain.example syntax: server { server_name .mydomain.example; rewrite ^ http://www.adifferentdomain.example$request_uri? permanent; } or on any version 0.9.1 or higher: server { server_name .mydomain.example; return 301 http://www.adifferentdomain.example$request_uri; }
NGINX
6,045,020
221
I am installing a website in a droplet (Digital Ocean). I have an issue for install NGINX with PHP properly. I did a tutorial https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-ubuntu-14-04 but when I try to run some .php files it's just downloading it... for example... http://5.101.99.123/info.php it's working but... If I go to the main http://5.101.99.123 it's downloading my index.php :/ Any idea? -rw-r--r-- 1 agitar_user www-data 418 Jul 31 18:27 index.php -rw-r--r-- 1 agitar_user www-data 21 Aug 31 11:20 info.php My /etc/nginx/sites-available/default server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /var/www/html; index index.html index.htm index.php; # Make site accessible from http://localhost/ server_name agitarycompartir.com; location ~ \.php$ { fastcgi_split_path_info ^(.+\.php)(/.+)$; ## NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } location / { try_files $uri $uri/ =404; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } ... Other "locations" are commented on... .
Try this: Edit /etc/nginx/sites-available/default Uncomment both listen lines to make Nginx listen on port 80 IPv4 and IPv6. listen 80; ## listen for ipv4; this line is default and implied listen [::]:80 default_server ipv6only=on; ## listen for ipv6 Leave server_name alone # Make site accessible (...) server_name localhost; Add index.php to the index line root /usr/share/nginx/www; index index.php index.html index.htm; Uncomment location ~ \.php$ {} # pass the PHP scripts to FastCGI server listening on (...) # location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+?\.php)(/.+)?$; # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # With php5-cgi alone: #fastcgi_pass 127.0.0.1:9000; # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } Edit /etc/php5/fpm/php.ini and make sure cgi.fix_pathinfo is set to 0 Restart Nginx and php5-fpm sudo service nginx restart && sudo service php5-fpm restart I just started using Linux a week ago, so I really hope to help you with this. I am using a nano text editor to edit the files. run apt-get install nano if you don't have it. Google it to know more.
NGINX
25,591,040
218
I'm so lost and new to building NGINX on my own but I want to be able to enable secure websockets without having an additional layer. I don't want to enable SSL on the websocket server itself but instead I want to use NGINX to add an SSL layer to the whole thing. Every web page out there says I can't do it, but I know I can! Thanks to whoever (myself) can show me how!
Just to note that nginx now has support for WebSockets as of release 1.3.13. Example of use: location /websocket/ { proxy_pass http://backend_host; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 86400; } You can also check the nginx changelog and the WebSocket proxying documentation.
NGINX
12,102,110
217
I am running Django, FastCGI, and Nginx. I am creating an api of sorts that where someone can send some data via XML which I will process and then return some status codes for each node that was sent over. The problem is that Nginx will throw a 504 Gateway Time-out if I take too long to process the XML -- I think longer than 60 seconds. So I would like to set up Nginx so that if any requests matching the location /api will not time out for 120 seconds. What setting will accomplish that. What I have so far is: # Handles all api calls location ^~ /api/ { proxy_read_timeout 120; proxy_connect_timeout 120; fastcgi_pass 127.0.0.1:8080; } Edit: What I have is not working :)
Proxy timeouts are, well, for proxies, not for FastCGI... The directives that affect FastCGI timeouts are client_header_timeout, client_body_timeout and send_timeout. Edit: Considering what's found on the nginx wiki, the send_timeout directive is responsible for setting the general timeout of the response (which was a bit misleading). For FastCGI there's fastcgi_read_timeout, which affects the FastCGI process response timeout.
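Applied to the location from the question, a minimal sketch would be the following (fastcgi_read_timeout is the directive that matters for a slow backend; 120 seconds matches the timeout asked for):
location ^~ /api/ {
    fastcgi_pass 127.0.0.1:8080;
    fastcgi_read_timeout 120;
}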
NGINX
561,946
216
I have Nginx installed on a Docker container, and am trying to run it like this: docker run -i -t -p 80:80 mydockerimage /usr/sbin/nginx The problem is that the way Nginx works, is that the initial process immediately spawns a master Nginx process and some workers, and then quits. Since Docker is only watching the PID of the original command, the container then halts. How do I prevent the container from halting? I need to be able to tell it to bind to the first child process, or stop Nginx's initial process from exiting.
To expand on Charles Duffy's answer, Nginx uses the daemon off directive to run in the foreground. If it's inconvenient to put this in the configuration file, we can specify it directly on the command line. This makes it easy to run in debug mode (foreground) and directly switch to running in production mode (background) by changing command line args. To run in foreground: nginx -g 'daemon off;' To run in background: nginx
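Applied to the command from the question, that would look something like this (image name and binary path taken from the question):
docker run -i -t -p 80:80 mydockerimage /usr/sbin/nginx -g 'daemon off;'
In a Dockerfile, the equivalent default command is CMD ["nginx", "-g", "daemon off;"], which is what the official nginx image ships with.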
NGINX
18,861,300
214
I am using the default config while adding the specific directory with nginx installed on my ubuntu 12.04 machine. server { #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 index index.html index.htm; # Make site accessible from http://localhost/ server_name localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html root /username/test/static; try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } ... ... } I just want a simple static nginx server to serve files out of that directory. However, checking the error.log I see 2014/09/10 16:55:16 [crit] 10808#0: *2 stat() "/username/test/static/index.html" failed (13: Permission denied), client:, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "domain" 2014/09/10 16:55:16 [error] 10808#0: *2 rewrite or internal redirection cycle while internally redirecting to "/index.html I've already done chown -R www-data:www-data on /username/test/static, I've set them to chmod 755. I don't know what else needs to be set.
Nginx operates within the directory, so if you can't cd to that directory from the nginx user then it will fail (as does the stat command in your log). Make sure the www-user can cd all the way to the /username/test/static. You can confirm that the stat will fail or succeed by running sudo -u www-data stat /username/test/static In your case probably the /username directory is the issue here. Usually www-data does not have permissions to cd to other users home directories. The best solution in that case would be to add www-data to username group: gpasswd -a www-data username and make sure that username group can enter all directories along the path: chmod g+x /username && chmod g+x /username/test && chmod g+x /username/test/static For your changes to work, restart nginx nginx -s reload
NGINX
25,774,999
213
I have a webapp on a NGinx server. I set gzip on in the conf file and now I'm trying to see if it works. YSlow says it's not, but 5 out of 6 websites that do the test say it is. How can I get a definite answer on this and why is there a difference in the results?
It looks like one possible answer is, unsurprisingly, curl: $ curl http://example.com/ --silent --write-out "%{size_download}\n" --output /dev/null 31032 $ curl http://example.com/ --silent -H "Accept-Encoding: gzip,deflate" --write-out "%{size_download}\n" --output /dev/null 2553 In the second case the client tells the server that it supports content encoding and you can see that the response was indeed shorter, compressed.
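You can also check the response headers directly instead of comparing sizes — if compression happened, the response carries Content-Encoding: gzip (example.com is a placeholder, as above):
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip,deflate' http://example.com/ | grep -i content-encoding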
NGINX
9,140,178
210
I'm running into "413 Request Entity Too Large" errors when posting files larger than 10MB to our API running on AWS Elastic Beanstalk. I've done quite a bit of research and believe that I need to up the client_max_body_size for Nginx, however I cannot seem to find any documentation on how to do this using Elastic Beanstalk. My guess is that it needs to be modified using an .ebextensions file. Anyone have thoughts on how I can up the limit? 10MB is pretty weak, there has to be a way to up this manually.
There are two methods you can take for this. Unfortunately some work for some EB application types and some work for others. Supported/recommended in AWS documentation For some application types, like Java SE, Go, Node.js, and maybe Ruby (it's not documented for Ruby, but all the other Nginx platforms seem to support this), Elasticbeanstalk has a built-in understanding of how to configure Nginx. To extend Elastic Beanstalk's default nginx configuration, add .conf configuration files to a folder named .ebextensions/nginx/conf.d/ in your application source bundle. Elastic Beanstalk's nginx configuration includes .conf files in this folder automatically. ~/workspace/my-app/ |-- .ebextensions | `-- nginx | `-- conf.d | `-- myconf.conf `-- web.jar Configuring the Reverse Proxy - Java SE To increase the maximum upload size specifically, then create a file at .ebextensions/nginx/conf.d/proxy.conf setting the max body size to whatever size you would prefer: client_max_body_size 50M; Create the Nginx config file directly For some other application types, after much research and hours of working with the wonderful AWS support team, I created a config file inside of .ebextensions to supplement the nginx config. This change allowed for a larger post body size. Inside of the .ebextensions directory, I created a file called 01_files.config with the following contents: files: "/etc/nginx/conf.d/proxy.conf" : mode: "000755" owner: root group: root content: | client_max_body_size 20M; This generates a proxy.conf file inside of the /etc/nginx/conf.d directory. The proxy.conf file simply contains the one liner client_max_body_size 20M; which does the trick. Note that for some platforms, this file will be created during the deploy, but then removed in a later deployment phase. You can specify other directives which are outlined in Nginx documentation. http://wiki.nginx.org/Configuration
NGINX
18,908,426
209
Working on a client's server where there are two different versions of nginx installed. I think one of them was installed with the brew package manager (its an osx box) and the other seems to have been compiled and installed with the nginx packaged Makefile. I searched for all of the nginx.conf files on the server, but none of these files define the parameters that nginx is actually using when I start it on the server. Where is the nginx.conf file that I'm unaware of?
Running nginx -t on the command line tests the configuration and prints the path of the configuration file it loaded, along with an error or success message.
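The output typically looks like this, with the path of the loaded config file on each line:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
If you have two separately installed binaries, run the test with each binary's full path; nginx -V usually also shows the compiled-in --conf-path among its configure arguments.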
NGINX
19,910,042
204
I'm in the process of setting up a new server. The web server of my choice is NGINX. I want to add the domain (e.g. example.com) as a virtual host. I already have two other domains in there and it works fine, but when I try to add the above mentioned domain and start the server it gives me: Job failed. See system journal and 'systemctl status' for details. I thought it was because of the dashes, so I tried just various other domains with and without hyphens, but no luck. Same error. what could be causing this? I also tried rebooting, I am really at a loss here. Any help would be greatly appreciated. I have played around a bit and found out, that, when I only put one domain in, it works. But when I put another domain in, it stops. Here is the output in status: [root@netzmelone nginx]# systemctl status nginx nginx.service - A high performance web server and a reverse proxy server Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled) Active: failed (Result: exit-code) since Sun, 16 Dec 2012 11:38:08 +0000; 7s ago Process: 14239 ExecStop=/usr/sbin/nginx -g pid /run/nginx.pid; -s quit (code=exited, status=1/FAILURE) Process: 14232 ExecStart=/usr/sbin/nginx -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=0/SUCCESS) Process: 14242 ExecStartPre=/usr/sbin/nginx -t -q -g pid /run/nginx.pid; daemon on; master_process on; (code=exited, status=1/FAILURE) Main PID: 14234 (code=exited, status=0/SUCCESS) CGroup: name=systemd:/system/nginx.service Dec 16 11:38:08 netzmelone nginx[14242]: nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32 Dec 16 11:38:08 netzmelone nginx[14242]: nginx: configuration file /etc/nginx/nginx.conf test failed
This is most likely happening because of the long domain name. You can fix this by adding server_names_hash_bucket_size 64; at the top of your http block (probably located in /etc/nginx/nginx.conf). I quote from the nginx documentation what to do when this error appears: In this case, the directive value should be increased to the next power of two. So in your case it should become 64. If you still get the same error, try increasing to 128 and further. Reference: https://nginx.org/en/docs/http/server_names.html#optimization
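A minimal sketch of the placement (http context of /etc/nginx/nginx.conf), followed by the usual test-and-reload:
http {
    server_names_hash_bucket_size 64;
    # ...
}
sudo nginx -t && sudo systemctl reload nginx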
NGINX
13,895,933
204
I want to increase the maximum file size that can be uploaded. After doing some research online, I found that you have to edit the file 'nginx.conf'. The only way I can currently access this file is by going through Putty and typing in the command: vi /etc/nginx/nginx.conf This will open the file but I have 2 questions now: How do I edit this file? I found online that you have to add the following line of code: client_max_body_size 8M; Where would I put this line of code in nginx.conf?
Add client_max_body_size Now that you are editing the file you need to add the line into the server block, like so; server { client_max_body_size 8M; //other lines... } If you are hosting multiple sites add it to the http context like so; http { client_max_body_size 8M; //other lines... } And also update the upload_max_filesize in your php.ini file so that you can upload files of the same size. Saving in Vi Once you are done you need to save, this can be done in vi with pressing esc key and typing :wq and returning. Restarting Nginx and PHP Now you need to restart nginx and php to reload the configs. This can be done using the following commands; sudo service nginx restart sudo service php5-fpm restart Or whatever your php service is called.
NGINX
26,717,013
202
Using nginx, I want to preserve the url, but actually load the same page no matter what. I will use the url with History.getState() to route the requests in my javascript app. It seems like it should be a simple thing to do? location / { rewrite (.*) base.html break; } works, but redirects the url? I still need the url, I just want to always use the same page.
I think this will do it for you: location / { try_files /base.html =404; }
NGINX
7,027,636
192
I'm not able to setup SSL. I've Googled and I found a few solutions but none of them worked for me. I need some help please... Here's the error I get when I attempt to restart nginx: root@s17925268:~# service nginx restart Restarting nginx: nginx: [emerg] SSL_CTX_use_PrivateKey_file("/etc/nginx/conf.d/ssl/ssl.key") failed (SSL: error:0B080074:x509 certificate routines:X509_check_private_key:key values mismatch) nginx: configuration file /etc/nginx/nginx.conf test failed My certificate is from StartSSL and is valid for 1 year. Here's what I tested: The certificate and private key has no trailing spaces. I'm not using the default server.key file. I checked the nginx.conf and the directives are pointing to the correct private key and certificate. I also checked the modulus, and I get a different modulus for both key and certificate. Thank you for your help. :)
Once you have established that they don't match, you still have a problem -- what to do about it. Often, the certificate may merely be assembled incorrectly. When a CA signs your certificate, they send you a block that looks something like -----BEGIN CERTIFICATE----- MIIAA-and-a-buncha-nonsense-that-is-your-certificate -and-a-buncha-nonsense-that-is-your-certificate-and- a-buncha-nonsense-that-is-your-certificate-and-a-bun cha-nonsense-that-is-your-certificate-and-a-buncha-n onsense-that-is-your-certificate-AA+ -----END CERTIFICATE----- they'll also send you a bundle (often two certificates) that represent their authority to grant you a certificate. this will look something like -----BEGIN CERTIFICATE----- MIICC-this-is-the-certificate-that-signed-your-request -this-is-the-certificate-that-signed-your-request-this -is-the-certificate-that-signed-your-request-this-is-t he-certificate-that-signed-your-request-this-is-the-ce rtificate-that-signed-your-request-A -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- MIICC-this-is-the-certificate-that-signed-for-that-one -this-is-the-certificate-that-signed-for-that-one-this -is-the-certificate-that-signed-for-that-one-this-is-t he-certificate-that-signed-for-that-one-this-is-the-ce rtificate-that-signed-for-that-one-this-is-the-certifi cate-that-signed-for-that-one-AA -----END CERTIFICATE----- except that unfortunately, they won't be so clearly labeled. a common practice, then, is to bundle these all up into one file -- your certificate, then the signing certificates. But since they aren't easily distinguished, it sometimes happens that someone accidentally puts them in the other order -- signing certs, then the final cert -- without noticing. In that case, your cert will not match your key. You can test to see what the cert thinks it represents by running openssl x509 -noout -text -in yourcert.cert Near the top, you should see "Subject:" and then stuff that looks like your data. If instead it lookslike your CA, your bundle is probably in the wrong order; you might try making a backup, and then moving the last cert to the beginning, hoping that is the one that is your cert. If this doesn't work, you might just have to get the cert re-issued. When I make a CSR, I like to clearly label what server it's for (instead of just ssl.key or server.key) and make a copy of it with the date in the name, like mydomain.20150306.key etc. that way they private and public key pairs are unlikely to get mixed up with another set.
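To confirm (or rule out) a genuine key/certificate mismatch before reshuffling the bundle, the usual check is to compare a hash of each file's modulus — the file names here are placeholders:
openssl x509 -noout -modulus -in yourdomain.crt | openssl md5
openssl rsa -noout -modulus -in yourdomain.key | openssl md5
If the two digests differ, no amount of nginx configuration will make the pair work.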
NGINX
26,191,463
187
upstream apache { server 127.0.0.1:8080; } server{ location ~* ^/service/(.*)$ { proxy_pass http://apache/$1; proxy_redirect off; } } The above snippet will redirect requests where the url includes the string "service" to another server, but it does not include query parameters.
From the proxy_pass documentation: A special case is using variables in the proxy_pass statement: The requested URL is not used and you are fully responsible to construct the target URL yourself. Since you're using $1 in the target, nginx relies on you to tell it exactly what to pass. You can fix this in two ways. First, stripping the beginning of the uri with a proxy_pass is trivial: location /service/ { # Note the trailing slash on the proxy_pass. # It tells nginx to replace /service/ with / when passing the request. proxy_pass http://apache/; } Or if you want to use the regex location, just include the args: location ~* ^/service/(.*) { proxy_pass http://apache/$1$is_args$args; }
NGINX
8,130,692
187
server { #listen 80; ## listen for ipv4; this line is default and implied #listen [::]:80 default ipv6only=on; ## listen for ipv6 #root /usr/share/nginx/www; root /home/ubuntu/node-login; # Make site accessible from server_name ec2-xx-xx-xxx-xxx.us-west-1.compute.amazonaws.com; location /{ proxy_pass http://127.0.0.1:8000/; proxy_redirect off; } } this results in nignx error [warn] conflicting server name "ec2..." on 0.0.0.0:80 ignored I dont understand, any explanation appreciated. Thanks.
I assume that you're running a Linux, and you're using gEdit to edit your files. In the /etc/nginx/sites-enabled, it may have left a temp file e.g. default~ (watch the ~). Depending on your editor, the file could be named .save or something like it. Just run $ ls -lah to see which files are unintended to be there and remove them (Thanks @Tisch for this). Delete this file, and it will solve your problem.
NGINX
11,426,087
185
I have been getting the nginx error: 413 Request Entity Too Large I have been able to update my client_max_body_size in the server section of my nginx.conf file to 20M and this has fixed the issue. However, what is the default nginx client_max_body_size?
The default value for client_max_body_size directive is 1 MiB. It can be set in http, server and location context — as in the most cases, this directive in a nested block takes precedence over the same directive in the ancestors blocks. Excerpt from the ngx_http_core_module documentation: Syntax: client_max_body_size size; Default: client_max_body_size 1m; Context: http, server, location Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of client request body size. Don't forget to reload configuration by nginx -s reload or service nginx reload commands prepending with sudo (if any).
NGINX
28,476,643
184
I have setup an nginx server with php5-fpm. When I try to load the site I get a blank page with no errors. Html pages are served fine but not php. I tried turning on display_errors in php.ini but no luck. php5-fpm.log is not producing any errors and neither is nginx. nginx.conf server { listen 80; root /home/mike/www/606club; index index.php index.html; server_name mikeglaz.com www.mikeglaz.com; error_log /var/log/nginx/error.log; location ~ \.php$ { #fastcgi_pass 127.0.0.1:9000; # With php5-fpm: fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; } } EDIT here's my nginx error log: 2013/03/15 03:52:55 [error] 1020#0: *55 open() "/home/mike/www/606club/robots.txt" failed (2: No such file or directory), client: 199.30.20.40, server: mikeglaz.com, request: "GET /robots.txt HTTP/1.1", host: "mikeglaz.com"
replace include fastcgi_params; with include fastcgi.conf; and remove fastcgi_param SCRIPT_FILENAME ... in nginx.conf
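For context, a sketch of why this works: the fastcgi.conf file shipped with nginx is essentially fastcgi_params plus one extra line that defines the script path for you, so a separate SCRIPT_FILENAME override becomes redundant:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;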
NGINX
15,423,500
183
I have an instance of nginx running which serves several websites. The first is a status message on the server's IP address. The second is an admin console on admin.domain.com. These work great. Now I'd like all other domain requests to go to a single index.php - I have loads of domains and subdomains and it's impractical to list them all in an nginx config. So far I've tried setting server_name to * but that failed as an invalid wildcard. *.* works until I add the other server blocks, then I guess it conflicts with them. Is there a way to run a catch-all server block in nginx after other sites have been defined? N.B. I'm not a spammer, these are genuine sites with useful content, they're just powered by the same CMS from a database!
Change listen option to this in your catch-all server block. (Add default_server) this will take all your non-defined connections (on the specified port). listen 80 default_server; if you want to push everything to index.php if the file or folder does not exist; try_files $uri /$uri /index.php; Per the docs, It can also be set explicitly which server should be default, with the **default_server** parameter in the listen directive
NGINX
9,454,764
179
In order to deal with the microservice architecture, it's often used alongside a Reverse Proxy (such as nginx or apache httpd) and for cross cutting concerns implementation API gateway pattern is used. Sometimes Reverse proxy does the work of API gateway. It will be good to see clear differences between these two approaches. It looks like the potential benefit of API gateway usage is invoking multiple microservices and aggregating the results. All other responsibilities of API gateway can be implemented using Reverse Proxy. Such as: Authentication (It can be done using nginx LUA scripts); Transport security. It itself Reverse Proxy task; Load balancing ... So based on this there are several questions: Does it make sense to use API gateway and Reverse proxy simultaneously (as example request -> API gateway -> reverse proxy(nginx) -> concrete microservice)? In what cases ? What are the other differences that can be implemented using API gateway and can't be implemented by Reverse proxy and vice versa?
It is easier to think about them if you realize they aren't mutually exclusive. Think of an API gateway as a specific type reverse proxy implementation. In regards to your questions, it is not uncommon to see both used in conjunction where the API gateway is treated as an application tier that sits behind a reverse proxy for load balancing and health checking. An example would be something like a WAF sandwich architecture in that your Web Application Firewall/API Gateway is sandwiched by reverse proxy tiers, one for the WAF itself and the other for the individual microservices it talks to. Regarding the differences, they are very similar. It's just nomenclature. As you take a basic reverse proxy setup and start bolting on more pieces like authentication, rate limiting, dynamic config updates, and service discovery, people are more likely to call that an API gateway.
NGINX
35,756,663
173
I have nginx up and running with a Ruby/Sinatra app and all is well. However, I'm now trying to have a second application running from the same server and I noticed something weird. First, here's my nginx.conf: pid /tmp/nginx.pid; error_log /tmp/nginx.error.log; events { worker_connections 1024; accept_mutex off; } http { default_type application/octet-stream; access_log /tmp/nginx.access.log combined; sendfile on; tcp_nopush on; tcp_nodelay off; gzip on; gzip_http_version 1.0; gzip_proxied any; gzip_min_length 500; gzip_disable "MSIE [1-6]\."; gzip_types text/plain text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml; upstream app { server unix:/var/www/app/tmp/sockets/unicorn.sock fail_timeout=0; } server { listen 80; client_max_body_size 4G; server_name FAKE.COM; keepalive_timeout 5; root /var/www/app/public; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if (!-f $request_filename) { proxy_pass http://app; break; } } error_page 500 502 503 504 /500.html; location = /500.html { root /var/www/app/public; } } } 68,0-1 B Notice how server_name is set to FAKE.COM yet the server is responding to all hosts that hit that server via other domain names. How can I make that particular server respond only to requests for FAKE.COM?
The first server block in the nginx config is the default for all requests that hit the server for which there is no specific server block. So in your config, assuming your real domain is REAL.COM, when a user types that in, it will resolve to your server, and since there is no server block for this setup, the server block for FAKE.COM, being the first server block (only server block in your case), will process that request. This is why proper Nginx configs have a specific server block for defaults before following with others for specific domains. # Default server server { return 404; } server { server_name domain_1; [...] } server { server_name domain_2; [...] } etc ** EDIT ** It seems some users are a bit confused by this example and think it is limited to a single conf file etc. Please note that the above is a simple example for the OP to develop as required. I personally use separate vhost conf files with this as so (CentOS/RHEL): http { [...] # Default server server { return 404; } # Other servers include /etc/nginx/conf.d/*.conf; } /etc/nginx/conf.d/ will contain domain_1.conf, domain_2.conf... domain_n.conf which will be included after the server block in the main nginx.conf file which will always be the first and will always be the default unless it is overridden it with the default_server directive elsewhere. The alphabetical order of the file names of the conf files for the other servers becomes irrelevant in this case. In addition, this arrangement gives a lot of flexibility in that it is possible to define multiple defaults. In my specific case, I have Apache listening on Port 8080 on the internal interface only and I proxy PHP and Perl scripts to Apache. However, I run two separate applications that both return links with ":8080" in the output html attached as they detect that Apache is not running on the standard Port 80 and try to "help" me out. This causes an issue in that the links become invalid as Apache cannot be reached from the external interface and the links should point at Port 80. I resolve this by creating a default server for Port 8080 to redirect such requests. http { [...] # Default server block for undefined domains server { listen 80; return 404; } # Default server block to redirect Port 8080 for all domains server { listen my.external.ip.addr:8080; return 301 http://$host$request_uri; } # Other servers include /etc/nginx/conf.d/*.conf; } As nothing in the regular server blocks listens on Port 8080, the redirect default server block transparently handles such requests by virtue of its position in nginx.conf. I actually have four of such server blocks and this is a simplified use case.
NGINX
9,824,328
169
There's an option to hide the version so it will display only nginx, but is there a way to hide that too so it will not show anything or change the header?
If you are using nginx to proxy a back-end application and want the back-end to advertise its own Server: header without nginx overwriting it, then you can go inside of your server {…} stanza and set: proxy_pass_header Server; That will convince nginx to leave that header alone and not rewrite the value set by the back-end.
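For the related case of hiding only the version string, the stock server_tokens directive is enough; removing or rewriting the Server header outright generally needs the third-party headers-more module, so treat the last two lines as a sketch that only applies if that module is compiled in (the replacement value is just an example):
server_tokens off;                   # stock nginx: sends just "nginx", no version
more_set_headers "Server: my-app";   # ngx_headers_more: replace the value
more_clear_headers "Server";         # ngx_headers_more: drop the header entirely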
NGINX
246,227
169
I have a docker with version 17.06.0-ce. When I trying to install NGINX using docker with command: docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest It shows that docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused \"rootfs_linux.go:57: mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs \\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\" at \\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type. If do not mount the nginx.conf file, everything is okay. So, how can I mount the configuration file?
This should no longer happen (since v2.2.0.0), see here If you are using Docker for Windows, this error can happen if you have recently changed your password. How to fix: First make sure to delete the broken container's volume docker rm -v <container_name> Update: The steps below may work without needing to delete volumes first. Open Docker Settings Go to the "Shared Drives" tab Click on the "Reset Credentials..." link on the bottom of the window Re-Share the drives you want to use with Docker You should be prompted to enter your username/password Click "Apply" Go to the "Reset" tab Click "Restart Docker" Re-create your containers/volumes Credit goes to BaranOrnarli on GitHub for the solution.
NGINX
45,972,812
167
I need to redirect every http://test.com request to http://www.test.com. How can this be done. In the server block I tried adding rewrite ^/(.*) http://www.test.com/$1 permanent; but in browser it says The page isn't redirecting properly Firefox has detected that the server is redirecting the request for this address in a way that will never complete. My server block looks like server { listen 80; server_name test.com; client_max_body_size 10M; client_body_buffer_size 128k; root /home/test/test/public; passenger_enabled on; rails_env production; #rewrite ^/(.*) http://www.test.com/$1 permanent; #rewrite ^(.*)$ $scheme://www.test.com$1; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } }
Best way to do what you want is to add another server block: server { #implemented by default, change if you need different ip or port #listen *:80 | *:8000; server_name test.com; return 301 $scheme://www.test.com$request_uri; } And edit your main server block server_name variable as following: server_name www.test.com; Important: New server block is the right way to do this, if is evil. You must use locations and servers instead of if if it's possible. Rewrite is sometimes evil too, so replaced it with return.
NGINX
10,294,481
157
I have recently started migrating to Docker 1.9 and Docker-Compose 1.5's networking features to replace using links. So far with links there were no problems with nginx connecting to my php5-fpm fastcgi server located in a different server in one group via docker-compose. Newly though when I run docker-compose --x-networking up my php-fpm, mongo and nginx containers boot up, however nginx quits straight away with [emerg] 1#1: host not found in upstream "waapi_php_1" in /etc/nginx/conf.d/default.conf:16 However, if I run the docker-compose command again while the php and mongo containers are running (nginx exited), nginx starts and works fine from then on. This is my docker-compose.yml file: nginx: image: nginx ports: - "42080:80" volumes: - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro php: build: config/docker/php ports: - "42022:22" volumes: - .:/var/www/html env_file: config/docker/php/.env.development mongo: image: mongo ports: - "42017:27017" volumes: - /var/mongodata/wa-api:/data/db command: --smallfiles This is my default.conf for nginx: server { listen 80; root /var/www/test; error_log /dev/stdout debug; access_log /dev/stdout; location / { # try to serve file directly, fallback to app.php try_files $uri /index.php$is_args$args; } location ~ ^/.+\.php(/|$) { # Referencing the php service host (Docker) fastcgi_pass waapi_php_1:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; # We must reference the document_root of the external server ourselves here. fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name; fastcgi_param HTTPS off; } } How can I get nginx to work with only a single docker-compose call?
This can be solved with the mentioned depends_on directive since it's implemented now (2016): version: '2' services: nginx: image: nginx ports: - "42080:80" volumes: - ./config/docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro depends_on: - php php: build: config/docker/php ports: - "42022:22" volumes: - .:/var/www/html env_file: config/docker/php/.env.development depends_on: - mongo mongo: image: mongo ports: - "42017:27017" volumes: - /var/mongodata/wa-api:/data/db command: --smallfiles Successfully tested with: $ docker-compose version docker-compose version 1.8.0, build f3628c7 Find more details in the documentation. There is also a very interesting article dedicated to this topic: Controlling startup order in Compose
NGINX
33,639,138
156
I have this in Nginx configuration files gzip_types text/plain text/html text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; but Nginx give error when starting up [warn]: duplicate MIME type "text/html" in /etc/nginx/nginx.conf:25 What is actually duplicate to text/html? Is it text/plain?
For the option gzip_types, the mime-type text/html is always included by default, so you don't need to specify it explicitly.
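In other words, the directive from the question can simply drop text/html — roughly:
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;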
NGINX
6,475,472
156
After upgrading NGINX to v1.15.2 I started getting this warning: nginx: [warn] the "ssl" directive is deprecated, use the "listen ... ssl" directive instead in /usr/local/etc/nginx/sites-enabled/confid-file-name:8 Where the 8th line is ssl on; how can I solve this?
Edit your listen statement from: listen 443; to listen 443 ssl; and comment out or delete : # ssl on; then check nginx -t again.
NGINX
51,703,109
155
I've always thought of upstream and downstream along the lines of an actual stream, where the flow of information is like water. So upstream is where water/data comes from (e.g. an HTTP request) and downstream is where it goes (e.g. the underlying system that services the request). I've been looking at API gateways recently and noticed that some of them used the inverse of this definition. I shrugged it off as an oddity at the time. I then discovered that nginx, which some API gateways are based on, also uses the terminology in the opposite way to what I expected. nginx calls the servers that it sends requests to "upstream servers", and presumably the incoming requests would therefore be "downstream clients". Conceptually it seems like nginx would be pushing the requests "uphill" if going to an "upstream server", which is totally counter-intuitive... Gravity is reversed in the land of reverse proxies and API gateways, apparently! I've seen other discussions talking about upstream / downstream representing dependencies between systems but for middleware or infrastructure components that sit between systems the idea of dependencies is a little looser, and I find it more helpful to think in terms of flow of information still - because THAT'S usually the source of your dependencies anyway. Have I got my understanding of the stream analogy fundamentally wrong or are these software components getting the concepts backwards?
In HTTP world, the "upstream server" term was introduced in the HTTP/1.0 specification, RFC 1945: 502 Bad Gateway The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request. Formal definition was added later, in RFC 2616: upstream/downstream Upstream and downstream describe the flow of a message: all messages flow from upstream to downstream. According to this definition: if you are looking at a request, then the client is upstream, and the server is downstream; in contrast, if you are looking at a response, then the client is downstream, and the server is upstream. At the same time, in HTTP most of the data flow is not for requests, but for responses. So, if you'll consider flow of responses, then the "upstream server" term sounds pretty reasonable and logical. And the term is again used in the 502 response code description (it is identical to HTTP/1.0 one), as well as some other places. The same logic can be also seen in terms "downloading" and "uploading" in natural language. Most of the data flow is from servers to clients, and that's why "downloading" means loading something from a server to a client, and "uploading" - from a client to a server.
NGINX
32,364,579
150
In my error log I get [emerg] 10619#0: a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/mysite.com:4 On line 4 I have: server_name mysite.com www.mysite.com; Any suggestions?
You likely have other files (such as the default configuration) located in /etc/nginx/sites-enabled that needs to be removed. This issue is caused by a repeat of the default_server parameter supplied to one or more listen directives in your files. You'll likely find this conflicting directive reads something similar to: listen 80 default_server; As the nginx core module documentation for listen states: The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair. If none of the directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair. This means that there must be another file or server block defined in your configuration with default_server set for port 80. nginx is encountering that first before your mysite.com file so try removing or adjusting that other configuration. If you are struggling to find where these directives and parameters are set, try a search like so: grep -R default_server /etc/nginx
NGINX
30,973,774
150
I'm running a Sinatra app behind passenger/nginx. I'm trying to get it to respond to both http and https calls. The problem is, when both are defined in the server block https calls are responded to normally but http yields a 400 "The plain HTTP request was sent to HTTPS port" error. This is for a static page so I'm guessing Sinatra has nothing to do with this. Any ideas on how to fix this? Here's the server block: server { listen 80; listen 443 ssl; server_name localhost; root /home/myhome/app/public; passenger_enabled on; ssl on; ssl_certificate /opt/nginx/ssl_keys/ssl.crt; ssl_certificate_key /opt/nginx/ssl_keys/ssl.key; ssl_protocols SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; location /static { root /home/myhome/app/public; index index.html index.htm index.php; } error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 /500.html; access_log /home/myhome/app/logs/access.log; error_log /home/myhome/app/logs/error.log; }
I ran into a similar problem. It works on one server and does not on another server with the same Nginx configuration. I found the solution, answered by Igor here: http://forum.nginx.org/read.php?2,1612,1627#msg-1627 Yes. Or you may combine SSL/non-SSL servers in one server: server { listen 80; listen 443 default ssl; # ssl on - remember to comment this out }
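Applied to the server block in the question, the relevant lines would look roughly like this sketch (certificate paths are taken from the question; everything else in the block stays as it was):

server {
    listen 80;
    listen 443 ssl;        # mark only the 443 listener as SSL
    server_name localhost;
    # ssl on;              # removed - this is what pushed plain HTTP requests onto the HTTPS handler
    ssl_certificate     /opt/nginx/ssl_keys/ssl.crt;
    ssl_certificate_key /opt/nginx/ssl_keys/ssl.key;
    # ... remaining directives unchanged
}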
NGINX
8,768,946
150
I am trying to link 2 separate containers: nginx:latest php:fpm The problem is that php scripts do not work. Perhaps the php-fpm configuration is incorrect. Here is the source code, which is in my repository. Here is the file docker-compose.yml: nginx: build: . ports: - "80:80" - "443:443" volumes: - ./:/var/www/test/ links: - fpm fpm: image: php:fpm ports: - "9000:9000" and Dockerfile which I used to build a custom image based on the nginx one: FROM nginx # Change Nginx config here... RUN rm /etc/nginx/conf.d/default.conf ADD ./default.conf /etc/nginx/conf.d/ Lastly, here is my custom Nginx virtual host config: server { listen 80; server_name localhost; root /var/www/test; error_log /var/log/nginx/localhost.error.log; access_log /var/log/nginx/localhost.access.log; location / { # try to serve file directly, fallback to app.php try_files $uri /index.php$is_args$args; } location ~ ^/.+\.php(/|$) { fastcgi_pass 192.168.59.103:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } } Could anybody help me configure these containers correctly to execute php scripts? P.S. I run containers via docker-composer like this: docker-compose up from the project root directory.
I know it is kind of an old post, but I've had the same problem and couldn't understand why your code didn't work. After a LOT of tests I've found out why. It seems like fpm receives the full path from nginx and tries to find the files in the fpm container, so it must be exactly the same as server.root in the nginx config, even if it doesn't exist in the nginx container. To demonstrate: docker-compose.yml nginx: build: . ports: - "80:80" links: - fpm fpm: image: php:fpm ports: - ":9000" # seems like fpm receives the full path from nginx # and tries to find the files in this dock, so it must # be the same as nginx.root volumes: - ./:/complex/path/to/files/ /etc/nginx/conf.d/default.conf server { listen 80; # this path MUST be exactly as docker-compose.fpm.volumes, # even if it doesn't exist in this dock. root /complex/path/to/files; location / { try_files $uri /index.php$is_args$args; } location ~ ^/.+\.php(/|$) { fastcgi_pass fpm:9000; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Dockerfile FROM nginx:latest COPY ./default.conf /etc/nginx/conf.d/
NGINX
29,905,953
148
I would like to know the difference between Nginx and Unicorn. As far as I understand, Nginx is a web server while Unicorn is a Ruby HTTP server. Since both Nginx and Unicorn can handle HTTP requests, what is the need to use the combination of Nginx and Unicorn for RoR applications?
Nginx is a pure web server that's intended for serving up static content and/or redirecting the request to another socket to handle the request. Unicorn is a Rack web server and only intended to host a 'Rack App' which is usually generating dynamic content. Rack apps can also serve up static content but it's less efficient than most other traditional web servers. Most RoR setups use a combination of both traditional web servers and Rack servers to apply the best of both of their capabilities. Nginx is incredibly fast at request redirection through proxy balancing and serving up static content. Unicorn is quite capable of processing HTTP headers and balancing inbound requests to Ruby for processing.
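A typical pairing looks roughly like the sketch below: Unicorn listens on a local Unix socket, and nginx serves static files itself while forwarding everything else to the Rack app (the socket path and document root are placeholders):

upstream unicorn_app {
    # Unicorn is commonly configured to listen on a Unix domain socket
    server unix:/path/to/app/shared/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    root /path/to/app/public;          # nginx serves static assets directly

    location / {
        # try the filesystem first, then hand the request to Unicorn
        try_files $uri @unicorn;
    }

    location @unicorn {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://unicorn_app;
    }
}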
NGINX
8,739,871
147
I would like to host 2 different domains in the same server using Nginx. I redirected both domains to this host via @ property. Although I configure 2 different server blocks, whenever I try to access second domain, it redirects to first one. Here is my config. server { listen `www.domain1.example:80`; access_log `/var/log/nginx/host.domain1.access.log` main; root `/var/www/domain1`; server_name `www.domain1.example`; location ~ \.php$ { # Security: must set cgi.fixpathinfo to 0 in `php.ini`! fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass `127.0.0.1:9000`; fastcgi_index `index.php`; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include `/etc/nginx/fastcgi_params`; } } server { listen www.domain2.example:80; access_log /var/log/nginx/host.domain2.access.log main; root /var/www/domain2; server_name www.domain2.example; location ~ \.php$ { # Security: must set cgi.fixpathinfo to 0 in php.ini! fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include /etc/nginx/fastcgi_params; } } How can I fix this?
Your "listen" directives are wrong. See this page: http://nginx.org/en/docs/http/server_names.html. They should be server { listen 80; server_name www.domain1.example; root /var/www/domain1; } server { listen 80; server_name www.domain2.example; root /var/www/domain2; } Note, I have only included the relevant lines. Everything else looked okay but I just deleted it for clarity. To test it you might want to try serving a text file from each server first before actually serving PHP. That's why I left the 'root' directive in there.
NGINX
11,773,544
145
I'm looking at the WSGI specification and I'm trying to figure out how servers like uWSGI fit into the picture. I understand the point of the WSGI spec is to separate web servers like nginx from web applications like something you'd write using Flask. What I don't understand is what uWSGI is for. Why can't nginx directly call my Flask application? Can't flask speak WSGI directly to it? Why does uWSGI need to get in between them? There are two sides in the WSGI spec: the server and the web app. Which side is uWSGI on?
Okay, I think I get this now. Why can't nginx directly call my Flask application? Because nginx doesn't support the WSGI spec. Technically nginx could implement the WSGI spec if they wanted, they just haven't. That being the case, we need a web server that does implement the spec, which is what the uWSGI server is for. Note that uWSGI is a full fledged http server that can and does work well on its own. I've used it in this capacity several times and it works great. If you need super high throughput for static content, then you have the option of sticking nginx in front of your uWSGI server. When you do, they will communicate over a low level protocol known as uwsgi. "What the what?! Another thing called uwsgi?!" you ask. Yeah, it's confusing. When you reference uWSGI you are talking about an http server. When you talk about uwsgi (all lowercase) you are talking about a binary protocol that the uWSGI server uses to talk to other servers like nginx. They picked a bad name on this one. For anyone who is interested, I wrote a blog article about it with more specifics, a bit of history, and some examples.
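To make the split concrete, here is a minimal sketch of nginx talking to a uWSGI server over the uwsgi protocol (the socket address and the myapp:app module name are placeholders for illustration):

# nginx side: forward requests using the uwsgi protocol
location / {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3031;    # where the uWSGI server is listening
}

# uWSGI side: host the WSGI/Flask app and expose a uwsgi socket
# uwsgi --socket 127.0.0.1:3031 --module myapp:app --processes 4

If you skip nginx entirely, uWSGI can also serve plain HTTP itself by using --http instead of --socket.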
NGINX
38,601,440
143
I have been studying Node.js recently and came across some material on writing simple Node.js based servers. For example, the following. var express = require("express"), http = require("http"), app; // Create our Express-powered HTTP server // and have it listen on port 3000 app = express(); http.createServer(app).listen(3000); // set up our routes app.get("/hello", function (req, res) { res.send("Hello World!"); }); app.get("/goodbye", function (req, res) { res.send("Goodbye World!"); }); Now, although I seem to understand what's going on in the code, I am slightly confused by the terminology. When I hear the term server, I think about stuff like Apache or Nginx. I am used to thinking about them as being like a container that can hold my web applications. How does Node.js server differ from Nginx/Apache server? Isn't it true that a Node.js based server (i.e. code) can still be placed within something like Nginx to run? So why are both called "servers"?
It's a server, yes. A node.js web application is a full-fledged web server just like Nginx or Apache. You can indeed serve your node.js application without using any other web server. Just change your code to: app = express(); http.createServer(app).listen(80); // serve HTTP directly Indeed, some projects use node.js as the front-end load balancer for other servers (including Apache). Note that node.js is not the only development stack to do this. Web development frameworks in Go, Java and Swift also do this. Why? In the beginning was the CGI. CGI was fine and worked OK. Apache would get a request, find that the url needs to execute a CGI app, execute that CGI app and pass data as environment variables, read the stdout and serve the data back to the browser. The problem is that it is slow. It's OK when the CGI app was a small statically compiled C program but a group of small statically compiled C programs became hard to maintain. So people started writing in scripting languages. Then that became hard to maintain and people started developing object oriented MVC frameworks. Now we started having trouble - EVERY REQUEST must compile all those classes and create all those objects just to serve some HTML, even if there's nothing dynamic to serve (because the framework needs to figure out that there's nothing dynamic to serve). What if we don't need to create all those objects every request? That was what people thought. And from trying to solve that problem came several strategies. One of the earliest was to embed interpreters directly in web servers like mod_php in Apache. Compiled classes and objects can be stored in global variables and therefore cached. Another strategy was to do pre-compilation. And yet another strategy was to run the application as a regular server process and talk with the web server using a custom protocol like FastCGI. Then some developers started simply using HTTP as their app->server protocol. In effect, the app is also an HTTP server. The advantage of this is that you don't need to implement any new, possibly buggy, possibly not tested protocol and you can debug your app directly using a web browser (or also commonly, curl). And you don't need a modified web server to support your app, just any web server that can do reverse proxying or redirects. Why use Apache/Nginx? When you serve a node.js app note that you are the author of your own web server. Any potential bug in your app is a directly exploitable bug on the internet. Some people are (justifiably) not comfortable with this. Adding a layer of Apache or Nginx in front of your node.js app means you have a battle-tested, security-hardened piece of software on the live internet as an interface to your app. It adds a tiny bit of latency (the reverse proxying) but most consider it worth it. This used to be the standard advice in the early days of node.js. But these days there are also sites and web services that exposes node.js directly to the internet. The http.Server module is now fairly well battle-tested on the internet to be trusted.
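For reference, the reverse-proxy layer in front of the node.js app is only a few lines — roughly this sketch (port 3000 matches the Express example above; the server name is a placeholder):

server {
    listen 80;
    server_name example.com;                  # placeholder

    location / {
        proxy_pass http://127.0.0.1:3000;     # the node.js/Express server
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}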
NGINX
38,821,947
142
I am using nginx and node server to serve update requests. I get a gateway timeout when I request an update on large data. I saw this error from the nginx error logs : 2016/04/07 00:46:04 [error] 28599#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.77, server: gis.oneconcern.com, request: "GET /update_mbtiles/atlas19891018000415 HTTP/1.1", upstream: "http://127.0.0.1:7777/update_mbtiles/atlas19891018000415", host: "gis.oneconcern.com" I googled for the error and tried everything I could, but I still get the error. My nginx conf has these proxy settings: ## # Proxy settings ## proxy_connect_timeout 1000; proxy_send_timeout 1000; proxy_read_timeout 1000; send_timeout 1000; This is how my server is configured server { listen 80; server_name gis.oneconcern.com; access_log /home/ubuntu/Tilelive-Server/logs/nginx_access.log; error_log /home/ubuntu/Tilelive-Server/logs/nginx_error.log; large_client_header_buffers 8 32k; location / { proxy_pass http://127.0.0.1:7777; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $http_host; proxy_cache_bypass $http_upgrade; } location /faults { proxy_pass http://127.0.0.1:8888; proxy_http_version 1.1; proxy_buffers 8 64k; proxy_buffer_size 128k; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } } I am using a nodejs backend to serve the requests on an aws server. The gateway error shows up only when the update takes a long time (about 3-4 minutes). I do not get any error for smaller updates. Any help will be highly appreciated. Node js code : app.get("/update_mbtiles/:earthquake", function(req, res){ var earthquake = req.params.earthquake var command = spawn(__dirname + '/update_mbtiles.sh', [ earthquake, pg_details ]); //var output = []; command.stdout.on('data', function(chunk) { // logger.info(chunk.toString()); // output.push(chunk.toString()); }); command.stderr.on('data', function(chunk) { // logger.error(chunk.toString()); // output.push(chunk.toString()); }); command.on('close', function(code) { if (code === 0) { logger.info("updating mbtiles successful for " + earthquake); tilelive_reload_and_switch_source(earthquake); res.send("Completed updating!"); } else { logger.error("Error occured while updating " + earthquake); res.status(500); res.send("Error occured while updating " + earthquake); } }); }); function tilelive_reload_and_switch_source(earthquake_unique_id) { tilelive.load('mbtiles:///'+__dirname+'/mbtiles/tipp_out_'+ earthquake_unique_id + '.mbtiles', function(err, source) { if (err) { logger.error(err.message); throw err; } sources.set(earthquake_unique_id, source); logger.info('Updated source! New tiles!'); }); } Thank you.
I solved this by setting a higher timeout value for the proxy: location / { proxy_read_timeout 300s; proxy_connect_timeout 75s; proxy_pass http://localhost:3000; } Documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
NGINX
36,488,688
142
When I restart the nginx service on a command line on an Ubuntu server, the service crashes when an nginx configuration file has errors. On a multi-site server this puts down all the sites, even the ones without configuration errors. To prevent this, I run the nginx configuration test first: nginx -t After the test runs successfully, I can restart the service: /etc/init.d/nginx restart Or only reload the nginx site configs without a restart: nginx -s reload Is there a way to combine those two commands where the restart command is conditional on the configuration test's result? I couldn't find this online and the official documentation on this is rather basic. I don't know my way around Linux that well, so I don't know if what I'm looking for is right in front of me or not possible at all. I'm using nginx v1.1.19.
As of nginx 1.8.0, the correct solution is sudo nginx -t && sudo service nginx reload Note that due to a bug, configtest always returns a zero exit code even if the config file has an error.
NGINX
18,587,638
139
I want to add a custom header for the response received from the server behind nginx. While add_header works for nginx-processed responses, it does nothing when the proxy_pass is used.
add_header works as well with proxy_pass as without. I just today set up a configuration where I've used exactly that directive. I have to admit, though, that I also struggled setting this up, without exactly recalling the reason. Right now I have a working configuration and it contains the following (among others): server { server_name .myserver.com; location / { proxy_pass http://mybackend; add_header X-Upstream $upstream_addr; } } Before nginx 1.7.5 add_header worked only on successful responses, in contrast to the HttpHeadersMoreModule mentioned by Sebastian Goodman in his answer. Since nginx 1.7.5 you can use the keyword always to include custom headers even in error responses. For example: add_header X-Upstream $upstream_addr always; Limitation: You cannot override the Server header value using add_header.
NGINX
14,501,047
138
After running an ASP.NET vNext project on my local machine, I was trying to figure out how I can run it on nginx, as that looks to be a recommended choice. Following jsinh's blog, I installed it using: sudo apt-get update sudo apt-get install nginx -y I was trying to check whether it is working or not by using: ifconfig eth0 | grep inet | awk '{ print $2}' after running sudo service nginx start and sudo service nginx stop However, the output is always the same. How to verify if nginx is running or not?
Looking at the requirement you have, the below command shall help: service nginx status
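If the init script is not available, a few other generic checks work as well (these are general commands, not specific to the Ubuntu setup in the question):

# on systemd-based systems
systemctl status nginx

# look for the master/worker processes directly
ps aux | grep [n]ginx

# or simply ask the server for a response
curl -I http://localhost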
NGINX
35,220,654
132
I am running the command ./startup.sh nginx:start and I am getting this error message: zsh: permission denied: ./startup.sh Why could this be happening?
Be sure to give it the execution permission. cd ~/the/script/folder chmod +x ./startup.sh This will give exec permission to user, group and other, so beware of possible security issues. To restrict permission to a single access class, you can use: chmod u+x ./startup.sh This will grant exec permission only to user For reference
NGINX
53,229,221
131
I'm in the process of reorganizing URL structure. I need to setup redirect rules for specific URLs - I'm using Nginx. Basically Something like this: http://example.com/issue1 --> http://example.com/shop/issues/custom_issue_name1 http://example.com/issue2 --> http://example.com/shop/issues/custom_issue_name2 http://example.com/issue3 --> http://example.com/shop/issues/custom_issue_name3 Thanks!
location ~ /issue([0-9]+) { return 301 http://example.com/shop/issues/custom_issue_name$1; }
NGINX
18,037,716
126
I've recently decided to switch from Apache2 to Nginx. I installed Nginx on my CentOS server and setup a basic configuration. When I tried to load my site in browser (FF/Chrome) I noticed that css file is not loaded. I checked the error console and saw this message: Error: The stylesheet http://example.com/style.css was not loaded because its MIME type, "text/html", is not "text/css". I checked Nginx configuration and everything seems to be fine: http { include /etc/nginx/mime.types; .......... } The mime type for css files is correctly set in /etc/nginx/mime.types. text/css css; Everything seems to be well configured but my css files are still not loaded. I have no explanation. Another thing worth mentioning. Initially I installed Nginx using epel repositories and i got an old version: 0.8... It appeared to me that my problem was a bug in that version so I uninstalled 0.8 version, added nginx repository to yum and then installed latest version: 1.0.14. I thought the new version will solve my problem, but unfortunately it didn't so I am running out of ideas. I appreciate any help. Configuration files: /etc/nginx/nginx.conf user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } /etc/nginx/conf.d/default.conf server { listen 80; server_name localhost; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.html index.htm index.php; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; include fastcgi_params; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } /etc/nginx/mime.types types { text/html html htm shtml; text/css css; text/xml xml; image/gif gif; image/jpeg jpeg jpg; application/x-javascript js; application/atom+xml atom; application/rss+xml rss; .......................................... other types here .......................................... }
Putting the include /etc/nginx/mime.types; under location / { instead of under http { solved the issue for me.
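In terms of the configuration shown in the question, the workaround amounts to something like this sketch (only the relevant lines are shown):

server {
    listen 80;
    server_name localhost;

    location / {
        include /etc/nginx/mime.types;    # moved here from the http block
        root  /usr/share/nginx/html;
        index index.html index.htm index.php;
    }
}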
NGINX
10,075,304
125
I'm used to using Apache with mod_proxy_html, and am trying to achieve something similar with NGINX. The specific use case is that I have an admin UI running in Tomcat on port 8080 on a server at the root context: http://localhost:8080/ I need to surface this on port 80, but I have other contexts on the NGINX server running on this host, so want to try and access this at: http://localhost:80/admin/ I was hoping that the following super simple server block would do it, but it doesn't quite: server { listen 80; server_name screenly.local.akana.com; location /admin/ { proxy_pass http://localhost:8080/; } } The problem is that the returned content (html) contains URLs to scripts and style info that is all accessed at the root context, so I need to get these URLs rewritten to start with /admin/ instead of /. How do I do this in NGINX?
We should first read the documentation on proxy_pass carefully and fully. The URI passed to the upstream server is determined based on whether the "proxy_pass" directive is used with a URI or not. A trailing slash in the proxy_pass directive means that a URI is present and equal to /. Absence of a trailing slash means that the URI is absent. Proxy_pass with URI: location /some_dir/ { proxy_pass http://some_server/; } With the above, there's the following proxy: http:// your_server/some_dir/ some_subdir/some_file -> http:// some_server/ some_subdir/some_file Basically, /some_dir/ gets replaced by / to change the request path from /some_dir/some_subdir/some_file to /some_subdir/some_file. Proxy_pass without URI: location /some_dir/ { proxy_pass http://some_server; } With the second (no trailing slash): the proxy goes like this: http:// your_server /some_dir/some_subdir/some_file -> http:// some_server /some_dir/some_subdir/some_file Basically, the full original request path gets passed on without changes. So, in your case, it seems you should just drop the trailing slash to get what you want. Caveat Note that automatic rewrite only works if you don't use variables in proxy_pass. If you use variables, you should do the rewrite yourself: location /some_dir/ { rewrite /some_dir/(.*) /$1 break; proxy_pass $upstream_server; } There are other cases where rewrite wouldn't work, which is why reading the documentation is a must. Edit Reading your question again, it seems I may have missed that you just want to edit the html output. For that, you can use the sub_filter directive. Something like ... location /admin/ { proxy_pass http://localhost:8080/; sub_filter "http://your_server/" "http://your_server/admin/"; sub_filter_once off; } Basically, you give the string you want to replace and the replacement string.
NGINX
32,542,282
124
I want to redirect all the HTTP request to https request on ELB. I have two EC2 instances. I am using nginx for the server. I have tried a rewriting the nginx conf files without any success. I would love some advice on it.
AWS Application Load Balancers now support native HTTP to HTTPS redirect. To enable this in the console, do the the following: Go to your Load Balancer in EC2 and tab "Listeners" Select "View/edit rules" on your HTTP listener Delete all rules except for the default one (bottom) Edit default rule: choose "Redirect to" as an action, leave everything as default and enter "443" as a port. The same can be achieved by using the CLI as described here. It is also possible to do this in Cloudformation, where you need to set up a Listener object like this: HttpListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: LoadBalancerArn: !Ref LoadBalancer Port: 80 Protocol: HTTP DefaultActions: - Type: redirect RedirectConfig: Protocol: HTTPS StatusCode: HTTP_301 Port: 443 If you still use Classic Load Balancers, go with one of the NGINX configs described by the others.
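If you are on a Classic Load Balancer in front of nginx (the setup described in the question), the usual approach is to redirect based on the X-Forwarded-Proto header that the ELB adds — roughly like this sketch:

server {
    listen 80;
    server_name example.com;                     # placeholder

    # The ELB terminates HTTPS and forwards everything to port 80,
    # so use the header it sets to detect the original scheme.
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    # ... rest of the configuration
}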
NGINX
24,603,620
124
I've seen this before: when I type a URL like http://test.com/test/, instead of giving me an html page, it gives me a 'file browser'-like interface to browse all the files in the given location. I think it may be an nginx module that could be enabled in the location context. The nginx.conf file: worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name 122.97.248.252; location /test { root /home/yozloy/html/; autoindex on; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } Update: the error.log 2012/05/19 20:48:33 [error] 20357#0: *72 open() "/home/yozloy/html/test" failed (2: No such file or directory), client: 125.43.236.33, server: 122.97.248.252, request: "GET /test HTTP/1.1", host: "unicom2.markson.hk I must misunderstand what location /test means. I thought it meant that when I type http://example.com/test, it would access the root directory, which is /home/yozloy/html
You should try ngx_http_autoindex_module. Set autoindex option to on. It is off by default. Your example configuration should be ok location /{ root /home/yozloy/html/; index index.html; autoindex on; } Without autoindex option you should be getting Error 403 for requests that end with / on directories that do not have an index.html file. With this option you should be getting a simple listing: <html> <head><title>Index of /</title></head> <body bgcolor="white"> <h1>Index of /test/</h1><hr><pre><a href="../">../</a> <a href="test.txt">test.txt</a> 19-May-2012 10:43 0 </pre><hr></body> </html> Edit: Updated the listing to delete any references to test
NGINX
10,663,248
124
On the server side I'm using Sinatra with a stream block. get '/stream', :provides => 'text/event-stream' do stream :keep_open do |out| connections << out out.callback { connections.delete(out) } end end On the client side: var es = new EventSource('/stream'); es.onmessage = function(e) { $('#chat').append(e.data + "\n") }; When I use the app directly, via http://localhost:9292/, everything works perfectly. The connection is persistent and all messages are passed to all clients. However, when it goes through Nginx, http://chat.dev, the connections are dropped and a reconnection fires every second or so. The Nginx setup looks ok to me: upstream chat_dev_upstream { server 127.0.0.1:9292; } server { listen 80; server_name chat.dev; location / { proxy_pass http://chat_dev_upstream; proxy_buffering off; proxy_cache off; proxy_set_header Host $host; } } I tried keepalive 1024 in the upstream section as well as proxy_set_header Connection keep-alive; in the location. Nothing helps :( No persistent connections and messages are not passed to any clients.
Your Nginx config is correct, you are just missing a few lines. Here is a "magic trio" that makes EventSource work through Nginx: proxy_set_header Connection ''; proxy_http_version 1.1; chunked_transfer_encoding off; Place them into the location section and it should work. You may also need to add proxy_buffering off; proxy_cache off; That's not an official way of doing it. I ended up with this by "trial and error" + "googling" :)
NGINX
13,672,743
123
I'm running nginx/ruby-on-rails and I have a simple multipart form to upload files. Everything works fine until I decide to restrict the maximum size of files I want uploaded. To do that, I set the nginx client_max_body_size to 1m (1MB) and expect a HTTP 413 (Request Entity Too Large) status in response when that rule breaks. The problem is that when I upload a 1.2 MB file, instead of displaying the HTTP 413 error page, the browser hangs a bit and then dies with a "Connection was reset while the page was loading" message. I've tried just about every option there is that nginx offers, nothing seems to work. Does anyone have any ideas about this? Here's my nginx.conf: worker_processes 1; timer_resolution 1000ms; events { worker_connections 1024; } http { passenger_root /the_passenger_root; passenger_ruby /the_ruby; include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 80; server_name www.x.com; client_max_body_size 1M; passenger_use_global_queue on; root /the_root; passenger_enabled on; error_page 404 /404.html; error_page 413 /413.html; } } Thanks. **Edit** Environment/UA: Windows XP/Firefox 3.6.13
nginx "fails fast" when the client informs it that it's going to send a body larger than the client_max_body_size by sending a 413 response and closing the connection. Most clients don't read responses until the entire request body is sent. Because nginx closes the connection, the client sends data to the closed socket, causing a TCP RST. If your HTTP client supports it, the best way to handle this is to send an Expect: 100-Continue header. Nginx supports this correctly as of 1.2.7, and will reply with a 413 Request Entity Too Large response rather than 100 Continue if Content-Length exceeds the maximum body size.
NGINX
4,947,107
122
I'm an nginx noob trying out this tutorial on nginx 1.1.19 on Ubuntu 12.04. I have this nginx config file. When I run this command the test fails: $ sudo service nginx restart Restarting nginx: nginx: [crit] pread() "/etc/nginx/sites-enabled/csv" failed (21: Is a directory) nginx: configuration file /etc/nginx/nginx.conf test failed How do I know why the nginx.conf test failed?
sudo nginx -t should test all config files and report the locations of any errors and warnings
NGINX
22,306,006
120
We have a server that is serving one html file. Right now the server has 2 CPUs and 2GB of ram. From blitz.io, we are getting about 12k connections per minute and anywhere from 200 timeouts in that 60 seconds with 250 concurrent connections each second. worker_processes 2; events { worker_connections 1024; } If I increase the timeout, the response time starts creeping up beyond a second. What else can I do to squeeze more juice out of this?
Config file: worker_processes 4; # 2 * Number of CPUs events { worker_connections 19000; # It's the key to high performance - have a lot of connections available } worker_rlimit_nofile 20000; # Each connection needs a filehandle (or 2 if you are proxying) # Total amount of users you can serve = worker_processes * worker_connections more info: Optimizing nginx for high traffic loads
NGINX
7,325,211
119
I am using nginx as a reverse proxy and trying to read a custom header from the response of an upstream server (Apache) without success. The Apache response is the following: HTTP/1.0 200 OK Date: Fri, 14 Sep 2012 20:18:29 GMT Server: Apache/2.2.17 (Ubuntu) X-Powered-By: PHP/5.3.5-1ubuntu7.10 Connection: close Content-Type: application/json; charset=UTF-8 My-custom-header: 1 I want to read the value from My-custom-header and use it in a if clause: location / { // ... // get My-custom-header value here // ... } Is this possible?
It's not only possible, it's easy: in nginx the response header values are available through a variable (one per header). See http://wiki.nginx.org/HttpCoreModule#.24sent_http_HEADER for the details on those variables. In your examle the variable would be $sent_http_My_custom_header.
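A small sketch of how such a variable can be used — note that an if block is evaluated before the upstream response exists, so response-header variables are normally consumed in directives applied at response time, such as add_header or log_format (the upstream address here is a placeholder):

location / {
    proxy_pass http://127.0.0.1:8080;      # placeholder upstream

    # $upstream_http_my_custom_header holds the raw value received from the
    # proxied server; here it is echoed back to the client under a new name
    add_header X-Seen-Custom-Header $upstream_http_my_custom_header;
}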
NGINX
12,431,496
115
I was trying to use the Thin app server and had one issue. When nginx proxies the request to Thin (or Unicorn) using proxy_pass http://my_app_upstream; the application receives the modified URL sent by nginx (http://my_app_upstream). What I want is to pass the original URL and the original request from the client with no modification, as the app relies heavily on it. The nginx docs say: If it is necessary to transmit URI in the unprocessed form then directive proxy_pass should be used without URI part. But I don't understand how exactly to configure that, as the related sample actually uses a URI: location /some/path/ { proxy_pass http://127.0.0.1; } So could you please help me figure out how to preserve the original request URL from the client?
I think the proxy_set_header directive could help: location / { proxy_pass http://my_app_upstream; proxy_set_header Host $host; # ... }
NGINX
5,834,025
114
We have an Angular 6 application. It’s served on Nginx, and SSL is on. When we deploy new code, most new features work fine, but some changes do not show up. For example, if the front-end developers update the service connection and deploy it, users have to open an incognito window or clear the cache to see the new feature. What type of changes are not updated automatically? Why are they different from others? What’s the common solution to avoid the issue?
The problem is When a static file gets cached it can be stored for very long periods of time before it ends up expiring. This can be an annoyance in the event that you make an update to a site, however, since the cached version of the file is stored in your visitors’ browsers, they may be unable to see the changes made. Cache-busting solves the browser caching issue by using a unique file version identifier to tell the browser that a new version of the file is available. Therefore the browser doesn’t retrieve the old file from cache but rather makes a request to the origin server for the new file. Angular cli resolves this by providing an --output-hashing flag for the build command. Check the official doc : https://angular.io/cli/build Example ( older versions) ng build --prod --aot --output-hashing=all Below are the options you can pass in --output-hashing none: no hashing performed media: only add hashes to files processed via [url|file]-loaders bundles: only add hashes to the output bundles all: add hashes to both media and bundles Updates For the version of angular ( for example Angular 8,9,10) the command is : ng build --prod --aot --outputHashing=all For the latest version of angular ( from angular 11 to angular 14) the command is reverted back to older format: ng build --aot --output-hashing=all
NGINX
55,402,751
113
Is it possible to find out which conf file nginx is using only from a running nginx process, i.e. to get the conf file path? Sometimes ps aux reveals it, sometimes it doesn't. It might be just something like nginx: master process /usr/sbin/nginx (same as /proc/PID/cmdline) So is nginx -V the only solution? From this question, is it possible to dump the conf data structure from the nginx process directly? Or at least dump the conf file path?
As of Nginx 1.9.2 you can dump the Nginx config with the -T flag: -T — same as -t, but additionally dump configuration files to standard output (1.9.2). Source: http://nginx.org/en/docs/switches.html This is not the same as dumping for a specific process. If your Nginx is using a different config file, check the output for ps aux and use whatever it gives as the binary, e.g. if it gives something like nginx: master process /usr/sbin/nginx -c /some/other/config you need to run /usr/sbin/nginx -c /some/other/config -T If you are not on 1.9.2 yet, you can dump the config with gdb: https://serverfault.com/questions/361421/dump-nginx-config-from-running-process
NGINX
12,832,033
113
Just want to help somebody out. Yes, you just want to serve a static file using nginx, and you got everything right in nginx.conf: location /static { autoindex on; #root /root/downloads/boxes/; alias /root/downloads/boxes/; } But, in the end, you failed. You got "403 Forbidden" from the browser... ----------------------------------------The Answer Below:---------------------------------------- The solution is very simple: Way 1: Run nginx as the user who owns '/root/downloads/boxes/' In nginx.conf: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; Yes, in the first line "#user nobody;", just delete the "#" and change "nobody" to your own username in Linux/OS X, i.e. change it to "root" for a test. Then restart nginx. Attention: you'd better not run nginx as root! It is only for testing here; it is dangerous if an attacker gets in. For more reference, see nginx (engine X) – What a Pain in the BUM! [13: Permission denied] Way 2: Change the owner of '/root/downloads/boxes/' to 'www-data' or 'nobody' In a terminal: ps aux | grep nginx Get the username nginx is running as. It should be 'www-data' or 'nobody', depending on the version of nginx. Then run in the terminal (using 'www-data' for example): chown -R www-data:www-data /root/downloads/boxes/ ------------------------------One More Important Thing:------------------------------ The parent directories "/", "/root" and "/root/downloads" should give the execute (x) permission to 'www-data' or 'nobody', i.e. ls -al /root chmod o+x /root chmod o+x /root/downloads For more reference, see Resolving "403 Forbidden" error and Nginx 403 forbidden for all files
You should give nginx permissions to read the file. That means you should give the user that runs the nginx process permissions to read the file. This user that runs the nginx process is configurable with the user directive in the nginx config, usually located somewhere on the top of nginx.conf: user www-data http://wiki.nginx.org/CoreModule#user The second argument you give to user is the group, but if you don't specify it, it uses the same one as the user, so in my example the user and the group both are www-data. Now the files you want to serve with nginx should have the correct permissions. Nginx should have permissions to read the files. You can give the group www-data read permissions to a file like this: chown :www-data my-file.html http://linux.die.net/man/1/chown With chown you can change the user and group owner of a file. In this command I only change the group, if you would change the user too you would specify the username before the colon, like chown www-data:www-data my-file.html. But setting the group permissions correct should be enough for nginx to be able to read the file.
NGINX
16,808,813
112
I had a problem with a custom HTTP SESSION_ID header not being transferred by the nginx proxy. I was told that underscores are prohibited according to the HTTP RFC. Searching, I found that most servers like Apache or nginx define them as illegal in RFC2616 section 4.2, which says: follow the same generic format as that given in Section 3.1 of RFC 822 [9] RFC822 says: The field-name must be composed of printable ASCII characters (i.e., characters that have values between 33. and 126., decimal, except colon) Underscore is decimal character 95 in the ASCII table, in the 33-126 range. What am I missing?
They are not forbidden, it's CGI legacy. See "Missing (disappearing) HTTP Headers". If you do not explicitly set underscores_in_headers on;, nginx will silently drop HTTP headers with underscores (which are perfectly valid according to the HTTP standard). This is done in order to prevent ambiguities when mapping headers to CGI variables, as both dashes and underscores are mapped to underscores during that process.
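A minimal sketch of the directive in context (it is valid in the http and server blocks; the server name and upstream address are placeholders):

server {
    listen 80;
    server_name example.com;              # placeholder

    # accept headers such as SESSION_ID instead of silently dropping them
    underscores_in_headers on;

    location / {
        proxy_pass http://127.0.0.1:8080; # placeholder upstream
    }
}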
NGINX
22,856,136
111
I am writing an express app that sits behind an nginx server. I was reading through express's documentation and it mentioned the 'trust proxy' setting. All it says is trust proxy Enables reverse proxy support, disabled by default I read the little article here that explains Secure Sessions in Node with nginx. http://blog.nikmartin.com/2013/07/secure-sessions-in-nodejs-with-nginx.html So I am curious. Does setting 'trust proxy' to true only matter when using HTTPS? Currently my app is just HTTP between the client and nginx. If I set it to true now, are there any side-effects/repercussions I need to be aware of? Is there any point to setting it true now?
This is explained in detail in the express behind the proxies guide By enabling the "trust proxy" setting via app.enable('trust proxy'), Express will have knowledge that it's sitting behind a proxy and that the X-Forwarded-* header fields may be trusted, which otherwise may be easily spoofed. Enabling this setting has several subtle effects. The first of which is that X-Forwarded-Proto may be set by the reverse proxy to tell the app that it is https or simply http. This value is reflected by req.protocol. The second change this makes is the req.ip and req.ips values will be populated with X-Forwarded-For's list of addresses.
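On the proxy side this only pays off if nginx actually sets the X-Forwarded-* headers, so a matching location block looks roughly like this sketch (the port is a placeholder for wherever the Express app listens):

location / {
    proxy_pass http://127.0.0.1:3000;                      # placeholder Express port
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}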
NGINX
23,413,401
110
I have a problem with my MySQL error log which currently mostly consists of "mbind: Operation not permitted" lines (see below). Why does it happen and how do I fix it? It's the "mostly" part that bothers me. As you can see below, not all lines are "mbind: Operation not permitted". I suspect that MySQL query errors should be instead of that line, but for some reason they can't be written into the file. MySQL itself is a Docker container where log files are volumed via: volumes: - ./mysql/log:/var/log/mysql What's interesting is that: "docker logs mysql_container" shows nothing... slow.log, which resides in the same volume folder, is completely OK and has real slow log lines in it, no "mbind: Operation not permitted" whatsoever! same as slow.log goes to general.log — no problem here, either Any ideas? Thank you in advance. 2019-04-07T12:56:22.478504Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release. 2019-04-07T12:56:22.478533Z 0 [Warning] [MY-011068] [Server] The syntax 'expire-logs-days' is deprecated and will be removed in a future release. Please use binlog_expire_logs_seconds instead. 2019-04-07T12:56:22.478605Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.15) starting as process 1 2019-04-07T12:56:22.480115Z 0 [Warning] [MY-013242] [Server] --character-set-server: 'utf8' is currently an alias for the character set UTF8MB3, but will be an alias for UTF8MB4 in a future release. Please consider using UTF8MB4 in order to be unambiguous. 2019-04-07T12:56:22.480122Z 0 [Warning] [MY-013244] [Server] --collation-server: 'utf8_general_ci' is a collation of the deprecated character set UTF8MB3. Please consider using UTF8MB4 with an appropriate collation instead. mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted mbind: Operation not permitted [same line goes forever] P.S. MySQL starts and runs well, no problem with that. It's just this error.log that keeps bothering me and prevents me from seeing actual errors.
Add the capability CAP_SYS_NICE to your container until MySQL server can handle the error itself "silently". service: mysql: image: mysql:8.0.15 # ... cap_add: - SYS_NICE # CAP_SYS_NICE If you don't have docker-compose, then you can define CAP_SYS_NICE via docker run --cap-add=sys_nice -d mysql References: Docker Seccomp security profiles: https://docs.docker.com/engine/security/seccomp/ Docker resource constraints: https://docs.docker.com/config/containers/resource_constraints/
NGINX
55,559,386
108
I am trying to get rid of deprecated Docker links in my configuration. What's left is getting rid of those Bad Gateway nginx reverse proxy errors when I recreate a container. Note: I am using Docker networks in bridge mode. (docker network create nettest) I am using the following configuration snippet inside nginx: location / { resolver 127.0.0.1 valid=30s; set $backend "http://confluence:8090"; proxy_pass $backend; I started a container with hostname confluence on my Docker network with name nettest. Then I started the nginx container on network nettest. I can ping confluence from inside the nginx container confluence is listed inside the nginx container's /etc/hosts file nginx log says send() failed (111: Connection refused) while resolving, resolver: 127.0.0.1:53 I tried the Docker network default DNS resolver 127.0.0.11 from /etc/resolv.conf nginx log says confluence could not be resolved (3: Host not found) Does anybody know how to configure the nginx resolver with Docker networks, or an alternative way to force Nginx to correctly resolve the Docker network hostname?
First off, you should be using the Docker embedded DNS server at 127.0.0.11. Your problem could be caused by 1 of the following: nginx is trying to use IPv6 (AAAA record) for the DNS queries. See https://stackoverflow.com/a/35516395/1529493 for the solution. Basically something like: http { resolver 127.0.0.11 ipv6=off; } This is probably no longer a problem with Docker 1.11: Fix to not forward docker domain IPv6 queries to external servers (#21396) Take care that you don't accidentally override the resolver configuration directive. In my case I had in the server block resolver 8.8.8.8 8.8.4.4; from Mozilla's SSL Configuration Generator, which was overriding the resolver 127.0.0.11; in the http block. That had me scratching my head for a long time...
NGINX
35,744,650
107