question | answer | tag | question_id | score
---|---|---|---|---
I'm trying to get an ingress controller working in Minikube and am following the steps in the K8s documentation here, but am seeing a different result in that the IP address for the ingress controller is different than that for Minikube (the example seems to indicate they should be the same):
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
example-ingress hello-world.info 10.0.2.15 80 12m
$ minikube ip
192.168.99.101
When I try to connect to the Minikube IP address (using the address directly vs. adding it to my local hosts file), I'm getting a "Not found" response from NGINX:
$ curl http://`minikube ip`/
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
When I try to connect to the IP address associated with the ingress controller, it just hangs.
Should I expect the addresses to be the same as the K8s doc indicates?
Some additional information:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 2d23h v1.16.0 10.0.2.15 <none> Buildroot 2018.05.3 4.15.0 docker://18.9.9
$ kubectl get ingresses example-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"hello-world.info","http":{"paths":[{"backend":{"serviceName":"web","servicePort":8080},"path":"/"}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  creationTimestamp: "2019-10-28T15:36:57Z"
  generation: 1
  name: example-ingress
  namespace: default
  resourceVersion: "25609"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress
  uid: 5e96c378-fbb1-4e8f-9738-3693cbce7d9b
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 8080
        path: /
status:
  loadBalancer:
    ingress:
    - ip: 10.0.2.15
| Here’s what worked for me:
minikube start
minikube addons enable ingress
minikube addons enable ingress-dns
Wait until the ingress-nginx-controller-XXXX pod is up and running, using kubectl get pods -n ingress-nginx
Create an ingress using the K8s example yaml file, and update the service section to point to the NodePort Service that you already created (a sketch is shown after these steps)
Append
127.0.0.1 hello-world.info
to your /etc/hosts file on macOS (NOTE: do NOT use the Minikube IP)
Run minikube tunnel (keep the window open; after you enter the password there will be no more messages, and the cursor just blinks)
Hit hello-world.info (or whatever host you configured in the yaml file) in a browser and it should work
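For reference, a sketch of such an ingress against the newer networking.k8s.io/v1 API (the host, the service name web, and port 8080 come from the question's manifest; ingressClassName: nginx is an assumption for the minikube ingress addon):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080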
| Kubernetes | 58,561,682 | 33 |
I'm trying to get the total and free disk space on my Kubernetes VM so I can display the percentage of used space. I tried various metrics that include "filesystem" in the name, but none of them displayed the correct total disk size. Which one should be used?
Here is a list of metrics I tried
node_filesystem_size_bytes
node_filesystem_avail_bytes
node:node_filesystem_usage:
node:node_filesystem_avail:
node_filesystem_files
node_filesystem_files_free
node_filesystem_free_bytes
node_filesystem_readonly
| According to my Grafana dashboard, the following expression works nicely for alerting on disk space:
100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})
The formula gives the percentage of used space on the selected disk (100 minus the available percentage). Make sure you include the mountpoint and fstype within the metrics.
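As a sketch of wiring that expression into an alert (the rule name, the 90% threshold, and the 10m duration are arbitrary choices, not part of the original answer):
groups:
- name: node-disk
  rules:
  - alert: HostDiskAlmostFull
    expr: 100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"}) > 90
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Disk usage on {{ $labels.instance }} is above 90%"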
| Kubernetes | 57,357,532 | 33 |
I had a single-node Kubernetes cluster on my Windows 10 machine. Due to some errors I had to reinstall Docker Desktop, and since then the Kubernetes installation has failed while Docker installed successfully. All attempts to resolve it (e.g. deleting the config file in the .kube directory and a complete reinstallation) have failed. The installed Docker version is Docker version 18.09.2, build 6247962. All online search efforts have not yielded a possible solution. I would appreciate pointers to a solution or workaround.
| I got stuck on two kinds of errors:
system pods running, found labels but still waiting for labels...
xxxx: EOF
I finally solved it by following the advice in the following project:
https://github.com/AliyunContainerService/k8s-for-docker-desktop/
Do as it tells you. If that doesn't work, remove the ~/.kube and ~/Library/Group\ Containers/group.com.docker/pki directories, then restart Docker Desktop and wait about 5 minutes.
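On macOS that boils down to something like the following (the paths are exactly the ones mentioned above; quit Docker Desktop before running this):
rm -rf ~/.kube
rm -rf ~/Library/Group\ Containers/group.com.docker/pki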
The Kubernetes status eventually shows as running.
| Kubernetes | 55,361,963 | 33 |
If not specified, pods are run under a default service account.
How can I check what the default service account is authorized to do?
Do we need it to be mounted into every pod?
If not, how can we disable this behavior at the namespace or cluster level?
What other use cases should the default service account handle?
Can we use it as a service account to create and manage Kubernetes deployments in a namespace? For example, we will not use real user accounts to create things in the cluster because users come and go.
Environment: Kubernetes 1.12, with RBAC
|
A default service account is automatically created for each namespace.
$ kubectl get serviceaccount
NAME SECRETS AGE
default 1 1d
Service accounts can be added when required. Each pod is associated with exactly one service account but multiple pods can use the same service account.
A pod can only use one service account from the same namespace.
Service accounts are assigned to a pod by specifying the account's name in the pod manifest. If you don't assign one explicitly, the pod will use the default service account.
The default permissions for a service account don't allow it to
list or modify any resources. The default service account isn't allowed to view cluster state let alone modify it in any way.
By default, the default service account in a namespace has no permissions other than those of an unauthenticated user.
Therefore pods by default can't even view cluster state. It's up to you to grant them appropriate permissions to do that.
$ kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services is forbidden: User \"system:serviceaccount:foo:default\" cannot list resource \"services\" in API group \"\" in the namespace \"foo\"",
  "reason": "Forbidden",
  "details": {
    "kind": "services"
  },
  "code": 403
}
but when given a proper Role and RoleBinding like the ones below
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: foo-role
  namespace: foo
rules:
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: test-foo
  namespace: foo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: foo-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: foo
now I am able to list services:
$ kubectl exec -it test -n foo sh
/ # curl localhost:8001/api/v1/namespaces/foo/services
{
  "kind": "ServiceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/foo/services",
    "resourceVersion": "457324"
  },
  "items": []
}
Giving all your service accounts the cluster-admin ClusterRole is a bad idea. It is best to give everyone only the permissions they need to do their job and not a single permission more.
It's a good idea to create a specific service account for each pod and then associate it with a tailor-made Role or ClusterRole through a RoleBinding.
If one of your pods only needs to read pods while another also needs to modify them, then create two different service accounts and make those pods use them by specifying the serviceAccountName property in the pod spec.
You can refer to the link below for an in-depth explanation:
Service account example with roles
You can check kubectl explain serviceaccount.automountServiceAccountToken and edit the service account
kubectl edit serviceaccount default -o yaml
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2018-10-14T08:26:37Z
  name: default
  namespace: default
  resourceVersion: "459688"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: de71e624-cf8a-11e8-abce-0642c77524e8
secrets:
- name: default-token-q66j4
Once this change is done, whichever pod you spawn doesn't have a service account token mounted, as can be seen below.
kubectl exec tp -it bash
root@tp:/# cd /var/run/secrets/kubernetes.io/serviceaccount
bash: cd: /var/run/secrets/kubernetes.io/serviceaccount: No such file or directory
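As a sketch, the same opt-out can also be made per pod instead of per service account, via the automountServiceAccountToken field in the pod spec (the pod and container names here are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]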
| Kubernetes | 52,995,962 | 33 |
What is the difference between objects and resources in the Kubernetes world?
I couldn't find it in https://kubernetes.io/docs/concepts/. I wonder whether they make a distinction at all; it seems objects are treated as a higher-level concept than resources.
| A representation of a specific group+version+kind is an object. For example, a v1 Pod, or an apps/v1 Deployment. Those definitions can exist in manifest files, or be obtained from the apiserver.
A specific URL used to obtain the object is a resource. For example, a list of v1 Pod objects can be obtained from the /api/v1/pods resource. A specific v1 Pod object can be obtained from the /api/v1/namespaces/<namespace-name>/pods/<pod-name> resource.
API discovery documents (like the one published at /api/v1) can be used to determine the resources that correspond to each object type.
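A quick way to see this object-to-resource mapping from the command line is kubectl api-resources, which reads those discovery documents (output trimmed; the exact columns vary a bit between kubectl versions):
$ kubectl api-resources --api-group=""
NAME       SHORTNAMES   APIVERSION   NAMESPACED   KIND
pods       po           v1           true         Pod
services   svc          v1           true         Service
...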
Often, the same object can be retrieved from and submitted to multiple resources. For example, v1 Pod objects can be submitted to the following resource URLs:
/api/v1/namespaces/<namespace-name>/pods/<pod-name>
/api/v1/namespaces/<namespace-name>/pods/<pod-name>/status
Distinct resources allow for different server-side behavior and access control. The first URL only allows updating parts of the pod spec and metadata. The second URL only allows updating the pod status, and access is typically only given to kubelets.
Authorization rules are based on the resources for particular requests.
| Kubernetes | 52,309,496 | 33 |
I'm using Docker For Desktop with the built-in Kubernetes cluster. I have installed a Pod that serves resources over HTTP, but I'm not sure how to access it using my browser. I have the following ServiceSpec that correctly routes traffic to the Pod:
spec:
  clusterIP: 10.99.132.220
  externalTrafficPolicy: Cluster
  ports:
  - name: myport
    nodePort: 31534
    port: 8037
    protocol: TCP
    targetPort: 80
  type: LoadBalancer
And I can see it set up when I query it with kubectl:
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myservice LoadBalancer 10.99.132.220 localhost 8037:31534/TCP 1h
How do I reach this service using my browser?
| That service will be available in your browser at http://localhost:8037
Note that the port 8037 corresponds to the port property on the ServiceSpec object.
If you are unable to reach the service at that URL, then it could be one of several things, including but not limited to:
There is another Service in your cluster that has claimed that port. Either delete the other Service, or change the port property to an unclaimed port.
Your Pod is not running and ready. Check kubectl get pods.
| Kubernetes | 50,178,696 | 33 |
From the docs:
Secrets must be created before they are consumed in pods as environment variables unless they are marked as optional. References to Secrets that do not exist will prevent the pod from starting.
How to mark secret as optional?
| What you're looking for is
- name: ENV_NAME
  valueFrom:
    secretKeyRef:
      name: <secrets name>
      key: <secrets key>
      optional: true
You can find type definition here
Edit: similarly for envFrom
envFrom:
- secretRef:
    name: secname
    optional: true
| Kubernetes | 48,208,705 | 33 |
What does DiskPressure really mean, and how can it be avoided in Kubernetes during container creation?
Seemingly, when I create a new container on a node there is a high chance that it crashes the whole node because of the pressure...
| From the documentation you'll find that DiskPressure is raised when:
Available disk space and inodes on either the node's root filesystem or image filesystem have satisfied an eviction threshold
Learning about the conditions of your nodes whenever these issues occur is somewhat important (how much space/inodes are left, ...) and also learning about the related container images is important. This can be done with some basic system monitoring (see the Resource Usage Monitoring).
Once you know about the conditions, you should consider to adjust the --low-diskspace-threshold-mb, --image-gc-high-threshold and --image-gc-low-threshold parameters of you kubelet, so that there's always enough space for normal operation, or consider to provision more space for your nodes, depending on the requirements.
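To see whether a node is currently reporting the condition, you can inspect its conditions (a sketch; output trimmed):
$ kubectl describe node <node-name>
...
Conditions:
  Type           Status   Reason                     Message
  ----           ------   ------                     -------
  DiskPressure   False    KubeletHasNoDiskPressure   kubelet has no disk pressure
...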
| Kubernetes | 42,576,661 | 33 |
I'm trying to deploy a single web application to Minikube on my Mac, and then access it in the browser. I'm trying to use the simplest of setups, but it's not working, I just get a "connection refused" error and I can't figure out why.
This is what I'm trying:
$ minikube start --insecure-registry=docker.example.com:5000
😄 minikube v1.12.3 on Darwin 10.14.6
✨ Using the docker driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing docker container for "minikube" ...
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
$ eval $(minikube -p minikube docker-env)
$ docker build -t web-test .
Sending build context to Docker daemon 16.66MB
Step 1/3 : FROM docker.example.com/library/openjdk:11-jdk-slim
11-jdk-slim: Pulling from library/openjdk
bf5952930446: Pull complete
092c9b8e633f: Pull complete
0b793152b850: Pull complete
7900923f09cb: Pull complete
Digest: sha256:b5d8f95b23481a9d9d7e73c108368de74abb9833c3fae80e6bdfa750663d1b97
Status: Downloaded newer image for docker.example.com/library/openjdk:11-jdk-slim
---> de8b1b4806af
Step 2/3 : COPY target/web-test-0.0.1-SNAPSHOT.jar app.jar
---> 6838e3db240a
Step 3/3 : ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","app.jar"]
---> Running in 550bf762bf2d
Removing intermediate container 550bf762bf2d
---> ce1468d1ff10
Successfully built ce1468d1ff10
Successfully tagged web-test:latest
$ kubectl apply -f web-test-service.yaml
service/web-test unchanged
$ kubectl apply -f web-test-deployment.yaml
deployment.apps/web-test configured
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-test-6bb45ffc54-8mxbc 1/1 Running 0 16m 172.18.0.2 minikube <none> <none>
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16m
web-test NodePort 10.102.19.201 <none> 8080:31317/TCP 16m
$ minikube ip
127.0.0.1
$ curl http://127.0.0.1:31317
curl: (7) Failed to connect to 127.0.0.1 port 31317: Connection refused
$ kubectl logs web-test-6bb45ffc54-8mxbc
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.3.RELEASE)
2020-08-26 14:45:32.692 INFO 1 --- [ main] com.example.web.WebTestApplication : Starting WebTestApplication v0.0.1-SNAPSHOT on web-test-6bb45ffc54-8mxbc with PID 1 (/app.jar started by root in /)
2020-08-26 14:45:32.695 INFO 1 --- [ main] com.example.web.WebTestApplication : No active profile set, falling back to default profiles: default
2020-08-26 14:45:34.041 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2020-08-26 14:45:34.053 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2020-08-26 14:45:34.053 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.37]
2020-08-26 14:45:34.135 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2020-08-26 14:45:34.135 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1355 ms
2020-08-26 14:45:34.587 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2020-08-26 14:45:34.797 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2020-08-26 14:45:34.810 INFO 1 --- [ main] com.example.web.WebTestApplication : Started WebTestApplication in 2.808 seconds (JVM running for 3.426)
$ minikube ssh
docker@minikube:~$ curl 10.102.19.201:8080
Up and Running
docker@minikube:~$
As you can see, the web app is up and running, and I can access it from inside the cluster by doing a minikube ssh, but from outside the cluster, it won't connect. These are my service and deployment manifests:
web-test-service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web-test
  name: web-test
spec:
  type: NodePort
  ports:
  - nodePort: 31317
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-test
web-test-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-test
  name: web-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-test
  strategy: {}
  template:
    metadata:
      labels:
        app: web-test
    spec:
      containers:
      - image: web-test
        imagePullPolicy: Never
        name: web-test
        ports:
        - containerPort: 8080
      restartPolicy: Always
status: {}
Anyone have any idea what I'm doing wrong? Or perhaps how I could diagnose the issue further? I have also tried deploying an ingress, but that doesn't work either.
| You mostly face this issue when you use minikube ip, which here returns 127.0.0.1. It should work if you use the internal IP from kubectl get node -o wide instead of 127.0.0.1.
A much easier approach, from the official reference docs: get the URL using minikube service web-test --url and use it in a browser, or run minikube service web-test to open the URL in the browser directly.
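For example (the port in the output is illustrative; with the docker driver the command keeps a tunnel open and prints a reachable URL):
$ minikube service web-test --url
http://127.0.0.1:55123
$ curl http://127.0.0.1:55123
Up and Running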
Your deployment yamls and everything else look good, so hopefully you will not have any issue when deploying to a remote cluster.
| Kubernetes | 63,600,378 | 32 |
I have both Helm 2 and Helm 3 installed on my local machine. I have created a new chart using Helm 2:
sanket@Admins-MacBook-Pro poc % helm create new
Creating new
This created a chart 'new' using Helm version 2. Now I have deployed the chart using Helm version 3:
sanket@Admins-MacBook-Pro poc % helm3 install new new --namespace test
NAME: new
LAST DEPLOYED: Thu Apr 23 17:56:03 2020
NAMESPACE: test
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace test -l "app.kubernetes.io/name=new,app.kubernetes.io/instance=new" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
Now when I try to delete the 'new' release, it shows:
sanket@Admins-MacBook-Pro poc % helm3 delete new
Error: uninstall: Release not loaded: new: release: not found
Any idea how to resolve this issue?
| By default, helm3 only shows releases of default namespace.
Do the following to get your release and delete it.
# Get all releases
helm ls --all-namespaces
# OR
helm ls -A
# Delete release
helm uninstall release_name -n release_namespace
| Kubernetes | 61,387,463 | 32 |
I have a problem with a helm deployment. It happened after I added a new environment variable to the deployment.
When I execute: helm upgrade [RELEASE] [CHART]
I get the following error:
Error: The order in patch list:
[
map[name:APP_ENV value:prod]
map[name:MAILER_URL value:...]
map[name:APP_VERSION value:v0-0-3]
map[name:APP_COMMIT_SHA value:...]
]
doesn't match $setElementOrder list:
[
map[name:APP_ENV]
map[name:COMPOSER_HOME]
map[name:PHP_XDEBUG_ENABLED]
map[name:DATABASE_DRIVER]
map[name:DATABASE_HOST]
map[name:DATABASE_NAME]
map[name:DATABASE_USER]
map[name:SECRET]
map[name:INDEX_HOSTS]
map[name:MAILER_FROM_ADDRESS]
map[name:MAILER_FROM_NAME]
map[name:UPLOAD_DIR]
map[name:ARCHIVE_DIR]
map[name:CATALOG_STORAGE_DIR]
map[name:ASSET_STORAGE_DIR]
map[name:TMP_STORAGE_DIR]
map[name:UPLOAD_TMP_DIR]
map[name:APP_VERSION]
map[name:APP_COMMIT_SHA]
map[name:APP_CRON]
map[name:DATABASE_PASSWORD]
map[name:MAILER_URL]
...
]
However, if I execute the same command with the flag --dry-run, I do not get any error ( helm upgrade [RELEASE] [CHART] --dry-run)
I don't know the reason of this problem or how to solve it
| I've found that the reason for this problem was that I had some environment variables duplicated. In my deployment I had:
...
spec:
  template:
    spec:
      containers:
      - env:
        - name: ENV_VAR_NAME
          value: "test"
        - name: ENV_VAR_NAME
          value: "test"
...
After removing the duplicated variable:
...
spec:
  template:
    spec:
      containers:
      - env:
        - name: ENV_VAR_NAME
          value: "test"
...
After that, helm upgrade [RELEASE] [CHART] worked fine.
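One way to spot such duplicates before upgrading is to render the chart and look for repeated names (a sketch; the release and chart names are placeholders, and ENV_VAR_NAME stands for whichever variable you suspect):
$ helm template myrelease ./mychart | grep -n 'name: ENV_VAR_NAME'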
| Kubernetes | 60,727,150 | 32 |
I'm using MicroK8s on Ubuntu.
I'm trying to run a simple "hello world" program, but I get the following error when a pod is created:
kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy
Here is my deployment.yaml file which I'm trying to apply:
apiVersion: v1
kind: Service
metadata:
  name: grpc-hello
spec:
  ports:
  - port: 80
    targetPort: 9000
    protocol: TCP
    name: http
  selector:
    app: grpc-hello
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grpc-hello
  template:
    metadata:
      labels:
        app: grpc-hello
    spec:
      containers:
      - name: esp
        image: gcr.io/endpoints-release/endpoints-runtime:1
        args: [
          "--http2_port=9000",
          "--backend=grpc://127.0.0.1:50051",
          "--service=hellogrpc.endpoints.octa-test-123.cloud.goog",
          "--rollout_strategy=managed",
        ]
        ports:
        - containerPort: 9000
      - name: python-grpc-hello
        image: gcr.io/octa-test-123/python-grpc-hello:1.0
        ports:
        - containerPort: 50051
Here is what I got when I described the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 31s default-scheduler Successfully assigned default/grpc-hello-66869cf9fb-kpr69 to azeem-ubuntu
Normal Started 30s kubelet, azeem-ubuntu Started container python-grpc-hello
Normal Pulled 30s kubelet, azeem-ubuntu Container image "gcr.io/octa-test-123/python-grpc-hello:1.0" already present on machine
Normal Created 30s kubelet, azeem-ubuntu Created container python-grpc-hello
Normal Pulled 12s (x3 over 31s) kubelet, azeem-ubuntu Container image "gcr.io/endpoints-release/endpoints-runtime:1" already present on machine
Normal Created 12s (x3 over 31s) kubelet, azeem-ubuntu Created container esp
Normal Started 12s (x3 over 30s) kubelet, azeem-ubuntu Started container esp
Warning MissingClusterDNS 8s (x10 over 31s) kubelet, azeem-ubuntu pod: "grpc-hello-66869cf9fb-kpr69_default(19c5a870-fcf5-415c-bcb6-dedfc11f936c)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
Warning BackOff 8s (x2 over 23s) kubelet, azeem-ubuntu Back-off restarting failed container
I researched this and found some answers, but none of them worked for me. I also created kube-dns for this, but I don't know why it is still not working. The kube-dns pods below are running, in the kube-system namespace.
NAME READY STATUS RESTARTS AGE
kube-dns-6dbd676f7-dfbjq 3/3 Running 0 22m
And here is what I applied to create kube-dns:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.152.183.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  upstreamNameservers: |-
    ["8.8.8.8", "8.8.4.4"]
  # Why set upstream ns: https://github.com/kubernetes/minikube/issues/2027
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google-containers/k8s-dns-kube-dns:1.15.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google-containers/k8s-dns-dnsmasq-nanny:1.15.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google-containers/k8s-dns-sidecar:1.15.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
What am I missing?
| You have not specified how you deployed kube-dns, but with MicroK8s it's recommended to use CoreDNS instead.
You should not deploy kube-dns or CoreDNS on your own; rather, you need to enable DNS using microk8s enable dns, which deploys CoreDNS and sets up the cluster DNS configuration for the kubelet.
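A minimal sketch of that, plus a check that the DNS pods come up afterwards:
$ microk8s enable dns
$ microk8s kubectl get pods -n kube-system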
| Kubernetes | 59,550,564 | 32 |
I am writing a Kubernetes controller.
Someone creates a custom resource via kubectl apply -f custom-resource.yaml. My controller notices the creation, and then creates a Deployment that pertains to the custom resource in some way.
I am looking for the proper way to set up the Deployment's ownerReferences field such that a deletion of the custom resource will result in a deletion of the Deployment. I understand I can do this like so:
ownerReferences:
- kind: <kind from custom resource>
apiVersion: <apiVersion from custom resource>
uid: <uid from custom resource>
controller: <???>
I'm unclear on whether this is case where I would set controller to true.
The Kubernetes reference documentation says (in its entirety):
If true, this reference points to the managing controller.
Given that a controller is running code, and an owner reference actually references another Kubernetes resource via matching uid, name, kind and apiVersion fields, this statement is nonsensical: a Kubernetes object reference can't "point to" code.
I have a sense that the documentation author is trying to indicate that—using my example—because the user didn't directly create the Deployment herself, it should be marked with some kind of flag indicating that a controller created it instead.
Is that correct?
The follow on question here is of course: OK, what behavior changes if controller is set to false here, but the other ownerReference fields are set as above?
| ownerReferences has two purposes:
Garbage collection: Refer to the answer of ymmt2005. Essentially all owners are considered for GC. Contrary to the accepted answer, the controller field has no impact on GC.
Adoption: The controller field prevents fighting over resources which are to be adopted. Consider a replica set. Usually, the replica set controller creates the pods. However, if there is a pod which matches the label selector it will be adopted by the replica set. To prevent two replica sets fighting over the same pod, the latter is given a unique controller by setting the controller to true. If a resource already has a controller it will not be adopted by another controller. Details are in the design proposal.
TLDR: The field controller is only used for adoption and not GC.
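Putting that together for the question's scenario, the Deployment's metadata would carry something like the following sketch (apiVersion, kind, name, and uid are placeholders for the custom resource's actual values; blockOwnerDeletion additionally keeps the owner from being removed while foreground cascading deletion is in progress):
ownerReferences:
- apiVersion: example.com/v1
  kind: MyCustomResource
  name: my-custom-resource
  uid: d9607e19-f88f-11e6-a518-42010a800195
  controller: true
  blockOwnerDeletion: true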
| Kubernetes | 51,068,026 | 32 |
In Nginx, I'm trying to define a variable which allows me to configure a sub-folder for all my location blocks. I did this:
set $folder '/test';

location $folder/ {
    [...]
}

location $folder/something {
    [...]
}
Unfortunately, this doesn't seem to work. While Nginx doesn't complain about the syntax, it returns a 404 when requesting /test/. If I write the folder in explicitly, it works. So how can I use variables in location blocks?
| You can't. Nginx doesn't really support variables in config files, and its developers mock everyone who asks for this feature to be added:
"[Variables] are rather costly compared to plain static configuration. [A] macro expansion and "include" directives should be used [with] e.g. sed + make or any other common template mechanism." http://nginx.org/en/docs/faq/variables_in_config.html
You should either write or download a little tool that will allow you to generate config files from placeholder config files.
Update The code below still works, but I've wrapped it all up into a small PHP program/library called Configurator also on Packagist, which allows easy generation of nginx/php-fpm etc config files, from templates and various forms of config data.
e.g. my nginx source config file looks like this:
location / {
    try_files $uri /routing.php?$args;
    fastcgi_pass unix:%phpfpm.socket%/php-fpm-www.sock;
    include %mysite.root.directory%/conf/fastcgi.conf;
}
And then I have a config file with the variables defined:
phpfpm.socket=/var/run/php-fpm.socket
mysite.root.directory=/home/mysite
And then I generate the actual config file using that. It looks like you're a Python guy, so a PHP based example may not help you, but for anyone else who does use PHP:
<?php

require_once('path.php');

// Map of template config files to their generated output paths.
$filesToGenerate = array(
    'conf/nginx.conf' => 'autogen/nginx.conf',
    'conf/mysite.nginx.conf' => 'autogen/mysite.nginx.conf',
    'conf/mysite.php-fpm.conf' => 'autogen/mysite.php-fpm.conf',
    'conf/my.cnf' => 'autogen/my.cnf',
);

$environment = 'amazonec2';

if ($argc >= 2){
    $environmentRequired = $argv[1];

    $allowedVars = array(
        'amazonec2',
        'macports',
    );

    if (in_array($environmentRequired, $allowedVars) == true){
        $environment = $environmentRequired;
    }
}
else{
    echo "Defaulting to [".$environment."] environment";
}

$config = getConfigForEnvironment($environment);

foreach($filesToGenerate as $inputFilename => $outputFilename){
    generateConfigFile(PATH_TO_ROOT.$inputFilename, PATH_TO_ROOT.$outputFilename, $config);
}

// Reads the ini file and wraps each key in % markers, e.g. phpfpm.socket -> %phpfpm.socket%
function getConfigForEnvironment($environment){
    $config = parse_ini_file(PATH_TO_ROOT."conf/deployConfig.ini", TRUE);

    $configWithMarkers = array();
    foreach($config[$environment] as $key => $value){
        $configWithMarkers['%'.$key.'%'] = $value;
    }

    return $configWithMarkers;
}

// Replaces every %placeholder% in the template with its configured value.
function generateConfigFile($inputFilename, $outputFilename, $config){
    $lines = file($inputFilename);
    if($lines === FALSE){
        echo "Failed to open [".$inputFilename."] for reading.";
        exit(-1);
    }

    $fileHandle = fopen($outputFilename, "w");
    if($fileHandle === FALSE){
        echo "Failed to open [".$outputFilename."] for writing.";
        exit(-1);
    }

    $search = array_keys($config);
    $replace = array_values($config);

    foreach($lines as $line){
        $line = str_replace($search, $replace, $line);
        fwrite($fileHandle, $line);
    }

    fclose($fileHandle);
}

?>
And then deployConfig.ini looks something like:
[global]
;global variables go here.
[amazonec2]
nginx.log.directory = /var/log/nginx
nginx.root.directory = /usr/share/nginx
nginx.conf.directory = /etc/nginx
nginx.run.directory = /var/run
nginx.user = nginx
[macports]
nginx.log.directory = /opt/local/var/log/nginx
nginx.root.directory = /opt/local/share/nginx
nginx.conf.directory = /opt/local/etc/nginx
nginx.run.directory = /opt/local/var/run
nginx.user = _www
| NGINX | 15,416,957 | 89 |
How do you determine when to use $request_uri vs $uri?
According to NGINX documentation, $request_uri is the original request (for example, /foo/bar.php?arg=baz includes arguments and can't be modified) but $uri refers to the altered URI.
If the URI doesn't change, does $uri = $request_uri?
Would it be incorrect or better or worse to use:
map $uri $new_uri {
# do something
}
vs
map $request_uri $new_uri {
# do something
}
| $uri is not equivalent to $request_uri.
The $uri variable is set to the URI that nginx is currently processing - but it is also subject to normalisation, including:
Removal of the ? and query string
Consecutive / characters are replace by a single /
URL encoded characters are decoded
The value of $request_uri is always the original URI and is not subject to any of the above normalisations.
Most of the time you would use $uri, because it is normalised. Using $request_uri in the wrong place can cause URL encoded characters to become doubly encoded.
Use $request_uri in a map directive, if you need to match the URI and its query string.
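As a sketch of that last point (the paths are made up), a map keyed on $request_uri can distinguish requests by their query string, which a map keyed on $uri cannot:
map $request_uri $new_uri {
    default               "";
    /old-page?ref=email   /new-page;
}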
| NGINX | 48,708,361 | 88 |
I am getting a 400 Bad Request (Request Header Or Cookie Too Large) from nginx with my Rails app. Restarting the browser fixes the issue. I am only storing a string id in my cookie, so it should be tiny.
Where can I find the nginx error logs? I looked at nano /opt/nginx/logs/error.log, but it doesn't have anything related.
I tried to set following and no luck:
location / {
    large_client_header_buffers 4 32k;
    proxy_buffer_size 32k;
}
nginx.conf
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    passenger_root /home/app/.rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19;
    passenger_ruby /home/app/.rvm/wrappers/ruby-1.9.3-p392/ruby;

    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;
    client_max_body_size 20M;

    server {
        listen 80;
        server_name localhost;
        root /home/app/myapp/current/public;
        passenger_enabled on;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        # location / {
        #     large_client_header_buffers 4 32k;
        #     proxy_buffer_size 32k;
        # }

        # location / {
        #     root html;
        #     index index.html index.htm;
        #     client_max_body_size 4M;
        #     client_body_buffer_size 128k;
        # }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;

    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen 443;
    #    server_name localhost;

    #    ssl on;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;

    #    ssl_session_timeout 5m;

    #    ssl_protocols SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;

    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
Here's my code storing the cookies. I used Firebug to check the stored session and found that New Relic and jQuery are storing cookies too; could this be why the cookie size is exceeded?
def current_company
  return if current_user.nil?
  session[:current_company_id] = current_user.companies.first.id if session[:current_company_id].blank?
  @current_company ||= Company.find(session[:current_company_id])
end
| It's just what the error says - Request Header Or Cookie Too Large. One of your headers is really big, and nginx is rejecting it.
You're on the right track with large_client_header_buffers. If you check the docs, you'll find it's only valid in http or server contexts. Bump it up to a server block and it will work.
server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}
By the way, the default buffer number and size is 4 and 8k, so your bad header must be the one that's over 8192 bytes. In your case, all those cookies (which combine to one header) are well over the limit. Those mixpanel cookies in particular get quite large.
| NGINX | 17,524,396 | 88 |
How do I log all the headers the client (browser) has sent in Nginx? I also want to log the response headers. Note that I am using nginx as reverse proxy.
After going through documentation, I understand that I can log a specific header, but I want to log all of the headers.
| There are two more options to log headers in nginx.
nginx njs scripting language can be used instead of lua (njs may be considered easier to install and to use and is somewhat more "official")
mirror directive (only for request headers)
Using njs to log request and response headers
njs can be installed from package repositories and is preinstalled in official nginx docker images. njs is available since at least 1.9.15 version of nginx (which is rather old), but it is better to use more recent versions.
After installation enable njs http module in nginx config:
load_module modules/ngx_http_js_module.so;
Headers may be logged to error or access log.
Logging to access log with njs
Decide which format to use (JSON, custom, base64...)
Create js module with function to convert headers structure to string (~3 lines of code)
Import this js module in nginx configuration (~1 line)
Declare a variable to use in log_format directive (~1 line)
Add this variable to log format (~1 line)
HTTP Request object has headersIn, headersOut fields in key value format, duplicate headers are merged in this fields and rawHeadersIn, rawHeadersOut which are array of arrays of raw headers.
Create js module, use json to serialize headers:
// /etc/nginx/headers.js
function headers_json(r) {
    return JSON.stringify(r.headersIn)
}

export default {headers_json};
Import js module, declare variable and add it to log_format:
http {
    js_import headers.js;
    js_set $headers_json headers.headers_json;

    # Using custom log format here
    log_format main '$remote_addr'
                    '\t$remote_user'
                    '\t$time_local'
                    '\t$request'
                    '\t$status'
                    '\t$headers_json';
Escaping in access log
By default strings in the access log are escaped, so you will get something like this:
# curl -H 'H: First' -H 'H: Second' localhost:8899
172.17.0.1 - 16/Apr/2021:08:46:43 +0000 GET / HTTP/1.1 200 {\x22Host\x22:\x22localhost:8899\x22,\x22User-Agent\x22:\x22curl/7.64.1\x22,\x22Accept\x22:\x22*/*\x22,\x22H\x22:\x22First,Second\x22}
You can use escape parameter in log_format directive to change how escaping is applied.
Example of escape=json output:
log_format main escape=json ...
{\"Host\":\"localhost:8899\",\"User-Agent\":\"curl/7.64.1\",\"Accept\":\"*/*\",\"H\":\"First,Second\"}
Another option is to wrap json in base64 encoding in javascript function:
function headers_json_base64(r) {
    return JSON.stringify(r.headersIn).toString('base64')
}
Logging to error log with njs
With njs you can use ngx.log or r.log (in older versions of njs the ngx object is not available) in a javascript function to log headers. The js function must be called explicitly for this to work, for example via the js_header_filter directive.
js module:
function headers_json_log(r) {
    return ngx.log(ngx.WARN, JSON.stringify(r.headersIn))
}

export default {headers_json_log};
Enable logging in location:
location /log/ {
    js_header_filter headers.headers_json_log;
    return 200;
}
For error log escaping is not applied, so you will get raw json:
2021/04/16 12:22:53 [warn] 24#24: *1 js: {"Host":"localhost:8899","User-Agent":"curl/7.64.1","Accept":"*/*","H":"First,Second"}
Logging to error log may be useful if you don't want to mess up your access logs or you need to log headers only for specific location (for specific location access_log directive and separate log_format can be used too)
Logging request headers with mirror directive
mirror directive can be used to mirror requests to different location. It may be more convenient than tcpdump especially when upstream traffic is encrypted and is a bit simpler than using njs.
mirror can be used only to capture request headers since response headers are returned independently.
mirror directive can be used server wide in http and server contexts or in location context.
# using mirror in server context
mirror /mirror;
mirror_request_body off;

location /mirror {
    # Use internal so that location is not available for direct requests
    internal;
    # Use small timeout not to wait for replies (this is not necessary)
    proxy_read_timeout 1;
    # Pass headers to logging server
    proxy_pass http://127.0.0.1:6677;
    # send original request uri in special header
    proxy_set_header X-Original-URI $request_uri;
}
To log headers, a simple HTTP server or just a netcat one-liner may be used:
nc -kl 6677 > /path/to/headers.log
Because netcat doesn't reply to nginx, nginx will fill up error log with timeout errors, this errors do not affect clients.
| NGINX | 24,380,123 | 87 |
I'm new to Nginx and I'm trying to get subdomains working.
What I would like to do is take my domain (let's call it example.com) and add:
sub1.example.com,
sub2.example.com, and also have
www.example.com available.
I know how to do this with Apache, but Nginx is being a real head scratcher.
I'm running Debian 6.
My current /etc/nginx/sites-enabled/example.com:
server {
    server_name www.example.com example.com;

    access_log /srv/www/www.example.com/logs/access.log;
    error_log /srv/www/www.example.com/logs/error.log;
    root /srv/www/www.example.com/public_html;

    location / {
        index index.html index.htm;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
    }
}
It is working to serve example.com and www.example.com.
I have tried to add a second server block in the same file like:
server {
    server_name www.example.com example.com;

    access_log /srv/www/www.example.com/logs/access.log;
    error_log /srv/www/www.example.com/logs/error.log;
    root /srv/www/www.example.com/public_html;

    server {
        server_name sub1.example.com;

        access_log /srv/www/example.com/logs/sub1-access.log;
        error_log /srv/www/example.com/logs/sub1-error.log;
        root /srv/www/example.com/sub1;
    }

    location / {
        index index.html index.htm;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME /srv/www/www.example.com/public_html$fastcgi_script_name;
    }
}
No luck. Any ideas? I'd super appreciate any feedback.
| The mistake is putting a server block inside a server block, you should close the main server block then open a new one for the sub domains
server {
    server_name example.com;
    # the rest of the config
}

server {
    server_name sub1.example.com;
    # sub1 config
}

server {
    server_name sub2.example.com;
    # sub2 config
}
| NGINX | 17,568,981 | 87 |
I am trying to pass off all calls to /api to my webservice but I keep getting 404s with the following config. Calls to / return index.html as expected. Does anyone know why?
upstream backend {
    server localhost:8080;
}

server {
    location /api {
        proxy_pass http://backend;
    }

    location / {
        root /html/dir;
    }
}
More info here
adept@HogWarts:/etc/nginx/sites-available$ curl -i localhost/api/authentication/check/user/email
HTTP/1.1 404 Not Found
Server: nginx/1.2.1
Date: Mon, 22 Apr 2013 22:49:03 GMT
Content-Length: 0
Connection: keep-alive
adept@HogWarts:/etc/nginx/sites-available$ curl -i localhost:8080/authentication/check/user/email
HTTP/1.1 200 OK
Content-Type: application/json
Date: Mon, 22 Apr 2013 22:49:20 GMT
Transfer-Encoding: chunked
{"user":["false"],"emailAddress":["false"]}
| This
location /api {
    proxy_pass http://backend;
}
Needs to be this
location /api/ {
    proxy_pass http://backend/;
}
With a URI part on proxy_pass (the trailing slash), nginx replaces the matched /api/ prefix with /, so the backend receives /authentication/check/user/email instead of /api/authentication/check/user/email, which is the path your webservice actually serves.
| NGINX | 16,157,893 | 87 |
I develop a new website and I want to use GridFS as storage for all user uploads, because it offers a lot of advantages compared to a normal filesystem storage.
Benchmarks with GridFS served by nginx indicate that it's not as fast as a normal filesystem served by nginx:
Benchmark with nginx
Is anyone out there, who uses GridFS already in a production environment, or would use it for a new project?
| I use GridFS at work on one of our servers, which is part of a price-comparison website with honorable traffic stats (around 25k visitors per day). The server doesn't have much RAM (2 GB), and even the CPU isn't really fast (Core 2 Duo 1.8 GHz), but it has plenty of storage space: 10 TB (SATA) in a RAID 0 configuration. The job the server is doing is very simple:
Each product on our price-comparer has an image (there are around 10 million products according to our product DB), and the server's job is to download the image, resize it, store it on GridFS, and deliver it to the visitor's browser... if it's not present in the grid... or... deliver it to the visitor's browser if it's already stored in the grid. So, this could be called a 'traditional CDN schema'.
We have stored and processed 4 million images on this server since it has been up and running. The resizing and storing is done by a simple PHP script... but for sure, a Python script or something like Java could be faster.
Current data size : 11.23g
Current storage size : 12.5g
Indices : 5
Index size : 849.65m
About the reliability: this is very reliable. The server doesn't get overloaded, the index size is OK, and queries are fast.
About the speed: for sure, it is not as fast as local file storage, maybe 10% slower, but fast enough to be used in real time even when the image needs to be processed, which in our case is very PHP dependent. Maintenance and development times have also been reduced: it became so simple to delete a single image or multiple images: just query the DB with a simple delete command. Another interesting thing: when we rebooted our old server with local file storage (so millions of files in thousands of folders), it sometimes hung for hours because the system was performing a file integrity check (this really took hours...). We do not have this problem any more with GridFS; our images are now stored in big MongoDB chunks (2 GB files).
So... to my mind... yes, GridFS is fast and reliable enough to be used in production.
| NGINX | 3,413,115 | 87 |
We use following nginx site configure file in our production env.
log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" $request_time';
server {
    root /srv/www/web;
    server_name *.test.com;
    access_log /var/log/nginx/xxx.test.com.access.log main;
Both "http://a.test.com/ping" and "http://b.test.com/ping" http request will be record in file xxx.test.com.access.log.
But there is a problem, nginx don't store "domain name" in xxx.test.com.access.log.
"http://a.test.com/ping" and "http://b.test.com/ping" share the same request "Get /ping".
How can I record "a.test.com" or "b.test.com" in nginx log?
| Try adding the $host variable in log_format:
log_format main '$http_x_forwarded_for - $remote_user [$time_local] "$host" "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" $request_time';
http://wiki.nginx.org/HttpCoreModule#.24host:
$host
This variable is equal to line Host in the header of request or name
of the server processing the request if the Host header is not
available.
This variable may have a different value from $http_host in such
cases: 1) when the Host input header is absent or has an empty value,
$host equals to the value of server_name directive; 2) when the value
of Host contains port number, $host doesn't include that port number.
$host's value is always lowercase since 0.8.17.
| NGINX | 21,135,719 | 85 |
This is a broad question but I'd like to get a canonical answer. I have been trying to deploy a site using gunicorn and nginx in Django. After reading tons of tutorials I have been successful but I can't be sure that the steps I followed are good enough to run a site without problems or maybe there are better ways to do it. That uncertainty is annoying.
That's why I'm looking for a very detailed and well explained answer for newbies. I don't want to explain too much what I know and what I don't know since this could skew the answers a bit and other people could benefit to a lesser degree from your answers. However, some things I'd like to see mentioned are:
What "setup" have you seen work best? I used virtualenv and moved my Django project inside this environment, however I have seen another setups where there is a folder for virtual environments and other for projects.
How can I setup things in a way that allows several sites to be hosted in a single server?
Why some people suggest using gunicorn_django -b 0.0.0.0:8000 and others suggest gunicorn_django -b 127.0.0.1:8000? I tested the latter in an Amazon EC2 instance but it didn't work while the former worked without problem.
What is the logic behind the config file of nginx? There are so many tutorials using drastically different configuration files that I'm confused on which one is better. For example, some people use alias /path/to/static/folder and others root /path/to/static/folder. Maybe you can share your preferred configuration file.
Why do we create a symlink between site-available and sites-enabled in /etc/nginx?
Some best practices are as always welcomed :-)
Thanks
|
What "setup" have you seen work best? I used virtualenv and moved my
django project inside this environment, however I have seen another
setups where there is a folder for virtual environments and other for
projects.
virtualenv is a way to isolate Python environments; as such, it doesn't have a large part to play at deployment - however during development and testing it is a requirement if not highly recommended.
The value you would get from virtualenv is that it allows you to make sure that the correct versions of libraries are installed for the application. So it doesn't matter where you stick the virtual envrionment itself. Just make sure you don't include it as part of the source code versioning system.
The file system layout is not critical. You will see lots of articles extolling the virtues of directory layouts and even skeleton projects that you can clone as a starting point. I feel this is more of a personal preference than a hard requirement. Sure its nice to have; but unless you know why, it doesn't add any value to your deployment process - so don't do it because some blog recommends it unless it makes sense for your scenario. For example - no need to create a setup.py file if you don't have a private PyPi server that is part of your deployment workflow.
How can I setup things in a way that allows several sites to be hosted
in a single server?
There are two things you need to do multiple site setups:
A server that is listening on the public IP on port 80 and/or port 443 if you have SSL.
A bunch of "processes" that are running the actual django source code.
People use nginx for #1 because its a very fast proxy and it doesn't come with the overhead of a comprehensive server like Apache. You are free to use Apache if you are comfortable with it. There is no requirement that "for mulitple sites, use nginx"; you just need a service that is listening on that port, knows how to redirect (proxy) to your processes running the actual django code.
For #2 there are a few ways to start these processes. gevent/uwsgi are the most popular ones. The only thing to remember here is do not use runserver in production.
Those are the absolute minimum requirements. Typically people add some sort of process manager to control all the "django servers" (#2) running. Here you'll see upstart and supervisor mentioned. I prefer supervisor as it doesn't need to take over the entire system (unlike upstart). However, again - this is not a hard requirement. You could perfectly well run a bunch of screen sessions and detach them. The downside is, should your server restart, you would have to relaunch the screen sessions.
Personally I would recommend:
Nginx for #1
Take your pick between uwsgi and gunicorn - I use uwsgi.
supervisor for managing the backend processes.
Individual system accounts (users) for each application you are hosting.
The reason I recommend #4 is to isolate permissions; again, not a requirement.
Why some people suggest using gunicorn_django -b 0.0.0.0:8000 and
others suggest gunicorn_django -b 127.0.0.1:8000? I tested the latter
in an Amazon EC2 instance but it didn't work while the former worked
without problem.
0.0.0.0 means "all IP addresses" - it's a meta address (that is, a placeholder address). 127.0.0.1 is a reserved address that always points to the local machine. That is why it's called "localhost". It is only reachable by processes running on the same system.
Typically you have the front end server (#1 in the list above) listening on the public IP address. You should explicitly bind the server to one IP address.
However, if for some reason you are on DHCP or you don't know what the IP address will be (for example, it's a newly provisioned system), you can tell nginx/apache/any other process to bind to 0.0.0.0. This should be a temporary stop-gap measure.
For production servers you'll have a static IP. If you have a dynamic IP (DHCP), then you can leave in 0.0.0.0. It is very rare that you'll have DHCP for your production machines though.
Binding gunicorn/uwsgi to this address is not recommended in production. If you bind your backend process (gunicorn/uwsgi) to 0.0.0.0, it may become accessible "directly", bypassing your front-end proxy (nginx/apache/etc); someone could just request http://your.public.ip.address:9000/ and access your application directly. This is especially likely if your front-end server (nginx) and your backend process (django/uwsgi/gevent) are running on the same machine.
You are free to do it if you don't want to have the hassle of running a front-end proxy server though.
What is the logic behind the config file of nginx? There are so many
tutorials using drastically different configuration files that I'm
confused on which one is better. For example, some people use "alias
/path/to/static/folder" and others "root /path/to/static/folder".
Maybe you can share your preferred configuration file.
First thing you should know about nginx is that it is not a webserver like Apache or IIS. It is a proxy. So you'll see different terms like 'upstream'/'downstream' and multiple "servers" being defined. Take some time and go through the nginx manual first.
There are lots of different ways to set up nginx; but here is one answer to your question on alias vs. root. root is an explicit directive that binds the document root (the "home directory") of nginx. This is the directory it will look at when you give a request without a path like http://www.example.com/
alias means "map a name to a directory". An aliased directory need not be a subdirectory of the document root.
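A minimal sketch of the difference (paths are illustrative; use one form or the other for a given location):
# with root, the full request URI is appended to the path:
location /static/ {
    root /var/www;            # /static/img.png -> /var/www/static/img.png
}
# with alias, the matched location prefix is replaced by the path:
location /static/ {
    alias /opt/assets/;       # /static/img.png -> /opt/assets/img.png
}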
Why do we create a symlink between site-available and sites-enabled in
/etc/nginx?
This is something unique to debian (and debian-like systems like ubuntu). sites-available lists configuration files for all the virtual hosts/sites on the system. A symlink from sites-enabled to sites-available "activates" that site or virtual host. It is a way to separate configuration files and easily enable/disable hosts.
| NGINX | 13,004,484 | 85 |
I want to proxy requests made to my Flask app to another web service running locally on the machine. I'd rather use Flask for this than our higher-level nginx instance so that we can reuse our existing authentication system built into our app. The more we can keep this "single sign on" the better.
Is there an existing module or other code to do this? Trying to bridge the Flask app through to something like httplib or urllib is proving to be a pain.
| I spent a good deal of time working on this same thing and eventually found a solution using the requests library that seems to work well. It even handles setting multiple cookies in one response, which took a bit of investigation to figure out. Here's the flask view function:
from dotenv import load_dotenv # pip package python-dotenv
import os
#
from flask import request, Response
import requests # pip package requests
load_dotenv()
API_HOST = os.environ.get('API_HOST'); assert API_HOST, 'Envvar API_HOST is required'
@api.route('/', defaults={'path': ''}, methods=["GET", "POST"]) # ref. https://medium.com/@zwork101/making-a-flask-proxy-server-online-in-10-lines-of-code-44b8721bca6
@api.route('/<path:path>', methods=["GET", "POST"]) # NOTE: the path converter also matches nested paths like a/b/c; better to specify which methods to be accepted. Otherwise, only GET will be accepted. Ref: https://flask.palletsprojects.com/en/3.0.x/quickstart/#http-methods
def redirect_to_API_HOST(path): #NOTE var :path will be unused as all path we need will be read from :request ie from flask import request
res = requests.request( # ref. https://stackoverflow.com/a/36601467/248616
method = request.method,
url = request.url.replace(request.host_url, f'{API_HOST}/'),
headers = {k:v for k,v in request.headers if k.lower() != 'host'}, # exclude 'host' header
data = request.get_data(),
cookies = request.cookies,
allow_redirects = False,
)
#region exclude some keys in :res response
excluded_headers = ['content-encoding', 'content-length', 'transfer-encoding', 'connection'] #NOTE we here exclude all "hop-by-hop headers" defined by RFC 2616 section 13.5.1 ref. https://www.rfc-editor.org/rfc/rfc2616#section-13.5.1
headers = [
(k,v) for k,v in res.raw.headers.items()
if k.lower() not in excluded_headers
]
#endregion exclude some keys in :res response
response = Response(res.content, res.status_code, headers)
return response
Update April 2021: excluded_headers should probably include all "hop-by-hop headers" defined by RFC 2616 section 13.5.1.
| NGINX | 6,656,363 | 85 |
Ok, I'm almost giving up on this, but how can I disable the caching from Nginx for JavaScript files? I'm using a docker container with Nginx. When I now change something in the JavaScript file, I need multiple reloads until the new file is there.
How do I know it's Nginx and not the browser/docker?
Browser: I used curl on the command line to simulate the request and had the same issues. Also, I'm using a CacheKiller plugin and have cache disabled in Chrome Dev Tools.
Docker: When I connect to the container's bash, and use cat after changing the file, I get the correct result immediately.
I changed my nginx.conf for the sites-enabled to this (which I found in another stackoverflow thread)
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|xml|html|htm)$ {
# clear all access_log directives for the current level
access_log off;
add_header Cache-Control no-cache;
# set the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years
expires 1s;
}
However, after rebuilding the containers (and making sure it's in the container with cat), it still didn't work. This here is the complete .conf
server {
server_name app;
root /var/www/app/web;
# Redirect to blog
location ~* ^/blog {
proxy_set_header Accept-Encoding "";
sub_filter 'https://testproject.wordpress.com/' '/blog/';
sub_filter_once off;
rewrite ^/blog/(.*) /$1 break;
rewrite ^/blog / break;
proxy_pass https://testproject.wordpress.com;
}
# Serve index.html only for exact root URL
location / {
try_files $uri /app_dev.php$is_args$args;
}
location ~ ^/(app|app_dev|config)\.php(/|$) {
fastcgi_pass php-upstream;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
# Prevents URIs that include the front controller. This will 404:
# http://domain.tld/app_dev.php/some-path
# Remove the internal directive to allow URIs like this
internal;
}
location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|xml|html|htm)$ {
# clear all access_log directives for the current level
access_log off;
add_header Cache-Control no-cache;
# set the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years
expires 1s;
}
error_log /var/log/nginx/app_error.log;
access_log /var/log/nginx/app_access.log;
}
| I have the following nginx virtual host (static content) for local development work to disable all browser caching:
server {
listen 8080;
server_name localhost;
location / {
root /your/site/public;
index index.html;
# kill cache
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache';
if_modified_since off;
expires off;
etag off;
}
}
No cache headers sent:
$ curl -I http://localhost:8080
HTTP/1.1 200 OK
Server: nginx/1.12.1
Date: Mon, 24 Jul 2017 16:19:30 GMT
Content-Type: text/html
Content-Length: 2076
Connection: keep-alive
Last-Modified: Monday, 24-Jul-2017 16:19:30 GMT
Cache-Control: no-store
Accept-Ranges: bytes
Last-Modified is always current time.
Note: nginx's $date_gmt format is not per the HTTP spec (see changing the format).
To disable caching for a particular file extension (e.g. JS as requested by the OP):
location ~* \.js$ {
expires -1;
}
See Nitai's answer below to expand the list of file extensions - using a non-capturing group regex pattern.
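For reference, a sketch of that non-capturing group pattern (the extension list here is illustrative):
location ~* \.(?:js|css|html)$ {
    expires -1;
}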
| NGINX | 40,243,633 | 84 |
With the release of TCP load balancing for the Nginx community version, I would like to mix OpenVPN and SSL pass-through data. The only way for Nginx to know how to route the traffic is via their domain name.
vpn1.app.com ─┬─► nginx at 10.0.0.1 ─┬─► vpn1 at 10.0.0.3
vpn2.app.com ─┤ ├─► vpn2 at 10.0.0.4
https.app.com ─┘ └─► https at 10.0.0.5
I have taken a look at the TCP guides and the module documentation, but it doesn't seem well referenced. If anyone can point me to the right direction, i'd be grateful.
Related question on ServerFault: Can a Reverse Proxy use SNI with SSL pass through?
| This is now possible with the addition of the ngx_stream_ssl_preread module added in Nginx 1.11.5 and the ngx_stream_map module added in 1.11.2.
This allows Nginx to read the TLS Client Hello and decide based on the SNI extension which backend to use.
stream {
map $ssl_preread_server_name $name {
vpn1.app.com vpn1_backend;
vpn2.app.com vpn2_backend;
https.app.com https_backend;
default https_default_backend;
}
upstream vpn1_backend {
server 10.0.0.3:443;
}
upstream vpn2_backend {
server 10.0.0.4:443;
}
upstream https_backend {
server 10.0.0.5:443;
}
upstream https_default_backend {
server 127.0.0.1:443;
}
server {
listen 10.0.0.1:443;
proxy_pass $name;
ssl_preread on;
}
}
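Note that this requires nginx to be built with the relevant stream modules; a quick way to check (standard nginx flag):
nginx -V 2>&1 | grep -o 'with-stream[a-z_]*'
If the output includes with-stream and with-stream_ssl_preread_module (or your build compiles stream support in by default), the configuration above will work.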
| NGINX | 34,741,571 | 84 |
I am having an intriguing problem where whenever I use add_header in my virtual host configuration on an ubuntu server running nginx with PHP and php-fpm it simply doesn't work and I have no idea what I am doing wrong. Here is my config file:
server {
listen 80; ## listen for ipv4; this line is default and implied
#listen [::]:80 default ipv6only=on; ## listen for ipv6
root /var/www/example.com/webroot/;
index index.html index.htm index.php;
# Make site accessible from http://www.example.com/
server_name www.example.com;
# max request size
client_max_body_size 20m;
# enable gzip compression
gzip on;
gzip_static on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
add_header 'Access-Control-Allow-Origin' '*';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken';
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE';
add_header PS 1
location / {
# First attempt to serve request as file, then
# as directory, then fall back to index.html
try_files $uri $uri/ /index.php?$query_string;
# Uncomment to enable naxsi on this location
# include /etc/nginx/naxsi.rules
}
location ~* \.(css|js|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|$
# 1 year -> 31536000
expires 500s;
access_log off;
log_not_found off;
add_header Pragma public;
add_header Cache-Control "max-age=31536000, public";
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
#fastcgi_pass 127.0.0.1:9000;
# With php5-fpm:
fastcgi_pass unix:/var/run/example.sock;
fastcgi_index index.php?$query_string;
include fastcgi_params;
# instead I want to get the value from Origin request header
}
# Deny access to hidden files
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
error_page 403 /403/;
}
server {
listen 80;
server_name example.com;
rewrite ^ http://www.example.com$request_uri? permanent;
}
I've tried adding the headers to the other location sections but the result is the same.
Any help appreciated!!
| There were two issues for me.
One is that nginx only processes the add_header directives of the deepest context that defines any. So if you have an add_header in the server context, then another in a nested location context, it will only process the add_header directives inside the location context - only the deepest context.
From the NGINX docs on add_header:
There could be several add_header directives. These directives are inherited from the previous level if and only if there are no add_header directives defined on the current level.
Second issue was that the location / {} block I had in place was actually sending nginx to the other location ~* (\.php)$ block (because it routes all requests through index.php, which makes nginx process them in the php block). So, my add_header directives inside the first location directive were useless, and it started working after I put all the directives I needed inside the php location directive.
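To illustrate the inheritance rule with a minimal sketch (header names are illustrative):
server {
    add_header X-Outer "from server";
    location /inherits {
        # no add_header here, so responses get X-Outer
    }
    location /overrides {
        # any add_header here suppresses all outer ones;
        # responses get only X-Inner, not X-Outer
        add_header X-Inner "from location";
    }
}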
Finally, here's my working configuration to allow CORS in the context of an MVC framework called Laravel (you could change this easily to fit any PHP framework that has index.php as a single entry point for all requests).
server {
root /path/to/app/public;
index index.php;
server_name test.dev;
# redirection to index.php
location / {
try_files $uri $uri/ /index.php?$query_string;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
# cors configuration
# whitelist of allowed domains, via a regular expression
# if ($http_origin ~* (http://localhost(:[0-9]+)?)) {
if ($http_origin ~* .*) { # yeah, for local development. tailor your regex as needed
set $cors "true";
}
# apparently, the following three if statements create a flag for "compound conditions"
if ($request_method = OPTIONS) {
set $cors "${cors}options";
}
if ($request_method = GET) {
set $cors "${cors}get";
}
if ($request_method = POST) {
set $cors "${cors}post";
}
# now process the flag
if ($cors = 'trueget') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
}
if ($cors = 'truepost') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
}
if ($cors = 'trueoptions') {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Max-Age' 1728000; # cache preflight value for 20 days
add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since';
add_header 'Content-Length' 0;
add_header 'Content-Type' 'text/plain charset=UTF-8';
return 204;
}
}
error_log /var/log/nginx/test.dev.error.log;
access_log /var/log/nginx/test.dev.access.log;
}
The gist for the above is at: https://gist.github.com/adityamenon/6753574
| NGINX | 18,450,310 | 84 |
I am using two system (both are Nginx load balancer and one act as backup).
I want to add and use few HTTP custom headers.
Below is my code for both:
upstream upstream0 {
#list of upstream servers
server backend:80;
server backup_load_balancer:777 backup;
#healthcheck
}
server {
listen 80;
#Add custom header about the port and protocol (http or https)
server_name _;
location / {
# is included since links are not allowed in the post
proxy_pass "http://upstream0;"
}
}
Backup system
server {
listen 777;
server_name _;
#doing some other extra stuff
#use port and protocol to direct
}
How can I achieve that?
| To add a header, add the add_header declaration to either the location block or the server block:
server {
add_header X-server-header "my server header content!";
location /specific-location {
add_header X-location-header "my specific-location header content!";
}
}
Note that an add_header declaration within a location block suppresses the add_header declarations of the outer server block - nginx only inherits headers from the outer level when the inner context defines no add_header directives of its own.
e.g. in the example above, responses from /specific-location carry only X-location-header, while responses from other paths carry X-server-header.
Obviously, replace the values with what you want to add. And that's all there is to it.
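A quick way to verify what is actually sent, assuming the server listens on localhost:
# per the inheritance rule above, this shows only X-location-header:
curl -sI http://localhost/specific-location
# while this shows X-server-header:
curl -sI http://localhost/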
| NGINX | 11,973,047 | 84 |
If I have the headers: X_HEADER1 & X_HEADER2, I want to reject all requests if either of these headers are not set or do not contain the correct values. What is the best way to do this?
Thanks
| You can use two IF statements either before or in the location block to inspect the headers and then return a 403 error code if it is present. Alternatively, you can use those IF statements to rewrite to a specific location block and deny all in that location:
if ($http_x_custom_header) {
return 403;
}
Reference:
https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/
https://nginx.org/en/docs/http/ngx_http_access_module.html
Adding more detail per comment/request:
if ($http_x_custom_header) {
return 405;
}
this looks to see if header exists
if you want to check to see if the correct values exist, then you first need to map the correct values to a variable.
map $http_x_header $is_ok {
default "0";
Value1 "1";
Value2 "1";
Value3 "1";
}
if ($is_ok = "0") {
    return 405;
}
this first maps the allowed header values to a flag, then rejects the request unless the value was one of the allowed ones.
EDIT: Removed semicolon after map block since this causes an error.
| NGINX | 18,970,620 | 83 |
I am running docker-nginx on ECS server. My nginx service is suddenly stopped because the proxy_pass of one of the servers got unreachable. The error is as follows:
[emerg] 1#1: host not found in upstream "dev-example.io" in /etc/nginx/conf.d/default.conf:988
My config file is as below:
server {
listen 80;
server_name test.com;
location / {
proxy_pass http://dev-example.io:5016/;
proxy_redirect off;
##proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
server {
listen 80 default_server;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/log/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
I have many servers in the config file; even if one server is down, I need nginx to keep running. Is there any way to fix it?
| Just adding a resolver did not resolve the issue in my case. But I was able to work around it by using a variable for the host.
Also, I guess it makes more sense to use Docker's DNS at 127.0.0.11 (this is a fixed IP).
Example:
server {
listen 80;
server_name test.com;
location / {
resolver 127.0.0.11;
set $example dev-example.io:5016;
proxy_pass http://$example;
}
}
I found the variable workaround on this page.
| NGINX | 42,720,618 | 82 |
How to overwrite default Content-Type in nginx? Currently when I request 01.dae file, there's
Content-Type: application/octet-stream;
And I want it to be
Content-Type: application/xml;
I tried something like
location ~* \.dae$ {
types { };
default_type application/xml;
}
and
location ~* \.dae$ {
add_header Content-Type application/xml;
}
but nothing works.
| In case you have no file extension:
location ~ something {
default_type application/xml;
}
Nginx docs for default_type
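For requests that do have a file extension, you can also remap just that extension instead of clearing the whole table - a sketch that overrides the inherited mime.types entry for .dae files only:
location ~* \.dae$ {
    types { application/xml dae; }
}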
In case you are setting up let's encrypt certificate with a client which creates http server: How to use golang lego let's encrypt client behind nginx?
| NGINX | 19,629,930 | 82 |
I am trying to understand how Nginx's try_files directive works. Nginx is running on my server and serving up the default page located at /usr/share/nginx/html/index.html.
However, I have a simple HTML page located in the filesystem at /var/www/test/index.html. The following config file is not causing that file to get served. I'd like to understand why not, and what change I need to make so that it will be served.
Here is the relevant portion of default.conf:
server {
listen 80;
server_name localhost;
root /var/www;
try_files /test/index.html /;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
| a very common try_files line which can be applied on your condition is
location / {
try_files $uri $uri/ /test/index.html;
}
you probably understand the first part, location / matches all locations, unless it's matched by a more specific location, like location /test for example
The second part (the try_files) means: when you receive a URI that's matched by this block, try $uri first. For example, for http://example.com/images/image.jpg nginx will check if there's a file inside /images called image.jpg; if found, it will serve it first.
Second condition is $uri/ which means: if you didn't find the first condition $uri, try the URI as a directory. For example, for http://example.com/images/, nginx will first check if a file called images exists; it won't find it, so it goes to the second check $uri/ and sees if a directory called images exists; if so, it will try serving it.
Side note: if you don't have autoindex on you'll probably get a 403 forbidden error, because directory listing is forbidden by default.
EDIT: I forgot to mention that if you have index defined, nginx will try to check if the index exists inside this folder before
trying directory listing.
Third condition /test/index.html is considered a fallback option (you need to use at least 2 options: one and a fallback). You can use as many as you like (I've never read of a restriction); nginx will look for the file index.html inside the folder test and serve it if it exists.
If the third condition fails too, then nginx will serve the 404 error page.
Also there's something called named locations, like this
location @error {
}
You can call it with try_files like this
try_files $uri $uri/ @error;
TIP: If you only have 1 condition you want to serve, like for example inside folder images you only want to either serve the image or go to 404 error, you can write a line like this
location /images {
try_files $uri =404;
}
which means either serve the file or serve a 404 error, you can't use only $uri by it self without =404 because you need to have a fallback option.
You can also choose which ever error code you want, like for example:
location /images {
try_files $uri =403;
}
This will show a forbidden error if the image doesn't exist; or if you use 500 it will show a server error, etc.
| NGINX | 17,798,457 | 82 |
I have nginx acting as a reverse proxy to apache. I now need to add a new subdomain
that will serve files from another directory, but at the same time I want all location and proxy_pass directives that I have for the default host to apply to the subdomain also.
I know that if I copy the rules from the default host to the new subdomain it will work, but is there a way for the subdomain to inherit the rules?
Below is a sample configuration
server {
listen 80;
server_name www.somesite.com;
access_log logs/access.log;
error_log logs/error.log error;
location /mvc {
proxy_pass http://localhost:8080/mvc;
}
location /assets {
alias /var/www/html/assets;
expires max;
}
... a lot more locations
}
server {
listen 80;
server_name subdomain.somesite.com;
location / {
root /var/www/some_dir;
index index.html index.htm;
}
}
Thanks
| You could move the common parts to another configuration file and include from both server contexts. This should work:
server {
listen 80;
server_name server1.example;
...
include /etc/nginx/include.d/your-common-stuff.conf;
}
server {
listen 80;
server_name another-one.example;
...
include /etc/nginx/include.d/your-common-stuff.conf;
}
Edit: Here's an example that's actually copied from my running server. I configure my basic server settings in /etc/nginx/sites-enabled (normal stuff for nginx on Ubuntu/Debian). For example, my main server bunkus.org's configuration file is in /etc/nginx/sites-enabled and it looks like this:
server {
listen 80 default_server;
listen [2a01:4f8:120:3105::101:1]:80 default_server;
include /etc/nginx/include.d/all-common;
include /etc/nginx/include.d/bunkus.org-common;
include /etc/nginx/include.d/bunkus.org-80;
}
server {
listen 443 default_server;
listen [2a01:4f8:120:3105::101:1]:443 default_server;
include /etc/nginx/include.d/all-common;
include /etc/nginx/include.d/ssl-common;
include /etc/nginx/include.d/bunkus.org-common;
include /etc/nginx/include.d/bunkus.org-443;
}
As an example here's the /etc/nginx/include.d/all-common file that's included from both server contexts:
index index.html index.htm index.php .dirindex.php;
try_files $uri $uri/ =404;
location ~ /\.ht {
deny all;
}
location = /favicon.ico {
log_not_found off;
access_log off;
}
location ~ /(README|ChangeLog)$ {
types { }
default_type text/plain;
}
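After editing the shared include files, a quick sanity check before reloading (standard nginx commands):
sudo nginx -t && sudo service nginx reload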
| NGINX | 9,905,378 | 82 |
I issued a nginx -s stop and after that I got this error when trying to reload it.
[error]: invalid PID number "" in "/var/run/nginx.pid"
That /var/run/nginx/pid file is empty atm.
What do I need to do to fix it?
| nginx -s reload is only used to tell a running nginx process to reload its config. After a stop, you don't have a running nginx process to send a signal to. Just run nginx (possibly with a -c /path/to/config/file)
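For example:
sudo nginx -c /etc/nginx/nginx.conf   # start a fresh master process
sudo nginx -s reload                  # valid again once nginx is running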
| NGINX | 7,646,972 | 82 |
With the base install of nginx, your sites-available folder has just one file: default
how does the sites-available folder work and how would I use it to host multiple (separate) websites?
| Just to add another approach, you can use a separate file for each virtual domain or site you're hosting.
You can use a copy of default as a starting point for each one and customize for each site.
Then create symlinks in sites-enabled. In this way you can take sites up and down just by adding or removing a symlink and issuing a service nginx reload.
You can get creative and use this method to redirect sites to a maintenance mode page while you are doing site maintenance.
So the structure looks like this:
/sites-available/ (you can use obvious file names like this)
|
|-> a.mysite.com
|-> b.mysite.com
|-> someOtherSite.com
/sites-enabled/ (these are just symlinks to the real files in /sites-available)
|
|-> a.mysite.com
|-> b.mysite.com
Notice that since only the first two entries are symlinked in sites-enabled, the third entry, someOtherSite.com, is therefore offline.
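A sketch of the enable/disable cycle, assuming the Debian/Ubuntu layout under /etc/nginx:
sudo ln -s /etc/nginx/sites-available/a.mysite.com /etc/nginx/sites-enabled/
sudo service nginx reload    # site up
sudo rm /etc/nginx/sites-enabled/a.mysite.com
sudo service nginx reload    # site down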
| NGINX | 11,693,135 | 81 |
Nginx, Passenger, and Rails are running beautifully on my Linode. Before I launch, I'd like to restrict access so only my IP can view the site.
I've tried to deny access to all, and allow access to only my IP in Nginx. It does deny access to all, but I can't get the allow to work. I have checked to ensure the IP address I'm specifying in nginx.conf is my correct public ip.
Here's my nginx.conf. I've restarted nginx after editing the file, and tested some other changes which worked as expected (for instance, I removed deny all and was able to access the site, as expected).
What am I doing wrong?
http {
passenger_root /path/to/passenger-3.0.11;
passenger_ruby /path/to/ruby;
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
server {
listen 80;
server_name www.foo.bar;
root /path/to/rails/public/;
passenger_enabled on;
location / {
allow my.public.ip.here;
deny all;
}
}
}
modify your nginx.conf so that the root and passenger_enabled directives move inside the location block alongside the access rules:
server {
listen 80;
server_name www.foo.bar;
location / {
root /path/to/rails/public/;
passenger_enabled on;
allow my.public.ip.here;
deny all;
}
}
| NGINX | 8,438,867 | 81 |
I have a node.js server running behind an nginx proxy. node.js is running an HTTP 1.1 (no SSL) server on port 3000. Both are running on the same server.
I recently set up nginx to use HTTP2 with SSL (h2). It seems that HTTP2 is indeed enabled and working.
However, I want to know whether the fact that the proxy connection (nginx <--> node.js) is using HTTP 1.1 affects performance. That is, am I missing the HTTP2 benefits in terms of speed because my internal connection is HTTP 1.1?
| In general, the biggest immediate benefit of HTTP/2 is the speed increase offered by multiplexing for the browser connections which are often hampered by high latency (i.e. slow round trip speed). These also reduce the need (and expense) of multiple connections which is a work around to try to achieve similar performance benefits in HTTP/1.1.
For internal connections (e.g. between webserver acting as a reverse proxy and back end app servers) the latency is typically very, very, low so the speed benefits of HTTP/2 are negligible. Additionally each app server will typically already be a separate connection so again no gains here.
So you will get most of your performance benefit from just supporting HTTP/2 at the edge. This is a fairly common set up - similar to the way HTTPS is often terminated on the reverse proxy/load balancer rather than going all the way through.
However there are potential benefits to supporting HTTP/2 all the way through. For example it could allow server push all the way from the application. There are also potential benefits from reduced packet size for that last hop, due to the binary nature of HTTP/2 and header compression. Though, like latency, bandwidth is typically less of an issue for internal connections, so the importance of this is arguable. Finally some argue that a reverse proxy does less work connecting an HTTP/2 connection to an HTTP/2 connection than it would to an HTTP/1.1 connection, as there is no need to convert one protocol to the other, though I'm sceptical if that's even noticeable since they are separate connections (unless it's acting simply as a TCP pass-through proxy). So, to me, the main reason for end-to-end HTTP/2 is to allow end-to-end Server Push, but even that is probably better handled with HTTP Link Headers and 103-Early Hints due to the complications in managing push across multiple connections, and I'm not aware of any HTTP proxy server that would support this (few enough support HTTP/2 at the backend, never mind chaining HTTP/2 connections like this), so you'd need a layer-4 load balancer forwarding TCP packets rather than chaining HTTP requests - which brings other complications.
For now, while servers are still adding support and server push usage is low (and still being experimented on to define best practice), I would recommend only to have HTTP/2 at the end point. Nginx also doesn't, at the time of writing, support HTTP/2 for ProxyPass connections (though Apache does), and has no plans to add this, and they make an interesting point about whether a single HTTP/2 connection might introduce slowness (emphasis mine):
Is HTTP/2 proxy support planned for the near future?
Short answer:
No, there are no plans.
Long answer:
There is almost no sense to implement it, as the main HTTP/2 benefit
is that it allows multiplexing many requests within a single
connection, thus [almost] removing the limit on number of
simalteneous requests - and there is no such limit when talking to
your own backends. Moreover, things may even become worse when using
HTTP/2 to backends, due to single TCP connection being used instead
of multiple ones.
On the other hand, implementing HTTP/2 protocol and request
multiplexing within a single connection in the upstream module will
require major changes to the upstream module.
Due to the above, there are no plans to implement HTTP/2 support in
the upstream module, at least in the foreseeable future. If you
still think that talking to backends via HTTP/2 is something needed -
feel free to provide patches.
Finally, it should also be noted that, while browsers require HTTPS for HTTP/2 (h2), most servers don't and so could support this final hop over HTTP (h2c). So there would be no need for end to end encryption if that is not present on the Node part (as it often isn't). Though, depending where the backend server sits in relation to the front end server, using HTTPS even for this connection is perhaps something that should be considered if traffic will be travelling across an unsecured network (e.g. CDN to origin server across the internet).
EDIT AUGUST 2021
HTTP/1.1 being text-based rather than binary does make it vulnerable to various request smuggling attacks. At Defcon 2021 PortSwigger demonstrated a number of real-life attacks, mostly related to issues when downgrading front-end HTTP/2 requests to back-end HTTP/1.1 requests. These could probably mostly be avoided by speaking HTTP/2 all the way through, but given the current level of support for HTTP/2 backends in front-end servers and CDNs, it seems it'll take a long time for this to become common, and having front-end HTTP/2 servers ensure these attacks aren't exploitable seems like the more realistic solution.
| NGINX | 41,637,076 | 80 |
Recently I installed the latest version of Nginx and looks like I'm having hard time running PHP with it.
Here is the configuration file I'm using for the domain:
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.php;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
location ~ \.php$ {
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}
}
Here is the error I'm getting on the error log file:
FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream
| Try another *fastcgi_param* something like
fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
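The underlying problem is that SCRIPT_FILENAME was built from /scripts, a directory that doesn't exist here, so PHP-FPM could not locate the script. A more portable sketch uses $document_root instead, assuming the PHP files actually live under the configured root:
location ~ \.php$ {
    root /usr/share/nginx/html;   # where the .php files really are
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}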
| NGINX | 17,808,787 | 80 |
Configuration
Ubuntu Server 11.10 64 bit
Amazon AWS, Ec2, hosted on the cloud
t1.micro instance
Before I write anything else, I'd like to state that I've checked both nginx 502 bad gateway and Nginx + PHP-FPM 502 Bad Gateway threads, which unfortunately haven't helped me in this regard.
The issue appears to be rather common: a misconfiguration of nginx or php-fpm can lead to a 502 Bad Gateway error, which is something that I haven't been able to get rid of. Note that this appears even when I go to my domain root, without specifying any particular directory.
I'm running an Amazon EC2 webserver, with port 9000 enabled, port 80 open, etc.
The question in particular is, how can I get rid of this nasty error? Or, better yet, how can I get php5-fpm to actually work.
What I Have Attempted so Far
Mostly consistent editing of configuration files, notably php-fpm.conf and nginx.conf.
i. php-fpm.conf
I've added the following, which hasn't quite helped much:
;;;;;;;;;;;;;
; Fpm Start ;
;;;;;;;;;;;;;
;pm.start_servers = 20
;pm.min_spare_servers = 5
;pm.max_spare_servers = 35
Now, afterward I tried including my configuration files:
include=/etc/php5/fpm/*.conf
Which only screwed me even further.
Full Configuration
;;;;;;;;;;;;;;;;;;;;;
; FPM Configuration ;
;;;;;;;;;;;;;;;;;;;;;
; All relative paths in this configuration file are relative to PHP's install
; prefix (/usr). This prefix can be dynamicaly changed by using the
; '-p' argument from the command line.
; Include one or more files. If glob(3) exists, it is used to include a bunch of
; files from a glob(3) pattern. This directive can be used everywhere in the
; file.
; Relative path can also be used. They will be prefixed by:
; - the global prefix if it's been set (-p arguement)
; - /usr otherwise
;include=/etc/php5/fpm/*.conf
;;;;;;;;;;;;;;;;;;
; Global Options ;
;;;;;;;;;;;;;;;;;;
[global]
; Pid file
; Note: the default prefix is /var
; Default Value: none
pid = /var/run/php5-fpm.pid
; Error log file
; Note: the default prefix is /var
; Default Value: log/php-fpm.log
error_log = /var/log/php5-fpm.log
; Log level
; Possible Values: alert, error, warning, notice, debug
; Default Value: notice
log_level = notice
; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
;emergency_restart_threshold = 0
; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated. This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
emergency_restart_interval = 0
; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
;process_control_timeout = 0
; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging.
; Default Value: yes
daemonize = no
;;;;;;;;;;;;;
; Fpm Start ;
;;;;;;;;;;;;;
;pm.start_servers = 20
;pm.min_spare_servers = 5
;pm.max_spare_servers = 35
;;;;;;;;;;;;;;;;;;;;
; Pool Definitions ;
;;;;;;;;;;;;;;;;;;;;
; Multiple pools of child processes may be started with different listening
; ports and different management options. The name of the pool will be
; used in logs and stats. There is no limitation on the number of pools which
; FPM can handle. Your system will tell you anyway :)
; To configure the pools it is recommended to have one .conf file per
; pool in the following directory:
include=/etc/php5/fpm/pool.d/*.conf
ii. nginx.conf
In all honesty this configuration is a smattering of a few websites I've visited, but I can tell you that before this 502 Bad Gateway business, the server was running fine (without PHP working. Period.).
The issue primarily lies in the fact that something is terribly, terribly wrong. And now, when I try to do a service php5-fpm restart, it hangs in what I'm guessing is an infinite loop or something, which I can't even CTRL-C out of.
Full Configuration
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;
events {
worker_connections 64;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush off;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
server {
listen 80;
server_name ec2-xx-xx-xx-xx.compute-x.amazonaws.com;
location ~ ^(.+\.php)(.*)$ {
root /home/wayvac/public;
fastcgi_pass unix:/var/run/php5-fpm.pid;
#fastcgi_pass 127.0.0.1:9000; #Un-comment this and comment "fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;" if you are not using php-fpm.
fastcgi_index index.php;
set $document_root2 $document_root;
if ($document_root2 ~ "^(.*\\\\).*?[\\\\|\/]\.\.\/(.*)$") { set $document_root2 $1$2; }
if ($document_root2 ~ "^(.*\\\\).*?[\\\\|\/]\.\.\/(.*)$") { set $document_root2 $1$2; }
if ($document_root2 ~ "^(.*\\\\).*?[\\\\|\/]\.\.\/(.*)$") { set $document_root2 $1$2; }
if ($document_root2 ~ "^(.*\\\\).*?[\\\\|\/]\.\.\/(.*)$") { set $document_root2 $1$2; }
if ($document_root2 ~ "^(.*\\\\).*?[\\\\|\/]\.\.\/(.*)$") { set $document_root2 $1$2; }
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_param SCRIPT_FILENAME $document_root2$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param PATH_TRANSLATED $document_root2$fastcgi_path_info;
include fastcgi_params;
fastcgi_param DOCUMENT_ROOT $document_root2;
}
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location / {
root /home/wayvac/public;
index index.html index.htm index.php;
}
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
# Some basic cache-control for static files to be sent to the browser
expires max;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
#include drop.conf;
#include php.conf;
}
}
| If anyone finds this page by encountering the same problem I had, I found the answer here.
For those of you who can't be bothered to click and work it out for themselves... ;)
The Condition:
Ubuntu or Debian server with NGINX and PHP 5.3 works fine but upgrading PHP to 5.4 gives 502 Bad Gateway errors. Looking for services running on port 9000 (typically running netstat -lp or similar) returns nothing.
The fix:
Open /etc/php5/fpm/pool.d/www.conf and make a note of the 'listen' parameter (in my case /var/run/php5-fpm.sock):
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses on a
; specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /var/run/php5-fpm.sock
and replace the fastcgi_pass variable in your vhost with the location you just noted.
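A quick way to confirm what PHP-FPM is actually listening on (standard tools; the socket path comes from the pool config above):
sudo ls -l /var/run/php5-fpm.sock
sudo netstat -lnp | grep php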
So this sample symfony2 configuration (taken from here):
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
location ~ ^/(app|app_dev)\.php(/|$) {
fastcgi_pass 127.0.0.1:9000;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
becomes this:
# pass the PHP scripts to FastCGI server at /var/run/php5-fpm.sock
location ~ ^/(app|app_dev)\.php(/|$) {
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_split_path_info ^(.+\.php)(/.*)$;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param HTTPS off;
}
Then restart nginx:
sudo /etc/init.d/nginx restart
Note: replace ~ ^/(app|app_dev)\.php(/|$) { with ~ ^/index\.php(/|$) { if you're not on SF2
Hope this saves someone a little bit of time :)
Edit
Of course, you could change the listen = /var/run/php5-fpm.sock to listen = 127.0.0.1:9000 in /etc/php5/fpm/pool.d/www.conf then restart php5-fpm (which would save you from having to change your vhosts), but you have to assume they changed php5-fpm to run through a socket rather than listening on port 9000 for a reason.
Edit2
If you're still experiencing 502 error see this answer.
| NGINX | 10,003,978 | 80 |
Im looking for something better than sudo restart projectname every time I issue a git pull origin master, which pulls down my latest changes to a Django project. This restart command, I believe, is related to Upstart, which I use to start/top my Gunicorn server process.
This restart causes a brief outage. Users hitting the web server (nginx) will get a 500, because Gunicorn is still restarting. In fact, it seems to restart instantly, but it takes a few seconds for pages to load.
Any ideas on how to make this seamless? Ideally, I'd like to issue my git pull and Gunicorn reloads automatically.
| You can tell Gunicorn to reload gracefully using the HUP signal like so:
kill -HUP <pid>
(see the FAQ for details)
I use Supervisor to control my Gunicorn server, which allows me to use this (slightly hacky) way of reloading Gunicorn after a deploy:
supervisorctl status gunicorn | sed "s/.*[pid ]\([0-9]\+\)\,.*/\1/" | xargs kill -HUP
You could obviously achieve something similar with pidof, or ps.
This is actually run from a Fabric script, so I don't even have to log on to the server at all.
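If you're not using Supervisor, a couple of plain-shell alternatives (the --pid file path is illustrative and assumes you start Gunicorn with --pid):
kill -HUP $(cat /var/run/gunicorn.pid)
# or, matching on the process name:
pkill -HUP -f gunicorn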
| NGINX | 9,881,819 | 80 |
My docker compose configs look like this:
docker-compose.yml
version: '3.5'
services:
nginx:
ports:
- 8080:8080
docker-compose.prod.yml
version: '3.5'
services:
nginx:
ports:
- 80:80
Now, when I run command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml up the nginx exposes on host machine two ports: 8000 and 80, because it merges ports properties:
version: '3.5'
services:
nginx:
ports:
- 8080:8080
- 80:80
Is there a way to override it? I want to expose only port 80.
| This behaviour is documented at https://docs.docker.com/compose/extends/#adding-and-overriding-configuration
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose concatenates both sets of values
Since the ports will be the concatenation of the ports in all your compose files, I would suggest creating a new docker-compose.dev.yml file which contains your development port mappings, removing them from the base docker-compose.yml file.
As Nikson says, you can name this docker-compose.override.yml to apply your development configuration automatically without chaining the docker-compose files. docker-compose.override.yml will not be applied if you manually specify another override file (e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml)
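A sketch of that layout (file contents are illustrative):
# docker-compose.yml - shared config, no ports
version: '3.5'
services:
  nginx:
    image: nginx
# docker-compose.override.yml - applied automatically in development
version: '3.5'
services:
  nginx:
    ports:
      - 8080:8080
# docker-compose.prod.yml
version: '3.5'
services:
  nginx:
    ports:
      - 80:80
With this split, docker-compose up exposes only 8080 in development, while docker-compose -f docker-compose.yml -f docker-compose.prod.yml up exposes only 80.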
| NGINX | 48,851,190 | 79 |
As my title, here is the config file located in conf.d/api-server.conf
server {
listen 80;
server_name api.localhost;
location / {
add_header 'Access-Control-Allow-Origin' 'http://api.localhost';
add_header 'Access-Control-Allow_Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
proxy_redirect off;
proxy_set_header host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-forward-for $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:3000;
}
}
The nginx.conf file stay the same as default.
After I send request to api.localhost (api.localhost/admin/login), I still receive 405 error:
XMLHttpRequest cannot load http://api.localhost/admin/login. Response
to preflight request doesn't pass access control check: No 'Access-
Control-Allow-Origin' header is present on the requested resource.
Origin 'http://admin.localhost:3000' is therefore not allowed access.
The response had HTTP status code 405.
The issue is that the add_header directives of the parent location / are not inherited into the if block - the if block adds headers of its own, which suppresses the outer ones - so the preflight response never carries the CORS headers. If you check the preflight response headers it would be
HTTP/1.1 204 No Content
Server: nginx/1.13.3
Date: Fri, 01 Sep 2017 05:24:04 GMT
Connection: keep-alive
Access-Control-Max-Age: 1728000
Content-Type: text/plain charset=UTF-8
Content-Length: 0
And that gives no CORS headers at all. So there are two possible fixes. Either copy the add_header directives inside the if block as well:
server {
listen 80;
server_name api.localhost;
location / {
add_header 'Access-Control-Allow-Origin' 'http://api.localhost';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Allow-Origin' 'http://api.localhost';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
proxy_redirect off;
proxy_set_header host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-forward-for $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:3000;
}
}
Or you can move them outside the location block, so every response carries the headers:
server {
listen 80;
server_name api.localhost;
add_header 'Access-Control-Allow-Origin' 'http://api.localhost';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
location / {
if ($request_method = 'OPTIONS') {
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}
proxy_redirect off;
proxy_set_header host $host;
proxy_set_header X-real-ip $remote_addr;
proxy_set_header X-forward-for $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:3000;
}
}
If you only want to allow CORS for certain locations in your config, like /api, then you should create a template conf with your headers:
add_header 'Access-Control-Allow-Origin' 'http://api.localhost';
add_header 'Access-Control-Allow-Credentials' 'true';
add_header 'Access-Control-Allow-Headers' 'Authorization,Accept,Origin,DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Content-Range,Range';
add_header 'Access-Control-Allow-Methods' 'GET,POST,OPTIONS,PUT,DELETE,PATCH';
and then use
include conf.d/corsheaders.conf;
in your OPTIONS block and /api block. So CORS is only allowed for /api. If you don't care which locations get CORS, then you can use the second approach of moving the CORS headers to the server block.
| NGINX | 45,986,631 | 79 |
Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time.
If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests?
Does it make sense to not use Nginx either, just running bare Flask app on a port?
| When you "run Flask" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable.
The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of a HTTP server.
Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.
The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).
Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.
| NGINX | 38,982,807 | 79 |
In my apache configuration I have the following simple rewrite rule which
unless file exists will rewrite to index.php
on the urls you never see the file extension (.php)
how can I rewrite this in nginx?
#
# Redirect all to index.php
#
RewriteEngine On
# if a directory or a file exists, use it directly
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} (/[^.]*|\.)$ [NC]
RewriteRule .* index.php [L]
Here's how my nginx server block looks like now, but it doesn't work :(
root /home/user/www;
index index.php;
# Make site accessible from http://localhost/
server_name some-domain.dev;
###############################################################
# exclude /favicon.ico from logs
location = /favicon.ico {
log_not_found off;
access_log off;
}
##############################################################
# Disable logging for robots.txt
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
##############################################################
# Deny all attempts to access hidden files such as
# .htaccess, .htpasswd, .DS_Store (Mac).
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
##############################################################
#
location / {
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root/index.php$args;
fastcgi_pass 127.0.0.1:9000;
}
###############################################################
# serve static files directly
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
access_log off;
expires 30d;
}
###############################################################
# redirect server error pages to the static page /50x.html
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
# With php5-cgi alone:
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
}
| I have tried this and succeeded to get my index page.
When I have added this code in my site configuration file:
location / {
try_files $uri $uri/ /index.php;
}
Inside the configuration file itself it is explained that these are the configured steps
First attempt to serve request as file,
then as directory,
then fall back to index.html
In my case it is index.php, as I am serving the page through PHP code.
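If your front controller also needs the original query string, a common variant using nginx's built-in $args variable is:
location / {
    try_files $uri $uri/ /index.php?$args;
}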
| NGINX | 12,924,896 | 79 |
I am trying to modify the Nginx config file to remove a "rewrite".
Currently, I have this config file:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 80;
server_name amc.local;
return 301 https://$host:8443/index.html;
}
}
Now I want to reload this config file, I tried
nginx -s reload
nginx -c <conf file>
nginx -s stop/start
In the log file, there is the line
2014/01/22 11:25:25 [notice] 1310#0: signal process started
but the modifications are not loaded.
| Maybe you're not doing it as root?
Try sudo nginx -s reload, if it still doesn't work, you might want to try sudo pkill -HUP nginx.
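It's also worth validating the file first, since nginx keeps running with the old configuration when the new one fails to parse:
sudo nginx -t && sudo nginx -s reload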
| NGINX | 21,292,533 | 78 |
I'm trying to include $remote_addr or $http_remote_addr on my proxy_pass without success. The rewrite rule works:
location ^~ /freegeoip/ {
rewrite ^ http://freegeoip.net/json/$remote_addr last;
}
The proxy_pass without the $remote_addr works, but freegeoip does not read the x-Real-IP:
location ^~ /freegeoip/ {
proxy_pass http://freegeoip.net/json/;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
}
Then, I'm adding the ip to the end of the request:
location ^~ /freegeoip/ {
proxy_pass http://freegeoip.net/json/$remote_addr;
}
but nginx reports this error:
no resolver defined to resolve freegeoip.net
If the proxy_pass statement has no variables in it, then nginx will resolve the hostname with the system resolver during startup or reload and will cache that value permanently.
set $originaddr http://origin.example.com;
proxy_pass $originaddr;
# or:
proxy_pass http://origin.example.com$request_uri;
then nginx will use a built-in resolver, and the resolver directive must be present. resolver is probably a misnomer; think of it as "which DNS server will the built-in resolver use". Since NGINX 1.1.9, the built-in resolver will honor DNS TTL values. Before then it used a fixed value of 5 minutes.
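Putting both pieces together for the original question - a sketch (the resolver address is whatever DNS server your environment uses; 8.8.8.8 here is illustrative):
location ^~ /freegeoip/ {
    resolver 8.8.8.8;
    proxy_pass http://freegeoip.net/json/$remote_addr;
}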
| NGINX | 17,685,674 | 78 |
Is there a way to redirect HTTPS requests to HTTP by adding a rule in the domain's vhost file?
| Why is something like that useful? At first look I wasn't sure if it could be done. But it presented an interesting question.
You might try putting a redirect statement in your config file and restarting your server. Two possibilities might happen:
The server will issue the redirect - what you seem to want.
The server will first do the https exchange, and THEN issue the redirect, in which case, what's the point?
Will add more if I come up with something more concrete.
UPDATE: (couple of hours later)
You could try this. You need to put this in your nginx.conf file -
server {
listen 443;
server_name _ *;
rewrite ^(.*) http://$host$1 permanent;
}
Sends a permanent redirect to the client. I am assuming you are using port 443 (default) for https.
server {
listen 80;
server_name _ *;
...
}
Add this so that your normal http requests on port 80 are undisturbed.
UPDATE: 18th Dec 2016
- server_name _ should be used instead of server_name _ * in nginx versions > 0.6.25 (thanks to @Luca Steeb)
| NGINX | 3,893,839 | 78 |
I'm confused what purpose Mongrel2 serves/provides that nginx doesn't already do.
(Yes, I've read the manual but I must to be too much of a noob to understand how it's fundamentally different than nginx)
My current web application stack is:
- nginx: webserver
- Lua: programming language
- FastCGI + LuaJIT: to connect nginx to Lua
- Postgres: database
| If you could only name one thing then it would be that Mongrel2 is build around ZeroMQ which means that scaling your web server has never been easier.
If a request comes in, Mongrel2 receives it (nothing unusual here, same as for NginX and any other httpd). Next thing that happens is that Mongrel2 distributes the task of compiling a response to n (ZeroMQ-enabled) backends, waits for them to do the work, receives results, compiles the response and sends it off to the client.
Now, the magic is that n can be any number, that each of the n backends can be written in any of the 20 or so languages ZeroMQ supports, and that it all goes across the network, so each backend can be a dedicated box, possibly in another datacenter.
In other words: with NginX and the rest you have to handle scalability in your logic tier, whereas Mongrel2 lets you start it (from a request/response cycle point of view) right where the request hits your infrastructure, at the httpd, rather than letting complexity penetrate down to your logic tier, which blows complexity upwards by at least one order of magnitude imo.
| NGINX | 6,089,091 | 77 |
I have Nginx setup and displaying the test page properly. If I try to change the root path, I get a 403 Forbidden error, even though all permissions are identical. Additionally, the nginx user exists.
nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
index index.html index.htm;
server {
listen 80;
server_name localhost;
root /var/www/html; #changed from the default /usr/share/nginx/html
}
}
namei -om /usr/share/nginx/html/index.html
f: /usr/share/nginx/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root usr
drwxr-xr-x root root share
drwxr-xr-x root root nginx
drwxr-xr-x root root html
-rw-r--r-- root root index.html
namei -om /var/www/html/index.html
f: /var/www/html/index.html
dr-xr-xr-x root root /
drwxr-xr-x root root var
drwxr-xr-x root root www
drwxr-xr-x root root html
-rw-r--r-- root root index.html
error log
2014/03/23 12:45:08 [error] 5490#0: *13 open()
"/var/www/html/index.html" failed (13: Permission denied), client:
XXX.XX.XXX.XXX, server: localhost, request: "GET /index.html HTTP/1.1", host: "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com"
| I experienced the same problem and it was due to SELinux.
To check if SELinux is running:
# getenforce
To disable SELinux until next reboot:
# setenforce Permissive
Restart Nginx and see if the problem persists. If you would like to permanently alter the settings you can edit /etc/sysconfig/selinux
If SELinux is your problem, you can run the following to allow nginx to serve your www directory (make sure you turn SELinux back on before testing this, i.e. # setenforce Enforcing):
# chcon -Rt httpd_sys_content_t /path/to/www
If you're still having issues take a look at the boolean flags in getsebool -a, in particular you may need to turn on httpd_can_network_connect for network access
# setsebool -P httpd_can_network_connect on
For me it was enough to allow http to serve my www directory.
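Two follow-ups that can help, assuming the audit daemon and the policycoreutils tools are installed: ausearch confirms whether SELinux is actually denying nginx, and semanage/restorecon make the context change persistent across relabels (unlike chcon):
# show recent SELinux denials involving nginx
sudo ausearch -m avc -ts recent | grep nginx
# make the file context persistent, then apply it
sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/www(/.*)?"
sudo restorecon -Rv /path/to/www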
| NGINX | 22,586,166 | 76 |
I want to make a Flask+Nginx+Gunicorn deployment. I have Nginx setup and running and I run gunicorn as described in the docs:
gunicorn app:app
But when I log out of the server, the gunicorn process exits. What is the correct way to make sure it stays running for Nginx to connect to, and restarts if it crashes?
| Use the --daemon option when running gunicorn.
Example:
gunicorn grand56.wsgi:application --name grand56 --workers 3 --user=root --group=root --bind=127.0.0.1:1001 --daemon
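Note that --daemon only detaches the process; it will not bring gunicorn back if it crashes. A process supervisor covers the "restarts if it crashes" part of the question. Here is a minimal systemd unit sketch, where the unit path, user, working directory, and bind address are all assumptions to adapt:
# /etc/systemd/system/gunicorn.service (hypothetical)
[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/apps/myapp
ExecStart=/usr/local/bin/gunicorn app:app --bind 127.0.0.1:8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable and start it with sudo systemctl enable --now gunicorn; systemd will then restart it on failure and after reboots.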
| NGINX | 13,654,688 | 75 |
SO has many articles mentioning this error code:
FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream...
That probably means that this error message is more or less useless.
The message is telling us that the FastCGI handler doesn't like whatever it was sent for some reason. The problem is that sometimes we have no idea what the reason is.
So I'm re-stating the question -- How do we debug this error code?
Consider the situation where we have a very simple site, with just the phpinfo.php file. Additionally, there is a very simple nginx config, as follows:
server {
server_name testsite.local;
root /var/local/mysite/;
location / {
index index.html index.htm index.php;
}
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass fastcgi_backend;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
How can we output or log exactly which fastcgi_params got sent to the script?
How can we see the actual error message? In my case, I'm using php-fpm. It has no info in the log about this error. The logs do not append any rows for this error. Is there a verbose mode for php-fpm?
/var/log/php-fpm/error.log
/var/log/php-fpm/www-error.log
I've tried to set this in the php-fpm.conf file
log_level = notice
and this in the php-fpm.d/www.conf file:
catch_workers_output = yes
| To answer your question:
in php-fpm.d/www.conf file:
set the access.log entry:
access.log = /var/log/$pool.access.log
restart php-fpm service.
try to access your page
cat /var/log/www.access.log, you will see access logs like:
- - 10/Nov/2016:19:02:11 +0000 "GET /app.php" 404
- - 10/Nov/2016:19:02:37 +0000 "GET /app.php" 404
To resolve "Primary script unknown" problem:
if you see "GET /" without a correct php file name, then it's your nginx conf problem.
if you see "GET /app.php" with 404, it means nginx is correctly passing the script file name but php-fpm failed to access the file (the user "php-fpm:php-fpm" doesn't have access to your file, which trapped me for 3 hours; see the check below)
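A quick way to test that permission theory is to try reading the script as the pool user (the user name and file path below are assumptions; check the user/group settings in your pool config):
sudo -u php-fpm cat /var/www/html/app.php > /dev/null && echo "readable"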
Hope my answer helps.
| NGINX | 35,261,922 | 73 |
I was looking at my nginx config file I noticed two this.
server {
listen 80 default_server;
listen [::]:80 default_server;
index index.html;
}
I understand this part listen 80 default_server; it tells nginx to listen on port 80 and set that as the "default_server" but I do not understand the second line.
listen [::]:80 default_server;
It appears I am setting the default server again on port 80 but I do not really understand the [::] part of it at all.
Can someone explain to me what this configuration does?
| It is for the IPv6 configs
from the nginx docs
IPv6 addresses (0.7.36) are specified in square brackets:
listen [::]:8000;
listen [::1];
| NGINX | 34,305,351 | 72 |
It seems like I have not clearly communicated my problem. I need to send a file (using AJAX) and I need to get the upload progress of the file using the Nginx HttpUploadProgressModule. I need a good solution to this problem. I have tried with the jquery.uploadprogress plugin, but I am finding myself having to rewrite much of it to get it to work in all browsers and to send the file using AJAX.
All I need is the code to do this and it needs to work in all major browsers (Chrome, Safari, FireFox, and IE). It would be even better If I could get a solution that will handle multiple file uploads.
I am using the jquery.uploadprogress plugin to get the upload progress of a file from the NginxHttpUploadProgressModule. This is inside an iframe for a facebook application. It works in firefox, but it fails in chrome/safari.
When I open the console I get this.
Uncaught ReferenceError: progressFrame is not defined
jquery.uploadprogress.js:80
Any idea how I would fix that?
I would like to also send the file using AJAX when it is completed. How would I implement that?
EDIT:
I need this soon and it is important so I am going to put a 100 point bounty on this question. The first person to answer it will receive the 100 points.
EDIT 2:
Jake33 helped me solve the first problem. First person to leave a response with how to send the file with ajax too will receive the 100 points.
| Uploading files is actually possible with AJAX these days. Yes, AJAX, not some crappy AJAX wannabes like swf or java.
This example might help you out: https://webblocks.nl/tests/ajax/file-drag-drop.html
(It also includes the drag/drop interface but that's easily ignored.)
Basically what it comes down to is this:
<input id="files" type="file" />
<script>
document.getElementById('files').addEventListener('change', function(e) {
var file = this.files[0];
var xhr = new XMLHttpRequest();
(xhr.upload || xhr).addEventListener('progress', function(e) {
var done = e.position || e.loaded;
var total = e.totalSize || e.total;
console.log('xhr progress: ' + Math.round(done/total*100) + '%');
});
xhr.addEventListener('load', function(e) {
console.log('xhr upload complete', e, this.responseText);
});
xhr.open('post', '/URL-HERE', true);
xhr.send(file);
});
</script>
(demo: http://jsfiddle.net/rudiedirkx/jzxmro8r/)
So basically what it comes down to is this =)
xhr.send(file);
Where file is typeof Blob: http://www.w3.org/TR/FileAPI/
Another (better IMO) way is to use FormData. This allows you to 1) name a file, like in a form and 2) send other stuff (files too), like in a form.
var fd = new FormData;
fd.append('photo1', file);
fd.append('photo2', file2);
fd.append('other_data', 'foo bar');
xhr.send(fd);
FormData makes the server code cleaner and more backward compatible (since the request now has the exact same format as normal forms).
All of it is not experimental, but very modern. Chrome 8+ and Firefox 4+ know what to do, but I don't know about any others.
This is how I handled the request (1 image per request) in PHP:
if ( isset($_FILES['file']) ) {
$filename = basename($_FILES['file']['name']);
$error = true;
// Only upload if on my home win dev machine
if ( isset($_SERVER['WINDIR']) ) {
$path = 'uploads/'.$filename;
$error = !move_uploaded_file($_FILES['file']['tmp_name'], $path);
}
$rsp = array(
'error' => $error, // Used in JS
'filename' => $filename,
'filepath' => '/tests/uploads/' . $filename, // Web accessible
);
echo json_encode($rsp);
exit;
}
| NGINX | 4,856,917 | 72 |
I see people are running setups like Nginx + Gunicorn + Flask.
Can someone explain what is the benefit of having Gunicorn in front of Flask? Why not just run Flask alone? Doesn't it consume more resources having Gunicorn + Flask running? Is Gunicorn able to reboot the Flask instance when it fails to respond?
What's also the purpose of having nginx on top of gunicorn? Isn't gunicorn enough? Again, more resources being spent?
| I think you may be confused: Flask is not a web server, it is a framework, and it needs some sort of web server, such as Gunicorn, Nginx, or Apache, to accept HTTP requests which it will then operate on.
https://serverfault.com/questions/220046/why-is-setting-nginx-as-a-reverse-proxy-a-good-idea
EDIT:
Added link containing information about performance benefits of running Nginx as a proxy.
| NGINX | 20,766,684 | 71 |
I have a Rails app up and running on my server and now I'd like to add another one.
I want Nginx to check what the request is for and split traffic based on domain name
Both sites have their own nginx.conf symlinked into sites-enabled, but I get an error starting nginx: Starting nginx: nginx: [emerg] duplicate listen options for 0.0.0.0:80 in /etc/nginx/sites-enabled/bubbles:6
They are both listening on 80 but for different things.
Site #1
upstream blog_unicorn {
server unix:/tmp/unicorn.blog.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name walrus.com www.walrus.com;
root /home/deployer/apps/blog/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @blog_unicorn;
location @blog_unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://blog_unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
Site two:
upstream bubbles_unicorn {
server unix:/tmp/unicorn.bubbles.sock fail_timeout=0;
}
server {
listen 80 default deferred;
server_name bubbles.com www.bubbles.com;
root /home/deployer/apps/bubbles/current/public;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
try_files $uri/index.html $uri @bubbles_unicorn;
location @bubbles_unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://bubbles_unicorn;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
| The documentation says:
The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair.
It's also obvious, there can be only one default server.
And it is also says:
A listen directive can have several additional parameters specific to socket-related system calls. They can be specified in any listen directive, but only once for the given address:port pair.
So, you should remove default and deferred from one of the listen 80 directives. And same applies to ipv6only=on directive as well.
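Applied to the two configs in the question, keep the parameters on whichever site should be the default and strip them from the other:
# site #1 keeps the extras
listen 80 default deferred;
# site #2 becomes simply
listen 80;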
| NGINX | 13,676,809 | 71 |
I want to redirect requests on two conditions using Nginx.
This doesn't work:
if ($host = 'domain.example' || $host = 'domain2.example'){
rewrite ^/(.*)$ http://www.domain.example/$1 permanent;
}
What is the correct way to do this?
| I had this same problem before. Because Nginx can't do complex conditions or nested if statements, you need to evaluate them across two separate expressions:
set a variable to a binary value, then enable the rewrite if either condition set it, using two different if statements:
set $my_var 0;
if ($host = 'domain.example') {
set $my_var 1;
}
if ($host = 'domain2.example') {
set $my_var 1;
}
if ($my_var = 1) {
rewrite ^/(.*)$ http://www.domain.example/$1 permanent;
}
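Alternatively, since both conditions test $host, a single if with a regular expression avoids the flag variable entirely:
if ($host ~* ^(domain|domain2)\.example$) {
    rewrite ^/(.*)$ http://www.domain.example/$1 permanent;
}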
| NGINX | 4,833,238 | 71 |
Is there a command that will list all vhosts or servers running under nginx on CentOS? I would like to pipe the results to a text file for reporting purposes.
I'm looking for something similar to this command that I use for Apache:
apachectl -S 2>&1 | grep 'port 80'
| starting from version 1.9.2 you can do:
nginx -T
shows the complete nginx configuration
nginx -T | grep "server_name " # include the whitespace to exclude irrelevant results
shows all the server names
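To get the report into a text file, as the question asks, redirect stdout (nginx -T writes the configuration dump to stdout and warnings to stderr):
nginx -T 2>/dev/null | grep "server_name " > vhosts.txt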
| NGINX | 32,400,933 | 70 |
I tried to deploy my rails app on nginx and ubuntu via capistrano like the tutorial on the page https://gorails.com/deploy/ubuntu/14.04.
but at the end i get an error message:
Incomplete response received from application
in my browser.
This is probably an error from Passenger, but how can I figure out what to do?
| Your production rails_env is missing required setup, most likely secret_key_base.
Open /etc/nginx/sites-available/default and change the rails_env to development:
rails_env production;
to
rails_env development;
If the app is loading it's not a passenger issue.
Production Solution:
Enter your app root
run: rake secret
copy the output
go to /yourapp/config/secrets.yml
set the production secret_key_base (see the sketch after these steps)
Restart the passenger app:
touch /yourapp/tmp/restart.txt
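Steps 4 and 5 amount to editing config/secrets.yml so the production entry carries the generated value (the key shown is a placeholder, not a real secret):
production:
  secret_key_base: 0a1b2c3d4e5f... # paste the output of `rake secret` here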
| NGINX | 29,241,053 | 70 |
I am trying to access the Kibana application deployed behind nginx, but I am getting the error below.
URL :- http://127.0.0.1/kibana-3.1.2
2015/02/01 23:05:05 [alert] 3919#0: *766 768 worker_connections are not enough while connecting to upstream, client: 127.0.0.1, server: , request: "GET /kibana-3.1.2 HTTP/1.0", upstream: "http://127.0.0.1:80/kibana-3.1.2", host: "127.0.0.1"
Kibana is deployed at /var/www/kibana-3.1.2
I have tried to increase the worker_connections, but still no luck; in that case I get the following:
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
2015/02/01 23:02:27 [alert] 3802#0: accept4() failed (24: Too many open files)
nginx.conf :-
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
And below in the location directive.
location /kibana-3.1.2{
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
proxy_pass http://127.0.0.1;
add_header Access-Control-Allow-Origin *;
add_header Access-Control-Allow-Headers *;
}
| Old question, but i had the same issue and the accepted answer didnt work for me.
I had to increase the number of worker_connections, as stated here.
/etc/nginx/nginx.conf
events {
worker_connections 20000;
}
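The accept4() failed (24: Too many open files) lines from the question are a separate limit: every connection consumes a file descriptor, so the workers' descriptor limit must be raised too. nginx can do this itself with worker_rlimit_nofile in the main context (the value is an assumption; size it to at least worker_connections):
worker_rlimit_nofile 30000;

events {
    worker_connections 20000;
}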
| NGINX | 28,265,717 | 70 |
How do I set a location condition in Nginx that responds to anything that isn't equal to the listed locations?
I tried:
location !~/(dir1|file2\.php) {
rewrite ^/(.*) http://example.com/$1 permanent;
}
But it doesn't trigger the redirect. It simply handles the requested URI using the rules in the rest of the server configuration.
| According to nginx documentation
there is no syntax for NOT matching a regular expression. Instead, match the target regular expression and assign an empty block, then use location / to match anything else
So you could define something like
location ~ (dir1|file2\.php) {
# empty
}
location / {
rewrite ^/(.*) http://example.com/$1 permanent;
}
| NGINX | 16,302,897 | 70 |
When is it necessary to restart nginx and reload will not suffice?
Does it make a difference if an extension like passenger is used?
Should the service be restarted if it consumes too much memory. Any other reasons for restarting Nginx, particularly after a configuration change either in an extension or a Nginx core config?
After making a configuration change, one can either restart or reload nginx, via the binary itself or the init.d script "/etc/init.d/nginx -h" on Ubuntu. Which method should be preferred?
| Reloading nginx is safer than restarting because, before the old process is terminated, the new configuration file is parsed and the whole operation is aborted if there are any problems with it.
On the other hand, when you restart nginx you might encounter a situation in which nginx stops and won't start back up again because of a syntax error.
Reloading terminates the old process, so any memory leaks should be cleared anyway.
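A common safeguard either way is to validate the configuration explicitly first, so that a reload (or restart) is only attempted when the file parses cleanly:
nginx -t && nginx -s reload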
| NGINX | 13,525,465 | 69 |
What to use for a medium to large python WSGI application, Apache + mod_wsgi or Nginx + mod_wsgi?
Which combination will need more memory and CPU time?
Which one is faster?
Which is known for being more stable than the other?
I am also thinking of using CherryPy's WSGI server, but I hear it's not very suitable for a very high-load application. What do you know about this?
Note: I didn't use any Python Web Framework, I just wrote the whole thing from scratch.
Note': Other suggestions are also welcome.
| For nginx/mod_wsgi, ensure you read:
http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html
Because nginx is an event-driven system underneath, it has behavioural characteristics which are detrimental to blocking applications, as is the case with WSGI-based applications. The worst-case scenario is that with a multiprocess nginx configuration, you can see user requests be blocked even though some nginx worker processes may be idle. Apache/mod_wsgi doesn't have this issue, as an Apache process will only accept a request when it has the resources to actually handle it. Apache/mod_wsgi will thus give more predictable and reliable behaviour.
| NGINX | 195,534 | 69 |
I'm trying to use a dockerized version of nginx as a proxy server for my node (ExpressJS) application. Without any configuration to nginx and publishing port 80 for the container, I am able to see the default nginx landing page. So I know that much is working.
Now I can mount my sites-enabled directory that contains the configuration for proxy_pass localhost:3000. I have my node application running locally (not in any Docker container) and I can access it via port 3000 (i.e. localhost:3000). However, I would assume that with nginx container running, mapped to port 80, and proxying my localhost:3000, that I would be able to see my very simple (hello world) application. Instead I receive a 502.
Do I need to pass something into docker? Is this likely a nginx configuration error? Here is my nginx configuration:
server {
listen 0.0.0.0:80;
server_name localhost;
location / {
proxy_pass http://localhost:3000;
}
}
I have tried using this question but it did not seem to help. That is unless I'm doing something completely wrong.
| If you're using Docker for Mac 18.03 or newer, it automatically creates a special DNS entry, host.docker.internal, that dynamically binds to the host's inet IP. You can then use that DNS name to proxy services running on the host machine from inside a container, as a stand-in for localhost.
i.e. an nginx config file:
server {
listen 0.0.0.0:80;
server_name localhost;
location / {
proxy_pass http://host.docker.internal:3000;
}
}
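On Linux hosts, host.docker.internal is not created automatically; with Docker 20.10 or newer you can map it to the host gateway yourself when starting the container (the image name here is illustrative):
docker run -p 80:80 --add-host=host.docker.internal:host-gateway nginx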
| NGINX | 27,810,076 | 68 |
Please help me understand what worker_processes and worker_connections are in Nginx and what is the relation between them. I have looked under Nginx directives it says:
worker_processes
A worker process is a single-threaded process.
If Nginx is doing CPU-intensive work such as SSL or gzipping and you have 2 or more CPUs/cores, then you may set worker_processes to be equal to the number of CPUs or cores.
If you are serving a lot of static files and the total size of the files is bigger than the available memory, then you may increase worker_processes to fully utilize disk bandwidth.
worker_connections
The worker_connections and worker_processes from the main section allows you to calculate max clients you can handle:
max clients = worker_processes * worker_connections
So I understand that worker_processes is single threaded and its value is helpful in CPU-intensive work, but I am unable to understand "allows you to handle max clients you can handle".
If anyone can give an example as given in worker_processes it would be helpful for me to understand.
| worker_connections is the number of simultaneous connections; so they are simply stating how to calculate, for example:
you are only running 1 process with 512 connections, you will only be able to serve 512 clients.
If 2 processes with 512 connections each, you will be able to handle 2x512=1024 clients.
The number of connections is limited by the maximum number of open files (RLIMIT_NOFILE) on your system
nginx
has a better, updated description of worker connections.
FYI, the wiki section is considered obsolete (don't ask); now only the main nginx.org/en/docs are preferred...
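A typical configuration reflecting this math (worker_processes auto, available in modern nginx versions, sizes itself to the number of CPU cores; the connection count is an assumption to tune per workload and per the open-files limit):
worker_processes auto;

events {
    worker_connections 1024;
}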
| NGINX | 23,386,986 | 68 |
Is it possible to serve a custom "Bad Gateway" error page in Nginx?
Similar to having custom 404 pages.
| There are three pieces that must be in place in order for your custom error page to display instead of the generic "Bad Gateway" error.
You must create an html file named something like "500.html" and place it in the root. In the case of Rails running behind Nginx, this means putting it at public/500.html.
You must have a line in your config file that points at least the 502 errors to that 500.html page like this:
error_page 502 /500.html;
You must have a location block for /500.html in your config file. If your root is already defined, this block can be empty. But the block must exist nonetheless.
location /500.html {
}
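Putting the three pieces together, a minimal sketch for the Rails case (the root path is an assumption; adding internal stops clients from requesting /500.html directly):
server {
    listen 80;
    root /var/apps/myapp/current/public;
    error_page 502 503 504 /500.html;
    location = /500.html {
        internal;
    }
}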
| NGINX | 7,796,237 | 68 |
I have the following scenario: I have an env variable $SOME_IP defined and want to use it in a nginx block. Referring to the nginx documentation I use the env directive in the nginx.conf file like the following:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
env SOME_IP;
Now I want to use the variable for a proxy_pass. I tried it like the following:
location / {
proxy_pass http://$SOME_IP:8000;
}
But I end up with this error message: nginx: [emerg] unknown "some_ip" variable
| With NGINX Docker image
Apply envsubst to a template of the configuration file at container start. envsubst is included in the official NGINX docker images.
Environment variable is referenced in a form $VARIABLE or ${VARIABLE}.
nginx.conf.template:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server {
listen 80;
location / {
access_log off;
return 200 '${MESSAGE}';
add_header Content-Type text/plain;
}
}
}
Dockerfile:
FROM nginx:1.17.8-alpine
COPY ./nginx.conf.template /nginx.conf.template
CMD ["/bin/sh" , "-c" , "envsubst < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Build and run docker:
docker build -t foo .
docker run --rm -it --name foo -p 8080:80 -e MESSAGE="Hellou World" foo
NOTE: if the config template contains dollar signs ($) that should not be substituted, list all the variables to be replaced as a parameter to envsubst so that only those are touched. E.g.:
CMD ["/bin/sh" , "-c" , "envsubst '$USER_NAME $PASSWORD $KEY' < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Nginx Docker documentation for reference. Look for Using environment variables in nginx configuration.
Using environment variables in nginx configuration
Out-of-the-box, nginx doesn’t support environment variables inside
most configuration blocks. But envsubst may be used as a workaround if
you need to generate your nginx configuration dynamically before nginx
starts.
Here is an example using docker-compose.yml:
web:
image: nginx
volumes:
- ./mysite.template:/etc/nginx/conf.d/mysite.template
ports:
- "8080:80"
environment:
- NGINX_HOST=foobar.com
- NGINX_PORT=80
command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like
this:
listen ${NGINX_PORT};
| NGINX | 21,866,477 | 67 |
I want to have my API controller use SSL, so I added another listen directive to my nginx.conf
upstream unicorn {
server unix:/tmp/unicorn.foo.sock fail_timeout=0;
}
server {
listen 80 default deferred;
listen 443 ssl default;
ssl_certificate /etc/ssl/certs/foo.crt;
ssl_certificate_key /etc/ssl/private/foo.key;
server_name foo;
root /var/apps/foo/current/public;
try_files $uri/system/maintenance.html $uri/index.html $uri @unicorn;
location @unicorn {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://unicorn;
}
error_page 502 503 /maintenance.html;
error_page 500 504 /500.html;
keepalive_timeout 5;
}
which passes the nginx conftest without any problems. I also added a force_ssl directive to my ApiController
class ApiController < ApplicationController
force_ssl if Rails.env.production?
def auth
user = User.authenticate(params[:username], params[:password])
respond_to do |format|
format.json do
if user
user.generate_api_key! unless user.api_key.present?
render json: { key: user.api_key }
else
render json: { error: 401 }, status: 401
end
end
end
end
def check
user = User.find_by_api_key(params[:api_key])
respond_to do |format|
format.json do
if user
render json: { status: 'ok' }
else
render json: { status: 'failure' }, status: 401
end
end
end
end
end
which worked just fine when I wasn't using SSL, but now when I try to curl -LI http://foo/api/auth.json, I get properly redirected to https, but then I keep on getting redirected to http://foo/api/auth, ending in an infinite redirect loop.
My routes simply have
get "api/auth"
get "api/check"
I'm using Rails 3.2.1 on Ruby 1.9.2 with nginx 0.7.65
| You're not forwarding any information about whether this request was an HTTPS-terminated request or not. Normally, in a server, the "ssl on;" directive will set these headers, but you're using a combined block.
Rack (and force_ssl) determines SSL by:
If the request came in on port 443 (this is likely not being passed back to Unicorn from nginx)
If ENV['HTTPS'] == "on"
If the X-Forwarded-Proto header == "HTTPS"
See the force_ssl source for the full story.
Since you're using a combined block, you want to use the third form. Try:
proxy_set_header X-Forwarded-Proto $scheme;
in your server or location block per the nginx documentation.
This will set the header to "http" when you come in on a port 80 request, and set it to "https" when you come in on a 443 request.
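In the combined block from the question, that means adding the header inside location @unicorn next to the existing proxy settings:
location @unicorn {
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
}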
| NGINX | 9,448,168 | 67 |
I'm new to AWS and trying to understand which version of NGINX I should be installing on my instance. I've found multiple options:
Via EPEL as the blog entry
Amazon's own (?) version as this answer
The 2016 NGINX official tutorial
On my development environment (a CentOS VM) I used sudo yum install nginx. Having tried the EPEL route, I don't get the same setup; in particular, sites-enabled/sites-available are not created as part of the setup. I want to use nginxconfig.io, which requires those. Which version of NGINX should I use for that?
| Alternative way to install that could be easier (has a fairly recent version of Nginx):
$ sudo amazon-linux-extras list | grep nginx
38 nginx1=latest disabled [ =stable ]
$ sudo amazon-linux-extras enable nginx1
38 nginx1=latest enabled [ =stable ]
Now you can install:
$ sudo yum clean metadata
$ sudo yum -y install nginx
$ nginx -v
nginx version: nginx/1.16.1
| NGINX | 57,784,287 | 66 |
How do I set index.html for the domain name, e.g. so that https://www.example.com/ leads the user to index.html in the root directory?
I've tried different things like:
server {
# some configs
location = / {
index index.html;
fastcgi_index index.html;
}
or
location / {
index index.html;
fastcgi_index index.html;
}
}
Nothing helped me.
There are some other configs with location keyword, though I'd commented them either.
Other "location" configs in the server { clause:
location ~ .*(css|htc|js|bmp|jp?g|gif|ico|cur|png|swf|htm?|html)$ {
access_log off;
root $www_root;
}
location ~ \.php$
{
include /etc/nginx/fastcgi_params;
index index.html;
fastcgi_index index.html;
fastcgi_param SCRIPT_FILENAME $www_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_pass 127.0.0.1:9000;
# This directive makes FastCGI server responses with status codes of 400 and above
# be redirected to nginx for handling via the error_page directive
fastcgi_intercept_errors on;
break;
}
location ~ /\.ht {
deny all;
}
All of them were commented and uncommented in turn, but nothing helped.
P.S. The edits were made in the /etc/nginx/sites-enabled/domainname.com file.
| in your location block you can do:
location / {
try_files $uri $uri/index.html;
}
which will tell nginx to look for a file with the exact name given first, and if no such file is found it will try uri/index.html. So if a request for https://www.example.com/ comes in, it would look for an exact file match first, and not finding that, would then check for index.html
| NGINX | 11,954,255 | 66 |
I want to configure both Apache and nginx to run together on Ubuntu because I want to develop on both nginx and Apache. I have read that I have to edit the configuration on Apache or nginx to make one of them run on another port rather than 80.
Which files should I edit in Nginx to make it run through another port?
| go to /etc/nginx/sites-available, then modify the vhost file that should listen on a different port (if you didn't change anything here, you will find a default file; open it to change it)
in the file, change listen 80; to the port you want to listen on
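For example, to move nginx to port 8080 so Apache can keep port 80 (the port number is arbitrary):
server {
    listen 8080;
    ...
}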
don't forget to reload the service: service nginx reload
| NGINX | 23,024,473 | 65 |
I just installed nginx and PHP FastCGI about an hour ago, and after reading examples of a quick starting configuration and the nginx documentation etc., I just can't get it to work.
No matter what I change or try, I always only get the "Welcome to Nginx!" screen on "localhost/..." and I can't even call a simple index.html
My config:
(the stuff in the comments is what I tried out)
// default nginx stuff (unchanged)
server {
#listen 80 default_server;
#listen 80 default;
listen 80;
#server_name localhost;
#server_name _;
#access_log /var/log/nginx/board.access_log;
#error_log /var/log/nginx/board.error_log;
#root /var/www/board;
#root /var/www/board/public/;
root /var/www/board/public;
#index index.html;
index index.html index.htm index.php;
}
If I understand it right, this should be the easiest setup, right? Just define listen 80; and index index.html; but I just can't get it to work.
The file /var/www/board/public/index.html exists and has content
Before I waste 2 more hours trying things out, can one of you give it a quick look and tell me what I'm doing wrong? Thanks.
| Fundamentally, you hadn't declared a location block, which is what nginx uses to bind URLs to resources.
server {
listen 80;
server_name localhost;
access_log logs/localhost.access.log main;
location / {
root /var/www/board/public;
index index.html index.htm index.php;
}
}
| NGINX | 11,061,788 | 65 |
I've developed a URL shortening web application.
It consists of two separate docker containers: one containing the backend REST api and another containing the frontend static website.
These two containers are linked to an nginx container.
The configuration for this nginx container is below:
worker_processes 1;
events { worker_connections 1024; }
http {
upstream api {
server short-url:8080;
}
upstream frontend {
server short-url-frontend:8081;
}
gzip on;
gzip_vary on;
gzip_min_length 860;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml application/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
server {
listen 80;
root /user/share/nginx/html;
location /urlshortener/v1 {
proxy_pass http://api/urlshortener/v1;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $server_name;
}
location ~ ^/([A-Za-z0-9]+) {
rewrite ^/([A-Za-z0-9]+) /$1
proxy_pass http://api/urlshortener/v1;
}
location / {
proxy_pass http://frontend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-for $remote_addr;
}
}
}
If a url ends with /urlshortener/v1, I'm proxying it to the backend.
If a url starts with /, I'm proxying to the frontend.
Shortened urls e.g. /3xTy or /a0q need to be proxied to the backend so that the user can be navigated to the original url. In order to do this, I've defined a location with a regular expression.
location ~ ^/([A-Za-z0-9]+) {
rewrite ^/([A-Za-z0-9]+) /$1
proxy_pass http://api/urlshortener/v1;
}
This block of code gives me the following error:
2018/11/17 16:47:03 [emerg] 1#1: "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/nginx.conf:36
I've gone through several examples and reviewed a number of answers and I believe that the configuration I have should work. Can someone please explain why I'm getting this error?
| If you use a URI with a proxy_pass statement within a regular expression location, you need to build the entire URI using one or more variables. See this document for details.
So the alternatives are to (1), capture the URI from the location expression and add it to the proxy_pass statement. For example:
location ~ ^/([A-Za-z0-9]+) {
proxy_pass http://api/urlshortener/v1/$1;
}
Or (2), use proxy_pass without a URI part, and construct the desired URI using a rewrite...break. For example:
location ~ ^/([A-Za-z0-9]+) {
rewrite ^/([A-Za-z0-9]+) /urlshortener/v1/$1 break;
proxy_pass http://api;
}
See this document for details.
| NGINX | 53,353,572 | 64 |
I'm using nginx on OS X 10.8. Freshly installed nginx but can't find a way to restart nginx except kill nginx_pid say kill 64116. Wondering if there are better ways to restart nginx.
Found some methods on Google and SO but didn't work:
nginx -s restart
sudo fuser -k 80/tcp ; sudo /etc/init.d/nginx restart
The error message for nginx -s restart is
nginx: [error] open() "/usr/local/var/run/nginx.pid" failed (2: No such file or directory)
Sometimes also get this error msg:
nginx: invalid option: "-s restart"
| Try running sudo nginx to start the server before sending it signals; nginx -s relies on the pid file written by a running master process.
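Once a master process is running (and the pid file exists), the -s signals work as expected. Note that restart is not a valid signal, which explains the nginx: invalid option message; the valid ones are stop, quit, reopen, and reload:
sudo nginx            # start the master process (writes the pid file)
sudo nginx -s reload  # re-read the configuration
sudo nginx -s quit    # graceful shutdown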
| NGINX | 14,176,477 | 64 |
I understand the thread-driven model that Apache uses: every connection opens up a thread, and when the response is sent, the thread is closed, releasing its resources for other threads.
But I don't get the event-driven design that Nginx uses. I've read some basics about event-driven design, but I don't understand how it is used by nginx to handle web requests.
Where can I read up on and understand how Nginx handles connections in an event-driven way, so I get why it's better, rather than just accepting that event-based design is better than thread-driven design?
| Nginx uses the Reactor pattern. Basically, it's single-threaded (but can fork several processes to utilize multiple cores). The main event loop waits for the OS to signal a readiness event - e.g. that data is available to read from a socket, at which point it is read into a buffer and processed. The single thread can very efficiently serve tens of thousands of simultaneous connections (the thread-per-connection model would fail at this because of the huge context-switching overhead, as well as the large memory consumption, as each thread needs its own stack).
| NGINX | 3,436,808 | 64 |
I have a Dockerfile and custom Nginx configuration file (in the same directory with Dockerfile) as follows:
Dockerfile:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
nginx.conf file:
upstream myapp1 {
least_conn;
server http://example.com:81;
server http://example.com:82;
server http://example.com:83;
}
server {
listen 80;
location / {
proxy_pass http://myapp1;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
I run these two commands:
docker --tls build -t nginx-image .
docker --tls run -d -p 80:80 --name nginx nginx-image
Then I checked out all running containers but it didn't show up. When I searched Nginx container's log, I found this error message:
[emerg] 1#1: unknown directive "upstream" in
/etc/nginx/nginx.conf:1 Nginx: [emerg] unknown directive
"upstream" in /etc/nginx/nginx.conf:
What am I missing?
| As mentioned in the NGiNX documentation, upstream is supposed to be defined in an http context.
As mentioned in nginx unkown directive “upstream”:
When that file is included normally by nginx.conf, it is included already inside the http context:
http {
include /etc/nginx/sites-enabled/*;
}
You either need to use -c /etc/nginx/nginx.conf or make a small wrapper like the above block and nginx -c it.
In case of Docker, you can see different options with abevoelker/docker-nginx:
docker run -v /tmp/foo:/foo abevoelker/nginx nginx -c /foo/nginx.conf
For a default nginx.conf, check your CMD:
CMD ["nginx", "-c", "/data/conf/nginx.conf"]
| NGINX | 30,151,436 | 63 |
I installed it by running sudo apt-get install phpmyadmin and then running
sudo ln -s /usr/share/phpmyadmin/ /usr/share/nginx/html and sudo service nginx restart
but it's not working.
Note: I didn't select any of the apache2 or lighttpd options when installing.
| Option 1:
This will install the latest version of PhpMyAdmin from a shell script I've written. You are welcome to check it out on Github.
Run the following command from your code/projects directory:
curl -sS https://raw.githubusercontent.com/grrnikos/pma/master/pma.sh | bash
Option 2:
This will install PhpMyAdmin (not the latest version) from Ubuntu's repositories. Assuming that your projects live in /home/vagrant/Code :
sudo apt-get install phpmyadmin Do not select apache2 nor lighttpd when prompted. Just hit tab and enter.
sudo ln -s /usr/share/phpmyadmin/ /home/vagrant/code/phpmyadmin
cd ~/Code && serve phpmyadmin.test /home/vagrant/code/phpmyadmin
Note: If you encounter issues creating the symbolic link on step 2, try the first option or see Lyndon Watkins' answer below.
Final steps:
Open the /etc/hosts file on your main machine and add:
127.0.0.1 phpmyadmin.test
Go to http://phpmyadmin.test:8000
| NGINX | 23,788,096 | 63 |
I want my nginx to display all URLs clean.
http://www.mydomain.com/indexhtml.html as http://www.mydomain.com/indexhtml
http://www.mydomain.com/indexphp.php as http://www.mydomain.com/indexphp
With some research I've made the first case work. It's done by the following configuration:
location / {
root html;
index index.html index.htm index.php;
try_files $uri.html $uri/ =404;
}
It works for indexhtml.html displaying as indexhtml, but nothing happens with .php. If I change $uri.html to $uri.php, it works for neither .html nor .php. I've tried to put something similar in the php location, but without any success.
Any advice?
| From what I've researched, if you append your /etc/nginx/conf.d/domain.tld.conf file to include:
location / {
try_files $uri $uri.html $uri/ @extensionless-php;
index index.html index.htm index.php;
}
location ~ \.php$ {
try_files $uri =404;
}
location @extensionless-php {
rewrite ^(.*)$ $1.php last;
}
Then restart nginx and give it a go. Hopefully this will help you! More information can be found (where I found it) here @ tweaktalk.net
| NGINX | 21,911,297 | 63 |
At the end of last week I noticed a problem on one of my medium AWS instances where Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything that I can find on the internet, including several entries here on Stack Overflow, but nothing works.
I've tried modifying the PHP settings, PHP-FPM settings and Nginx settings. You can see a question I raised on the NginX forums on Friday (http://forum.nginx.org/read.php?9,237692) though that has received no response, so I am hoping that I might be able to find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end
http://pastebin.com/WQX4AqEU
Once that's set-up you just need to create the sample PHP file which I am testing with which is
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP I'll create several config files so that they can be easily disabled
Update the PHP FPM Config to include external config files
sudo echo '
include=/usr/local/php/php-fpm.d/*.conf
' >> /usr/local/php/etc/php-fpm.conf
Create a new PHP-FPM config to override the request timeout
sudo echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log ' >
/usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks
sudo echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' > /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP Errors
sudo echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' > /usr/local/php/conf.d/errors.ini
sudo echo '[PHP]
post_max_size=32M
upload_max_filesize=32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' > /usr/local/php/conf.d/filesize.ini
As you can see, this is increasing the socket timeout to 3 minutes and will help log errors.
Finally, I'll edit some of the NginX settings to increase the timeout's that side
First I edit the file /etc/nginx/nginx.conf and add this to the http directive
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (See the pastebin entry) and add the following settings into the server directive
client_max_body_size 200;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally I add the following into the location ~ .php$ section of the server dir
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, start both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry within the NginX example.error.log.
So, where am I going wrong? This just works on apache when I set PHP's max execution time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think that too much has been increased, as it should just need PHP's max_execution_time and default_socket_timeout changed, as well as NginX's fastcgi_read_timeout within just the server->location directive.
Update 1
Having performed some further test to show that the problem is not that the client is dying I have modified the test file to be
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page then I can see the content of the file be set to the first string. 60 seconds later the error appears in the NginX log. 10 seconds later the contents of the file changes to the 2nd string, proving that PHP is completing the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from a HTTP 499 to a HTTP 200 though nothing is still returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) onto the box straight (using apt) and then increasing the execution time the problem does appear to also happen on Apache as well. The symptoms are the same as NginX now, a HTTP200 response but the actual client connection times out before hand.
I've also started to notice, in the NginX logs, that if I test using Firefox, it makes a double request (like this PHP script executes twice when longer than 60 seconds). Though that does appear to be the client requesting upon the script failing
| The cause of the problem is the Elastic Load Balancers on AWS. They, by default, timeout after 60 seconds of inactivity which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
| NGINX | 15,613,452 | 63 |
I am getting this error from Nginx, but can't seem to figure it out! I am using CodeIgniter and am using the database for sessions. So I'm wondering how the header can ever be too big. Is there any way to check what the header is, or potentially see what I can do to fix this error?
Let me know if you need me to put up any conf files or whatever and I'll update as you request them
2012/12/15 11:51:39 [error] 2007#0: *5778 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxxx.com", referrer: "http://jdobres.xxxx.com/"
UPDATE
I added the following into conf:
proxy_buffer_size 512k;
proxy_buffers 4 512k;
proxy_busy_buffers_size 512k;
And now I still get the following:
2012/12/16 12:40:27 [error] 31235#0: *929 upstream sent too big header while reading response header from upstream, client: 24.63.77.149, server: jdobres.xxxx.com, request: "POST /main/login HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "jdobres.xxxx.com", referrer: "http://jdobres.xxxx.com/"
| Add this to your http {} of the nginx.conf file normally located at /etc/nginx/nginx.conf:
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
Then add this to your PHP location block; it will be located in your vhost file (look for the block that begins with location ~ \.php$):
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
| NGINX | 13,894,386 | 63 |
I need to enable gzip compression on my nginx server. As I have observed from the Firefox Firebug NET tools, the HTML files are gzip compressed, but not the JavaScript and CSS files.
I have already checked mime.types and the nginx configuration file /etc/nginx/nginx.conf and found no issue.
I am still not able to see gzip compression on the CSS and JavaScript files.
My NGINX.conf entries are as below
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
| This is a working config that I currently use in production.
http://pastie.org/10870547
gzip on;
gzip_disable "msie6";
gzip_comp_level 6;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_proxied any;
gzip_types
text/plain
text/css
text/js
text/xml
text/javascript
application/javascript
application/json
application/xml
application/rss+xml
image/svg+xml;
This config was tested via tools.pingdom.com.
| NGINX | 12,640,014 | 63 |
I've seen example NGINX configurations with the "deferred" option added to the listen directive
server {
listen 80 default deferred;
...
}
I can't work out what it does (and whether or not I should use it) and the documentation doesn't make too much sense to me
deferred -- indicates to use that postponed accept(2) on Linux with
the aid of option TCP_DEFER_ACCEPT
Can anyone explain what this option is for?
| TCP_DEFER_ACCEPT can help boost performance by cutting out preliminary back-and-forth between the server and client: the kernel completes the TCP handshake but does not wake the server's accept() until actual request data has arrived, so nginx is never bothered by connections that have not yet sent anything.
You can read more about it HERE.
| NGINX | 8,449,058 | 63 |
I am trying to proxy a request to different targets depending on an environment variable. My approach was to put the target url into the custom variable $target and give this to proxy_pass.
But using a variable with proxy_pass doesn't seem to work. This simple config leads to a "502 Bad Gateway" response from nginx.
server {
listen 8080;
server_name myhost.example.com;
access_log /var/log/nginx/myhost.access.log;
location /proxy {
set $target http://proxytarget.example.com;
proxy_pass $target;
}
}
The same config without the variable works:
server {
listen 8080;
server_name myhost.example.com;
access_log /var/log/nginx/myhost.access.log;
location /proxy {
proxy_pass http://proxytarget.example.com;
}
}
Is it really not possible to use proxy_pass this way or am I just doing something wrong?
| I've recently stumbled upon this need myself and have found that, in order to use variables in a proxy_pass destination, you need to set a resolver; otherwise your error.log will most probably contain something like no resolver defined to resolve ...
The solution in my case was to setup the following using a local DNS for DNS resolution:
location ~ /proxy/(.*) {
resolver 127.0.0.1 [::1];
proxy_pass http://$1;
}
In your case this should work:
location /proxy {
resolver 127.0.0.1 [::1];
set $target http://proxytarget.example.com;
proxy_pass $target;
}
For resolver 127.0.0.1 to work, you need to install bind9 locally.
For Debian/Ubuntu:
sudo apt-get install bind9
More information on nginx and dynamic proxy_passing here: http://www.nginx-discovery.com/2011/05/day-51-proxypass-and-resolver.html
Edit:
Replaced the previous public DNS with a local one for security issues.
| NGINX | 5,743,609 | 63 |
I have tried following some tutorials and documentation on dockerizing my web server, but I am having trouble getting the service to run via the docker run command.
This is my Dockerfile:
FROM ubuntu:trusty
#Update and install stuff
RUN apt-get update
RUN apt-get install -y python-software-properties aptitude screen htop nano nmap nginx
#Add files
ADD src/main/resources/ /usr/share/nginx/html
EXPOSE 80
CMD service nginx start
I create my image:
docker build -t myImage .
And when I run it:
docker run -p 81:80 myImage
it seems to just stop:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
90e54a254efa pms-gui:latest /bin/sh -c service n 3 seconds ago Exit 0 prickly_bohr
I would expect this to be running with port 81->80 but it is not. Running
docker start 90e
does not seem to do anything.
I also tried entering it directly
docker run -t -i -p 81:80 myImage /bin/bash
and from here I can start the service
service nginx start
and from another tab I can see it is working as intended (also in my browser):
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408237a5e10b myImage:latest /bin/bash 12 seconds ago Up 11 seconds 0.0.0.0:81->80/tcp mad_turing
So I assume it is something I am doing wrong with my Dockerfile? Could anyone help me out with this, I am quite new to Docker. Thank you!
SOLUTION: Based on the answer from Ivant I found another way to start nginx in the foreground. My Dockerfile CMD now looks like:
CMD /usr/sbin/nginx -g "daemon off;"
| As of now, the official nginx image uses this to run nginx (see the Dockerfile):
CMD ["nginx", "-g", "daemon off;"]
In my case, this was enough to get it to start properly. There are tutorials online suggesting more awkward ways of accomplishing this but the above seems quite clean.
| NGINX | 24,241,292 | 62 |
Well, I am currently trying to get my Django application served using nginx and uwsgi. I am using a virtual environment in which uwsgi is installed. However, I am getting a 502 Bad Gateway error when attempting to access the page.
The Error I am experiencing.
2014/02/27 14:20:48 [crit] 29947#0: *20 connect() to unix:///tmp/uwsgi.sock failed (13: Permission denied) while connecting to upstream, client: 144.136.65.176, server: domainname.com.au, request: "GET /favicon.ico HTTP/1.1", upstream: "uwsgi://unix:///tmp/uwsgi.sock:", host: "www.domainname.com.au"
This is my nginx.conf
# mysite_nginx.conf
# the upstream component nginx needs to connect to
upstream django {
server unix:///tmp/uwsgi.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# configuration of the server
server {
# the port your site will be served on
listen 80;
# the domain name it will serve for
server_name .domainname.com.au; # substitute your machine's IP address or FQDN
charset utf-8;
# max upload size
client_max_body_size 75M; # adjust to taste
# Django media
location /media {
alias /home/deepc/media; # your Django project's media files - amend as required
}
location /static {
alias /home/deepc/static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass django;
include /home/deepc/.virtualenvs/dcwebproj/dcweb/uwsgi_params; # the uwsgi_params file you installed
}
}
Here is my uwsgi.ini file
[uwsgi]
socket=/tmp/uwsgi.sock
chmod-socket=644
uid = www-data
gid = www-data
chdir=/home/deepc/.virtualenvs/dcwebproj/dcweb
module=dcweb.wsgi:application
pidfile=/home/deepc/.virtualenvs/dcwebproj/dcweb.pid
vacuum=true
From what I have read on Google, it's a permissions problem with the www-data group and the /tmp/ directory. However, I am new to this and have tried to change the permission level of the folder to no avail. Could someone point me in the right direction? Is this a permissions problem?
Also, is it OK practice to put the sock file in the /tmp directory?
Thanks
| I think you just need to change your socket file's permissions to 666 (664 is OK with www-data), or remove it and run the uwsgi server again.
In my uwsgi.ini:
chmod-socket = 664
uid = www-data
gid = www-data
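If the socket already exists with the wrong permissions, a one-off fix (using the socket path from the question) is:

sudo chmod 664 /tmp/uwsgi.sock

Keep in mind that uwsgi recreates the socket on every start, so a manual chmod only lasts until the next restart; the chmod-socket/uid/gid settings above are what make the fix stick.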
| NGINX | 22,071,681 | 62 |
I have installed Nginx on our Red Hat machine using RPM. Now we want to add the nginx-rtmp module, but in order to add a new module, per the documentation, I need to build it by downloading the tarball. Does that mean I have to remove the RPM and install it as the documentation describes?
Ref: https://github.com/arut/nginx-rtmp-module/wiki/Getting-started-with-nginx-rtmp
./configure --add-module=/usr/build/nginx-rtmp-module
make
make install
| With nginx 1.9.11, it's not necessary to recompile the server, as they added support for dynamic modules. Take a look here:
https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/
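Assuming your nginx is 1.9.11 or newer, a sketch of the dynamic-module build looks like this (the module path is illustrative, and any other ./configure flags should match what your packaged nginx was built with, as shown by nginx -V):

./configure --add-dynamic-module=/usr/build/nginx-rtmp-module
make modules

Copy the resulting objs/ngx_rtmp_module.so into nginx's modules directory, then load it at the top of nginx.conf:

load_module modules/ngx_rtmp_module.so;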
| NGINX | 16,049,717 | 62 |
What is the difference between:
location = /abc {}
and
location ~ /abc {}
| location = /abc {} matches the exact URI /abc.
location ~ /abc is a regex match on the URI, meaning any URI containing /abc;
you probably want location ~ ^/abc instead, for URIs beginning with /abc.
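A small illustrative config showing the difference:

location = /abc {
    # matches only the exact URI /abc
}

location ~ ^/abc {
    # matches /abc, /abcdef, /abc/def, ... (regex locations are checked in order; first match wins)
}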
| NGINX | 5,239,131 | 62 |
Nginx+PHP (on fastCGI) works great for me. When I enter a path to a PHP file which doesn't exist, instead of getting the default 404 error page (which comes for any invalid .html file), I simply get a "No input file specified.".
How can I customize this 404 error page?
| You can set up a custom error page for every location block in your nginx.conf, or a global error page for the site as a whole.
To redirect to a simple 404 not found page for a specific location:
location /my_blog {
error_page 404 /blog_article_not_found.html;
}
A site wide 404 page:
server {
listen 80;
error_page 404 /website_page_not_found.html;
...
You can append standard error codes together to have a single page for several types of errors:
location /my_blog {
    error_page 500 502 503 504 /server_error.html;
}
To redirect to a totally different server, assuming you had an upstream server named server2 defined in your http section:
upstream server2 {
server 10.0.0.1:80;
}
server {
location /my_blog {
error_page 404 @try_server2;
}
location @try_server2 {
proxy_pass http://server2;
    }
}
The manual can give you more details, or you can search Google for the terms nginx.conf and error_page for real-life examples on the web.
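One note specific to the "No input file specified." message: that response comes from PHP/FastCGI itself, so for nginx's error_page to replace it you generally also need to intercept upstream errors. A sketch (your fastcgi_pass/fastcgi_param lines will differ):

location ~ \.php$ {
    fastcgi_intercept_errors on;   # let nginx's error_page handle FastCGI error responses
    # ... your usual fastcgi_pass and fastcgi_param directives ...
}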
| NGINX | 1,024,199 | 62 |
I'm actually working on a webapp, I use Reactjs for the frontend and Golang for the backend. Those 2 programs are hosted separately on 2 VMs on Google-Compute-Engine. I want to serve my app through https so I choose to use Nginx for serving the frontend in production. Firstly I made my config file for Nginx:
#version: nginx/1.14.0 (ubuntu)
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/banshee;
server_name XX.XXX.XX.XXX; #public IP of my frontend VM
index index.html;
location / {
try_files $uri /index.html =404;
}
}
For this part everything works as expected but after that I want to serve my App over https following this tutorial. I installed the packages software-properties-common,python-certbot-apache
and certbot but when I tried
sudo cerbot --nginx certonly
I get the following message:
gdes@frontend:/etc/nginx$ sudo certbot --nginx certonly
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Could not choose appropriate plugin: The requested nginx plugin does not appear to be installed
The requested nginx plugin does not appear to be installed
I searched on Google and here, and I still can't figure out which plugin is missing or another way to fix this.
Does anyone have an idea to help me?
Thanks a lot :)
| I was trying to create Let's Encrypt certificate using certbot for my sub-domain and had the following issue.
Command:
certbot --nginx -d my_subdomain.website.com -d my_subdomain2.website.com
Issue:
The requested Nginx plugin does not appear to be installed
Solution:
Ubuntu 20+
sudo apt-get install python3-certbot-nginx
Earlier Versions
sudo apt-get install python-certbot-nginx
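With the plugin installed, re-running the original command should succeed. For example (domain is a placeholder):

sudo certbot --nginx -d my_subdomain.website.com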
| NGINX | 53,223,914 | 61 |
Now that I have nginx setup I need to be able to hide my .git directories. What kind of rewrite would I need to stop prying eyes? And where in the server {} or http {} block would it go?
| http {
server {
location ~ /\.git {
deny all;
}
}
}
This location directive will deny access to any .git directory in any subdirectory.
Note: nginx checks regex locations in the order they appear and uses the first match, and a matching regex location takes precedence over ordinary prefix locations (those without ^~), so keep this block above your other regex location blocks.
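Also note that deny all answers with 403 Forbidden, which still hints that something exists at that path. If you'd rather make the repository indistinguishable from a missing file, return a 404 instead (a sketch):

location ~ /\.git {
    return 404;
}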
| NGINX | 2,999,353 | 61 |
How to fix Error: must either provide a name or specify --generate-name in Helm
I created a sample Helm chart named mychart and wrote deployment.yaml, service.yaml, and ingress.yaml with an nginx service. After that, I run a command like $ helm install mychart
service.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
ports:
- name: main
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.13
ports:
        - containerPort: 80
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx
annotations:
http.port: "443"
spec:
backend:
serviceName: nginx
servicePort: 80
Expected output:
.....
status: DEPLOYED
| Just add --generate-name at the end of the helm install command.
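For example, with a chart directory named mychart (the release name below is a placeholder), either of these works under Helm 3:

helm install my-release ./mychart       # explicit release name
helm install ./mychart --generate-name  # let Helm generate a name

Helm 3 made the release name a required positional argument (Helm 2 auto-generated one when omitted), which is why a bare helm install mychart now fails with this error.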
| NGINX | 57,322,873 | 60 |
I am using React Router for routing for a multi-page website. When trying to go to a sub page directly https://test0809.herokuapp.com/signin you'd get a "404 Not Found -nginx" error (To be able to see this problem you might need to go to this link in Incognito mode so there's no cache). All the links work fine if you go from the home page: test0809.herokuapp.com/. I was using BrowserRouter and was able to eliminate the "404 not found" error by changing BrowserRouter to HashRouter, which gives all my urls a "#" sign. Besides all the problems with having a "#" in your urls, the biggest issue with it is that I need to implement LinkedIn Auth in my website, and LinkedIn OAuth 2.0 does not allow redirect URLs to contain #.
LinkedIn OAuth 2.0 error screen grab
import React, { Component } from 'react'
import { BrowserRouter as Router, Route, Link } from 'react-router-dom'
import LinkedIn from 'react-linkedin-login'
const Home = () => <div><h2>Home</h2></div>
const About = () => <div><h2>About</h2></div>
class Signin extends Component {
callbackLinkedIn = code => {
console.log(1, code)
}
render() {
return (
<div>
<h2>Signin</h2>
<LinkedIn
clientId="clientID"
callback={this.callbackLinkedIn}
        />
</div>
)
}
}
const BasicExample = () =>
<Router>
<div>
<ul>
<li>
<Link to="/">Home</Link>
</li>
<li>
<Link to="/about">About</Link>
</li>
<li>
<Link to="/signin">Signin</Link>
</li>
</ul>
<hr />
<Route exact path="/" component={Home} />
<Route path="/about" component={About} />
<Route path="/signin" component={Signin} />
</div>
</Router>
export default BasicExample
Any suggestions on the workarounds?
Background: I started the project with create-react-app. GitHub repo: /debelopumento/test0809
| The problem is that nginx doesn't know what to do with /signin. You need to change your nginx config (usually in /etc/nginx/conf.d/) to serve your index.html regardless of the route. Here is a sample nginx config that might help:
server {
listen 80 default_server;
    server_name example.com;
root /var/www/example.com;
index index.html index.htm;
location ~* \.(?:manifest|appcache|html?|xml|json)$ {
expires -1;
# access_log logs/static.log; # I don't usually include a static log
}
location ~* \.(?:css|js)$ {
try_files $uri =404;
expires 1y;
access_log off;
add_header Cache-Control "public";
}
# Any route containing a file extension (e.g. /devicesfile.js)
location ~ ^.+\..+$ {
try_files $uri =404;
}
# Any route that doesn't have a file extension (e.g. /devices)
location / {
try_files $uri $uri/ /index.html;
}
}
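After editing the config, validate and reload nginx:

sudo nginx -t && sudo nginx -s reload

The part that matters for React Router is the final try_files $uri $uri/ /index.html; line: any path that isn't a real file on disk falls back to index.html, so routes like /signin are handled by the client-side router instead of producing a 404.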
| NGINX | 45,598,779 | 60 |
I use NGINX on my dedicated server.
I have a question about return 301 and rewrite 301.
Rewrite 301:
rewrite ^ http://xxx.xxxxx.net/xx-xxx/$request_uri? permanent;
Return 301:
location ~ redirect-this/?$ {
return 301 http://xxx.xxxxx.net/xx-xxx/redirect-this$1;
}
All redirects work correctly. But...
Which is the most effective method to make a 301 redirect?
I have more than 200 URLs to redirect. So, what do you recommend?
| As stated in the nginx pitfalls, you should use server blocks and return statements, as they're far faster than evaluating regular expressions via location blocks.
By the way, since you're forcing the rewrite rule to send a 301, there's no difference between the two when it comes to SEO.
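For example, redirecting an entire legacy host with a plain server block (hostnames below are placeholders):

server {
    listen 80;
    server_name old.example.net;
    return 301 http://xxx.xxxxx.net/xx-xxx$request_uri;
}

For 200+ individual URL mappings, one common pattern worth considering is a map block keyed on $request_uri that yields the target URL, combined with a single return; that keeps the per-request work to a hash lookup rather than 200 regex evaluations.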
| NGINX | 30,165,746 | 60 |
I'm getting the following error on my chrome console:
GET http://localhost/grunt/vendor/angular/angular.js net::ERR_CONTENT_LENGTH_MISMATCH
This only happens when simultaneous requests are shot towards nginx, e.g. when the browser's cache is empty and the whole app loads. Loading the resource above as a single request succeeds.
Here are the headers for this request, copied from Chrome:
Remote Address:127.0.0.1:80
Request URL:http://localhost/grunt/vendor/angular/angular.js
Request Method:GET
Status Code:200 OK
Request Headersview source
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8,de;q=0.6,pl;q=0.4,es;q=0.2,he;q=0.2,gl;q=0.2
Cache-Control:no-cache
Connection:keep-alive
Cookie:gs_u_GSN-265185-D=1783247335:2567:5000:1377697930719
Host:localhost
Pragma:no-cache
Referer:http://localhost/grunt/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.122 Safari/537.36
Response Headersview source
Accept-Ranges:bytes
Cache-Control:public, max-age=0
Connection:keep-alive
Content-Length:873444
Content-Type:application/javascript
Date:Tue, 23 Sep 2014 11:08:19 GMT
ETag:"873444-1411465226000"
Last-Modified:Tue, 23 Sep 2014 09:40:26 GMT
Server:nginx/1.6.0
the real size of the file:
$ ll vendor/angular/angular.js
-rw-rw-r-- 1 xxxx staff 873444 Aug 30 07:21 vendor/angular/angular.js
As you can see, the Content-Length and the real size of the file are the same, so that's weird.
And the nginx configuration to this proxy:
location /grunt/ {
proxy_pass http://localhost:9000/;
}
Any ideas?
Thanks
EDIT: I found more info in the error log:
2014/09/23 13:08:19 [crit] 15435#0: *8 open() "/usr/local/var/run/nginx/proxy_temp/1/00/0000000001" failed (13: Permission denied) while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /grunt/vendor/angular/angular.js HTTP/1.1", upstream: "http://127.0.0.1:9000/vendor/angular/angular.js", host: "localhost", referrer: "http://localhost/grunt/"
| Adding the following line to the nginx config was the only thing that fixed the net::ERR_CONTENT_LENGTH_MISMATCH error for me:
proxy_buffering off;
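In context, that means (matching the proxy block from the question):

location /grunt/ {
    proxy_pass http://localhost:9000/;
    proxy_buffering off;
}

This works because the [crit] log line shows nginx failing with Permission denied while writing the buffered upstream response into its proxy_temp directory; with buffering disabled, the response is streamed straight to the client and that directory is never touched. The alternative fix is to repair the directory's ownership so the nginx worker user can write to it, e.g. (path taken from the log; the worker's user name depends on your setup):

sudo chown -R nginx /usr/local/var/run/nginx/proxy_temp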
| NGINX | 25,993,826 | 60 |
I've got a Node.js powered site that I'm running on Amazon Elastic Beanstalk.
My Node.js app listens on port 8080, and I'm using the nginx elastic load balancer configuration with my EB app, listening on port 80 and 443 for HTTP and HTTPS.
However, I only want to accept traffic in my app that has come via HTTPS.
I could rig something up in the app to deal with this, but am interested in a way to get the load balancer to redirect all HTTP requests to my site via HTTPS.
| After several false-starts with ideas from Amazon's paid support, they did come through in the end. The way you get this to work is you configure your environment to respond to both port 80 and 443. Then create a folder in your main Node.js app folder called .ebextensions, and you place a file named 00_nginx_https_rw.config in there, with this text as the contents:
files:
"/tmp/45_nginx_https_rw.sh":
owner: root
group: root
mode: "000644"
content: |
#! /bin/bash
CONFIGURED=`grep -c "return 301 https" /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf`
if [ $CONFIGURED = 0 ]
then
sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
logger -t nginx_rw "https rewrite rules added"
exit 0
else
logger -t nginx_rw "https rewrite rules already set"
exit 0
fi
container_commands:
00_appdeploy_rewrite_hook:
command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
01_configdeploy_rewrite_hook:
command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
02_rewrite_hook_perms:
command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
03_rewrite_hook_ownership:
command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
Amazon's support team explained: This config creates a deployment hook which will add the rewrite rules to /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf.
(Previously they had offered me .config's that copied separate files into /etc/nginx/conf.d, but those either had no effect, or worse, seemed to overwrite or take precedence over the default nginx configuration, for some reason.)
If you ever want to undo this, i.e. to remove the hooks, you need to remove this ebextension and issue a command to remove the files that it creates. You can do this either manually, or via ebextensions commands you put in place temporarily:
/opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh
/opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
I haven't tried this, but presumably something like this would work to remove them and undo this change:
container_commands:
00_undochange:
command: rm /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh
01_undochange:
command: rm /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
Hope this can help someone else in the future.
| NGINX | 24,297,375 | 60 |
After downloading and trying to configure nginx, when I'm executing the command ./configure,
I'm getting this error:
./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.
and when I execute the
apt-get build-dep nginx
command, I'm getting the following error:
The following packages have unmet dependencies:
libgd2-noxpm-dev : Depends: libgd2-noxpm (= 2.0.36~rc1~dfsg-6ubuntu2) but it is not going to be installed
E: Build-dependencies for nginx could not be satisfied.
I don't have any idea about libgd2-noxpm. This is my first time with nginx. How do I overcome this error? Thank you in advance.
| You have to install pcre3:
apt-get install libpcre3 libpcre3-dev
The library is required for regular expressions support in the location directive and for the ngx_http_rewrite_module module. http://nginx.org/en/docs/install.html
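After installing the libraries, re-run the build from the nginx source directory:

./configure
make
sudo make install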
| NGINX | 14,045,720 | 60 |
Environment is Nginx + uwsgi.
Getting a 502 bad gateway error from Nginx on certain GET requests. Seems to be related to the length of the URL. In our particular case, it was a long list of GET parameters. Shorten the GET parameters and no 502 error.
From the nginx/error.log
[error] 22113#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.1.100, server: server.domain.com, request: "GET <long_url_here>"
No information in the uwsgi error log.
| After spending a lot of time on this, I finally figured it out. There are many references to Nginx and connection reset by peer. Most of them seemed to be related to PHP. I couldn't find an answer that was specific to Nginx and uwsgi.
I finally found a reference to fastcgi and a 502 bad gateway error (https://support.plesk.com/hc/en-us/articles/213903705). That led me to look for a buffer size limit in the uwsgi configuration, which exists as buffer-size. The default value is 4096. From the documentation:
If you plan to receive big requests with lots of headers you can increase this value up to 64k (65535).
There are many ways to configure uwsgi, I happen to use a .ini file. So in my .ini file I tried:
buffer-size=65535
This fixed the problem. You can adjust that to taste. Maybe start with the max and work back until you have an acceptable value, or just leave it at the max.
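In .ini form that might look like this (socket and module values are placeholders matching the setup described in the question):

[uwsgi]
socket = /tmp/uwsgi.sock
module = my_app:app
buffer-size = 65535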
This was frustrating to track down because there was no error on the uwsgi side of things.
| NGINX | 22,697,584 | 59 |
I am getting this error in my nginx-error.log file:
2014/02/17 03:42:20 [crit] 5455#0: *1 connect() to unix:/tmp/uwsgi.sock failed (13: Permission denied) while connecting to upstream, client: xx.xx.x.xxx, server: localhost, request: "GET /users HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "EC2.amazonaws.com"
The browser also shows a 502 Bad Gateway Error. The output of a curl is the same, Bad Gateway html
I've tried to fix it by changing permissions for /tmp/uwsgi.sock to 777. That didn't work. I also added myself to the www-data group (a couple questions that looked similar suggested that). Also, no dice.
Here is my nginx.conf file:
nginx.conf
worker_processes 1;
worker_rlimit_nofile 8192;
events {
worker_connections 3000;
}
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I am running a Flask application with Nginsx and Uwsgi, just to be thorough in my explanation. If anyone has any ideas, I would really appreciate them.
EDIT
I have been asked to provide my uwsgi config file. So, I never personally wrote my nginx or my uwsgi file. I followed the guide here which sets everything up using ansible-playbook. The nginx.conf file was generated automatically, but there was nothing in /etc/uwsgi except a README file in both apps-enabled and apps-available folders. Do I need to create my own config file for uwsgi? I was under the impression that ansible took care of all of those things.
I believe that ansible-playbook figured out my uwsgi configuration since when I run this command
uwsgi -s /tmp/uwsgi.sock -w my_app:app
it starts up and outputs this:
*** Starting uWSGI 2.0.1 (64bit) on [Mon Feb 17 20:03:08 2014] ***
compiled with version: 4.7.3 on 10 February 2014 18:26:16
os: Linux-3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014
nodename: ip-10-9-xxx-xxx
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/username/Project
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 4548
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x1f60260
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72760 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x1f60260 pid: 26790 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 26790, cores: 1)
| The permission issue occurs because uwsgi resets the ownership and permissions of /tmp/uwsgi.sock to 755 and to the user running uwsgi every time it starts.
The correct way to solve the problem is to make uwsgi change the ownership and/or permission of /tmp/uwsgi.sock such that nginx can write to this socket. Therefore, there are three possible solutions.
Run uwsgi as the www-data user so that this user owns the socket file created by it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --uid www-data --gid www-data
Change the ownership of the socket file so that www-data owns it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chown-socket=www-data:www-data
Change the permissions of the socket file, so that www-data can write to it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chmod-socket=666
I prefer the first approach because it does not leave uwsgi running as root.
The first two commands need to be run as root user. The third command does not need to be run as root user.
The first command leaves uwsgi running as www-data user. The second and third commands leave uwsgi running as the actual user that ran the command.
The first and second command allow only www-data user to write to the socket. The third command allows any user to write to the socket.
I prefer the first approach because it does not leave uwsgi running as the root user and it does not make the socket file world-writable.
| NGINX | 21,820,444 | 59 |
I'm trying to change the client_max_body_size value, so my NGINX ingress will not return the HTTP 413 Content Too Large error (as seen in the logs).
I've tested a few solutions.
Here is my config map:
kind: ConfigMap
apiVersion: v1
data:
proxy-connect-timeout: "15"
proxy-read-timeout: "600"
proxy-send-timeout: "600"
proxy-body-size: "8m"
hsts-include-subdomains: "false"
body-size: "64m"
server-name-hash-bucket-size: "256"
client-max-body-size: "50m"
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app: ingress-nginx
These changes had no effect at all: in the NGINX controller's log I can see the information about reloading the config map, but the values in nginx.conf are the same:
$ cat /etc/nginx/nginx.conf | grep client_max
client_max_body_size "8m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
client_max_body_size "1m";
My nginx-controller config uses this image:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
How can I force NGINX to change this setting? I need to change it globally, for all my ingresses.
| You can use the annotation nginx.ingress.kubernetes.io/proxy-body-size to set the max-body-size option right in your Ingress object instead of changing a base ConfigMap.
Here is the example of usage:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
...
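If you do want a single global default for all ingresses, the ConfigMap key the NGINX ingress controller understands is proxy-body-size (client-max-body-size is not among the documented keys). A sketch:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "50m"

Per-ingress annotations like the one above then override the global default where needed.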
| NGINX | 49,918,313 | 58 |
I want to use gunicorn for a REST API application with Flask/Python. What is the purpose of adding nginx here to gunicorn? The gunicorn site recommends using gunicorn with nginx.
| Nginx has some web server functionality (e.g., serving static pages; SSL handling) that gunicorn does not, whereas gunicorn implements WSGI (which nginx does not).
... Wait, why do we need two servers? Think of Gunicorn as the
application web server that will be running behind nginx – the front-
facing web server. Gunicorn is WSGI-compatible. It can talk to other
applications that support WSGI, like Flask or Django.
Source: https://realpython.com/blog/python/kickstarting-flask-on-ubuntu-setup-and-deployment/
| NGINX | 43,044,659 | 58 |
I'm trying to get my server set up again as a LEMP stack.
The issue I am now running into is installing PHP 7 without Apache, since Nginx will be my web server.
So, I've added ppa:ondrej/php, ran apt-get update, and tried to install just php7.0 via apt-get install php7.0.
--nodeps flag does not work, as I am on Ubuntu 15.10
And I am presented with:
The following extra packages will be installed:
apache2 apache2-bin apache2-data apache2-utils libapache2-mod-php7.0 libapr1
libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.1-0 libqdbm14
php-common php-readline php7.0 php7.0-cli php7.0-common php7.0-json
php7.0-opcache php7.0-readline
Suggested packages:
apache2-doc apache2-suexec-pristine apache2-suexec-custom php-pear
php-user-cache
The following NEW packages will be installed:
apache2 apache2-bin apache2-data apache2-utils libapache2-mod-php7.0 libapr1
libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap liblua5.1-0 libqdbm14
php php-common php-readline php7.0 php7.0-cli php7.0-common php7.0-json
php7.0-opcache php7.0-readline
I do not want apache anywhere near my server, so how can I install php7 without it? Short of compiling from source (as this makes it difficult at best to keep it updated)
| If you just request php7.0, it'll install Apache by default. Do apt-get install php7.0-fpm and it'll install as FPM instead, leaving something like nginx up to you.
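Once php7.0-fpm is installed and running, a typical nginx location block hands PHP requests to it over the FPM socket. A sketch (the socket path is the usual default for this package but may differ on your system):

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}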
| NGINX | 34,880,267 | 58 |
I have got a virtual private server with nginx Virtual Hosts setup (Server Blocks).
I've installed Git and got my ssh keys authenticated with GitHub.
I have my website running in
~/var/www/example.com/public_html/
I tried to run:
git clone [email protected]:example/example.co.uk.git
to pull my files on GitHub to the /public_html/ directory but I get the error:
fatal: could not create work tree dir 'example.com'.: Permission denied
I've followed this tutorial, including the fix for the same issue he mentions on the page, but it still won't work: http://machiine.com/2013/pulling-a-git-repo-from-github-to-your-ubuntu-server/
I'm completely new to this, so your help would be much appreciated!
| I think you don't have your permissions set up correctly for /var/www.
Change the ownership of the folder:
sudo chown -R $USER /var/www
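Then retry the clone into the target directory:

git clone [email protected]:example/example.co.uk.git /var/www/example.com/public_html

If you'd rather not hand all of /var/www to your user, you can scope the ownership change to just the site's directory instead, e.g. sudo chown -R $USER /var/www/example.com.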
| NGINX | 20,276,895 | 58 |