I'm developing an Ember.js app with a Laravel backend. I'm trying to return HTTP error codes from PHP if something goes awry. I've noticed that when I issue a PUT request and return a 400 status code, the CORS headers from my conf file get ignored, which breaks my Ember frontend. I have no idea why the PUT/400 combination makes nginx ignore my conf. Any help would be much appreciated.

```nginx
server {
    listen *:80;
    server_name userchamp.com;

    access_log /var/log/nginx/embertest.com.access.log;

    location / {
        root /var/www/embertest/public;
        try_files $uri $uri/ /index.php?$args;
        index index.html index.htm index.php;
    }

    location ~ \.php$ {
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        if ($request_method = 'POST') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        }
        if ($request_method = 'PUT') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        }
        if ($request_method = 'GET') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        }
        if ($request_method = 'DELETE') {
            add_header 'Access-Control-Allow-Origin' '*';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
        }

        root /var/www/embertest/public;
        try_files $uri $uri/ /index.php?$args;
        index index.html index.htm index.php;

        fastcgi_index index.php;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param APP_ENV dev;
        fastcgi_param APP_DBG true;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
    }
}
```
For nginx >= 1.7.5, append `always` to the header definition:

```nginx
add_header 'Access-Control-Allow-Origin' '*' always;
```

For nginx < 1.7.5: according to the official documentation of ngx_http_headers_module, `add_header` does not work when the response code is 400:

> syntax: add_header name value;
> default: —
> context: http, server, location, if in location
>
> Adds the specified field to a response header provided that the response code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307. A value can contain variables.

Alternatively, you can try the HttpHeadersMoreModule, which is more powerful.
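If you go the headers-more route, a minimal sketch might look like the following. This assumes the third-party openresty/headers-more-nginx-module is compiled in or loaded as a dynamic module; unlike `add_header`, `more_set_headers` applies to all response codes by default:

```nginx
location ~ \.php$ {
    # headers-more sets these on every status code, including 400
    more_set_headers 'Access-Control-Allow-Origin: *';
    more_set_headers 'Access-Control-Allow-Credentials: true';
    # ... fastcgi configuration as before ...
}
```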
Tag: NGINX · Question ID: 20414669 · Score: 52
I'm trying to find tutorials on how to build a module/plugin for the Nginx web server. Can someone help, please? I just can't seem to find the appropriate tutorials.
Quoting from the documentation:

> Evan Miller has written the definitive guide to Nginx module development. But some parts of it are a little out of date. You've been warned.

A GitHub search turned up the Nginx Development Kit, which seems to be more up to date. From my own personal experience, Evan Miller's guide was a great help, but you must also have a deep understanding of how NGINX works. Agentzh's tutorial can help you, and reading the source code of his modules is always helpful too. There is also a video tutorial that I haven't checked yet, but it seems nice.
Tag: NGINX · Question ID: 1187314 · Score: 52
I'm trying to follow this Ansible tutorial while adjusting it for Ubuntu 16.04 with PHP 7. Below this message you'll find my Ansible file. After running it and trying to visit the page in the browser I get a 404, and the following in the nginx error logs:

```
2016/10/15 13:13:20 [crit] 28771#28771: *7 connect() to unix:/var/run/php7.0-fpm.sock failed (2: No such file or directory) while connecting to upstream, client: 93.xxx.xxx.xx, server: 95.xx.xx.xx, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7.0-fpm.sock:", host: "95.xx.xx.xx"
```

So I checked if the socket file exists, and it seems to exist, but ls behaves weirdly:

```
$ sudo ls -l /var/run/php
total 4
-rw-r--r-- 1 root     root     5 Oct 15 13:00 php7.0-fpm.pid
srw-rw---- 1 www-data www-data 0 Oct 15 13:00 php7.0-fpm.sock
$ sudo ls -l /var/run/php7
ls: cannot access '/var/run/php7': No such file or directory
$ sudo ls -l /var/run/php7.0-fpm.sock
ls: cannot access '/var/run/php7.0-fpm.sock': No such file or directory
```

Why can ls find the socket file when I search for it by part of the name (php), while it cannot find the socket file when I list more than that (php7, or even the full name php7.0-fpm.sock)? And most importantly, how can I make this work with nginx? All tips are welcome! Below I pasted my Ansible file:

```yaml
---
- hosts: php
  become: true
  tasks:
    - name: install packages
      apt: name={{ item }} update_cache=yes state=latest
      with_items:
        - git
        - mcrypt
        - nginx
        - php-cli
        - php-curl
        - php-fpm
        - php-intl
        - php-json
        - php-mcrypt
        - php-mbstring
        - php-sqlite3
        - php-xml
        - sqlite3
    - name: enable mbstring
      shell: phpenmod mbstring
      notify:
        - restart php7.0-fpm
        - restart nginx
    - name: create /var/www/ directory
      file: dest=/var/www/ state=directory owner=www-data group=www-data mode=0700
    - name: Clone git repository
      git: >
        dest=/var/www/laravel
        repo=https://github.com/laravel/laravel.git
        update=no
      become: true
      become_user: www-data
      register: cloned
    - name: install composer
      shell: curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      args:
        creates: /usr/local/bin/composer
    - name: composer create-project
      composer: command=create-project working_dir=/var/www/laravel optimize_autoloader=no
      become: true
      become_user: www-data
      when: cloned|changed
    - name: set APP_DEBUG=false
      lineinfile: dest=/var/www/laravel/.env regexp='^APP_DEBUG=' line=APP_DEBUG=false
    - name: set APP_ENV=production
      lineinfile: dest=/var/www/laravel/.env regexp='^APP_ENV=' line=APP_ENV=production
    - name: Configure nginx
      template: src=nginx.conf dest=/etc/nginx/sites-available/default
      notify:
        - restart php5-fpm
        - restart nginx
  handlers:
    - name: restart php7.0-fpm
      service: name=php7.0-fpm state=restarted
    - name: restart nginx
      service: name=nginx state=restarted
    - name: reload nginx
      service: name=nginx state=reloaded
```
Had the same problem. The solution is very easy. In your nginx conf file you are upstreaming to:

```
unix:/var/run/php7.0-fpm.sock
```

The correct path is:

```
unix:/var/run/php/php7.0-fpm.sock
```

There is a mention of this in the documentation:

> Nginx communicates with PHP-FPM using a Unix domain socket. Sockets map to a path on the filesystem, and our PHP 7 installation uses a new path by default:
>
> PHP 5: /var/run/php5-fpm.sock
> PHP 7: /var/run/php/php7.0-fpm.sock
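The question's nginx template isn't shown, but in a typical Ubuntu 16.04 server block the corrected upstream would look roughly like this sketch (the `snippets/fastcgi-php.conf` include is the stock Ubuntu layout; only the socket path is the actual fix):

```nginx
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    # PHP 7's socket lives under /var/run/php/ on Ubuntu 16.04
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
}
```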
Tag: NGINX · Question ID: 40059745 · Score: 51
The below is my nginx configuration file located in /etc/nginx/nginx.conf:

```nginx
user Foo;
worker_processes 1;

error_log /home/Foo/log/nginx/error.log;
pid /home/Foo/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
}

http {
    access_log /home/Foo/log/nginx/access.log;
    server {
        listen 80;
        location = / {
            proxy_pass http://192.168.0.16:9999;
        }
    }
}
```

As you can see, I changed the log and pid file locations to my home directory. When I restart Linux it seems to work: nginx records error logs in the file I set, and the pid file too. However, when I try `nginx -s reload` or the like, it tries to open a different error log file:

```
nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied)
2015/12/14 11:23:54 [warn] 3356#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
2015/12/14 11:23:54 [emerg] 3356#0: open() "/home/Foo/run/nginx.pid" failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed
```

I know I can solve the permission error with sudo, but the main issue here is the error log file (/var/log/nginx/error.log) that nginx tries to open. Why does it try to access another error log file?
You might need to run it with sudo:

```
sudo nginx -t
```

Before nginx has parsed your configuration, it logs to the error log path compiled into the binary (/var/log/nginx/error.log by default), which is why that path shows up even though your config points elsewhere; run as an unprivileged user, it can open neither that file nor the root-owned pid file.
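The same applies to reloading; assuming the config lives at the default path, a typical sequence would be:

```
sudo nginx -t && sudo nginx -s reload
```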
Tag: NGINX · Question ID: 34258894 · Score: 51
For example, I want to do this:

```nginx
if ($http_user_agent ~ "MSIE 6.0" || $http_user_agent ~ "MSIE 7.0" (etc, etc)) {
    rewrite ^ ${ROOT_ROOT}ancient/ last;
    break;
}
```

instead of this:

```nginx
if ($http_user_agent ~ "MSIE 6.0") {
    rewrite ^ ${ROOT_ROOT}ancient/ last;
    break;
}
if ($http_user_agent ~ "MSIE 7.0") {
    rewrite ^ ${ROOT_ROOT}ancient/ last;
    break;
}
```

Nginx rejects this syntax (minus the "(etc, etc)"), and I don't see anything in the docs about this. Also, we opted not to use the $ancient_browser directive, so that's not an option.
Edit: As Alexey Ten didn't add a new answer, I'll edit mine to give his better answer in this case:

```nginx
if ($http_user_agent ~ "MSIE [67]\.")
```

Original answer: Nginx doesn't allow multiple or nested if statements; however, you can do this:

```nginx
set $test 0;
if ($http_user_agent ~ "MSIE 6\.0") {
    set $test 1;
}
if ($http_user_agent ~ "MSIE 7\.0") {
    set $test 1;
}
if ($test = 1) {
    rewrite ^ ${ROOT_ROOT}ancient/ last;
}
```

It is not shorter, but it allows you to do the check and put the rewrite rule only once.

Alternative answer: In some cases, you can also use | (pipe):

```nginx
if ($http_user_agent ~ "(MSIE 6\.0)|(MSIE 7\.0)") {
    rewrite ^ ${ROOT_ROOT}ancient/ last;
}
```
Tag: NGINX · Question ID: 29756330 · Score: 51
I use nginx/1.6 and Laravel. When I POST data to the server I get the error "413 Request Entity Too Large". I tried many solutions, as below:

1. Set `client_max_body_size 100m;` in the server, location, and http blocks in nginx.conf.
2. Set `upload_max_filesize = 100M` in php.ini.
3. Set `post_max_size = 100M` in php.ini.

After restarting php5-fpm and nginx, the problem is still not solved.
Add `client_max_body_size xxM` inside the http section in /etc/nginx/nginx.conf, where xx is the size (in megabytes) that you want to allow:

```nginx
http {
    client_max_body_size 20M;
}
```
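The change only takes effect once nginx reloads its configuration; assuming a systemd-based distro, a typical sequence is:

```
sudo nginx -t && sudo systemctl reload nginx
```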
Tag: NGINX · Question ID: 26608606 · Score: 51
I'm trying to set up an application webserver using uWSGI + Nginx, which runs a Flask application using SQLAlchemy to communicate with a Postgres database. When I make requests to the webserver, every other response is a 500 error. The error is:

```
Traceback (most recent call last):
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/var/env/argos/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 388, in do_execute
    cursor.execute(statement, parameters)
psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The above exception was the direct cause of the following exception:

sqlalchemy.exc.OperationalError: (OperationalError) SSL error: decryption failed or bad record mac
```

The error is triggered by a simple Flask-SQLAlchemy method:

```python
result = models.Event.query.get(id)
```

uwsgi is being managed by supervisor, which has this config:

```ini
[program:my_app]
command=/usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/myapp.ini --catch-exceptions
directory=/path/to/my/app
stopsignal=QUIT
autostart=true
autorestart=true
```

and uwsgi's config looks like:

```ini
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data
```

The furthest I can get is that it has something to do with uwsgi's forking. But beyond that I'm not clear on what needs to be done.
The issue ended up being uwsgi's forking. When working with multiple processes with a master process, uwsgi initializes the application in the master process and then copies the application over to each worker process. The problem is that if you open a database connection when initializing your application, you then have multiple processes sharing the same connection, which causes the error above.

The solution is to set the lazy configuration option for uwsgi, which forces a complete loading of the application in each process:

> lazy
> Set lazy mode (load apps in workers instead of master). This option may have memory usage implications as Copy-on-Write semantics can not be used. When lazy is enabled, only workers will be reloaded by uWSGI's reload signals; the master will remain alive. As such, uWSGI configuration changes are not picked up on reload by the master.

There's also a lazy-apps option:

> lazy-apps
> Load apps in each worker instead of the master. This option may have memory usage implications as Copy-on-Write semantics can not be used. Unlike lazy, this only affects the way applications are loaded, not master's behavior on reload.

This uwsgi configuration ended up working for me:

```ini
[uwsgi]
socket = /tmp/my_app.sock
logto = /var/log/my_app.log
plugins = python3
virtualenv = /path/to/my/venv
pythonpath = /path/to/my/app
wsgi-file = /path/to/my/app/application.py
callable = app
max-requests = 1000
chmod-socket = 666
chown-socket = www-data:www-data
master = true
processes = 2
no-orphans = true
log-date = true
uid = www-data
gid = www-data

# the fix
lazy = true
lazy-apps = true
```
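An alternative sometimes used instead of lazy loading is to keep copy-on-write but discard the inherited connection pool right after each fork. A minimal sketch with Flask-SQLAlchemy might look like this; `uwsgidecorators` ships with uWSGI, while `myapp` and `db` are hypothetical names standing in for whatever SQLAlchemy handle the app actually uses:

```python
# Runs in every worker immediately after uWSGI forks it from the master.
from uwsgidecorators import postfork

from myapp import db  # hypothetical: your Flask-SQLAlchemy instance


@postfork
def _reset_db_pool():
    # Drop connections inherited from the master so each worker
    # opens its own, avoiding shared-socket SSL errors.
    db.engine.dispose()
```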
Tag: NGINX · Question ID: 22752521 · Score: 51
On my Ubuntu server, I installed nginx and set up a virtual host using this article: https://www.digitalocean.com/community/articles/how-to-set-up-nginx-virtual-hosts-server-blocks-on-ubuntu-12-04-lts--3

The virtual host's domain name is www.example.com. When I go to www.example.com, I can see my application's index page. However, when I go to the real IP address, I still see the nginx welcome page. What can I do to remove this welcome page, or to point to www.example.com if someone uses the IP address to access my site? I set up an A record to point IP xxx.xxx.xxx.xxx to www.example.com.
You need to remove the file default, located in /etc/nginx/sites-enabled:

```
rm /etc/nginx/sites-enabled/default
```

Then restart nginx:

```
service nginx reload
```
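To actively redirect bare-IP requests to the domain instead of just dropping the welcome page, a catch-all default server can be added; this is a sketch, with the server names and the 301 target being placeholders for your setup:

```nginx
server {
    listen 80 default_server;
    server_name _;
    # Any request that doesn't match another server block lands here
    return 301 http://www.example.com$request_uri;
}
```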
Tag: NGINX · Question ID: 19215641 · Score: 51
We use Nginx as a reverse proxy with this setup:

```nginx
upstream frontends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    [...]
}

server {
    location / {
        proxy_pass http://frontends;
        [...]
    }
    [...]
}
```

As part of the access log, I would like to record the upstream server that has served the request, which in our case just means the associated localhost port. The variables in the documentation (http://wiki.nginx.org/HttpProxyModule#Variables) mention $proxy_host and $proxy_port, but in the log they always end up with the values "frontends" and "80".
First add a new logging format:

```nginx
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name $host to: $upstream_addr: $request $status upstream_response_time $upstream_response_time msec $msec request_time $request_time';
```

Example output:

```
[18/Nov/2019:10:08:15 -0700] <request IP> - - - <config host> <request host> to: 127.0.0.1:8000: GET /path/requested HTTP/1.1 200 upstream_response_time 0.000 msec 1574096895.474 request_time 0.001
```

and then redefine access_log as:

```nginx
access_log /var/log/nginx/access.log upstreamlog;
```

log_format goes in the http {} section; access_log can be inside a location.
Tag: NGINX · Question ID: 18627469 · Score: 51
As someone new to the Java EE ecosystem, I'm confused by these products, which share a tremendous number of keywords, and half of them come from the Apache Software Foundation. Can someone give me a brief, distinguishing explanation of each?
Jetty and Tomcat are web containers, while Geronimo, GlassFish and JBoss support the whole J2EE stack (more or less). And, tataaa, they use/include Tomcat or Jetty as their web containers. The most important part of a full-blown J2EE server, besides the web container, used to be the EJB container, allowing for deployment of EJBs, having them run in a transactional context, etc. Today, J2EE is actually called Java EE. Entity EJBs (JPA) can run outside the EJB container, say in Tomcat, but then outside the transaction handling that an EJB container would provide.
Tag: NGINX · Question ID: 4712689 · Score: 51
I first got my nginx docker image:

```
docker pull nginx
```

Then I started it:

```
docker run -d -p 80:80 --name webserver nginx
```

Then I stopped it:

```
docker stop webserver
```

Then I tried to restart it:

```
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name.. See 'docker run --help'.
```

Well, it's an error. But in fact there's nothing in the container list now:

```
$ docker container list
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```

Why did restarting the nginx image fail? How can I fix it?
It is because you have used the --name switch: the container is stopped but not removed. You will find it stopped:

```
docker ps -a
```

You can simply start it using the command below:

```
docker start webserver
```

EDIT: Alternatives

If you want to start the container with the command below each time,

```
docker run -d -p 80:80 --name webserver nginx
```

then use one of the following:

Method 1: use the --rm switch, i.e., the container gets destroyed automatically as soon as it is stopped:

```
docker run -d -p 80:80 --rm --name webserver nginx
```

Method 2: remove it explicitly after stopping the container, before running the command that you are currently using:

```
docker stop <container name>
docker rm <container name>
```
Tag: NGINX · Question ID: 42760216 · Score: 50
I'm trying to deploy a Django project with Nginx and Gunicorn following this tutorial. I did all the to-dos, but when I create the /etc/nginx/sites-available/myproject file with the code below:

```nginx
server {
    listen 80;
    server_name server_domain_or_IP;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/sammy/myproject;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
    }
}
```

and then run `sudo nginx -t` to find errors, I get this error:

```
nginx: [emerg] open() "/etc/nginx/proxy_params" failed (2: No such file or directory) in /etc/nginx/sites-enabled/myproject:11
nginx: configuration file /etc/nginx/nginx.conf test failed
```

How can I solve the problem?
You're getting the path wrong for proxy_params. 99% of the time (from my experience), the default location for the proxy_params file is /etc/nginx/proxy_params, but that doesn't seem to be the case for you.

The proxy_params file contains the following:

```nginx
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

These parameters are used to forward information to the application that you're proxying to. I've worked with an old CentOS server that didn't have a proxy_params file; instead of creating one myself, I just included these parameters directly, and the location block looked like this:

```nginx
location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://unix:/home/sammy/myproject/myproject.sock;
}
```

So it's up to you. If the file exists in another location, just include it with the right path:

```nginx
include /path/to/proxy_params;
```

Otherwise, you can include the params directly in the location block (like I did above), or create the file yourself and place it in /etc/nginx (if you want to stick with your current code).
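Before creating the file by hand, it may be worth checking whether the distribution shipped it somewhere else; a quick search would be:

```
find /etc/nginx -name proxy_params
```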
Tag: NGINX · Question ID: 42589781 · Score: 50
I'm working through https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04. I've completed the tutorial, but I'm getting a 502 error. My nginx server block configuration file:

```nginx
server {
    listen 80;
    server_name 198.xxx.xxx.xxx mysite.org;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/deploy/mysite3;
    }

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/uwsgi/mysite3.sock;
    }
}
```

```
deploy@server:/etc/nginx/sites-enabled$ sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2017-02-06 17:30:53 EST; 4s ago
  Process: 7374 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5 --pidfile /run/nginx.pid (code=exited, status=0/SUCCESS)
  Process: 7383 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
  Process: 7380 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
 Main PID: 7384 (nginx)
   CGroup: /system.slice/nginx.service
           ├─7384 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           └─7385 nginx: worker process

Feb 06 17:30:53 server systemd[1]: Starting A high performance web server and a reverse proxy server...
Feb 06 17:30:53 server systemd[1]: nginx.service: Failed to read PID from file /run/nginx.pid: Invalid argument
Feb 06 17:30:53 server systemd[1]: Started A high performance web server and a reverse proxy server.
```

The nginx error log shows:

```
2017/02/06 21:10:32 [error] 7385#7385: *15 upstream prematurely closed connection while reading response header from upstream, client: 64.xxx.xxx.xxx, server: 198.xxx.xxx.xxx, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:/run/uwsgi/mysite3.sock:", host: "mysite.org"
```

It looks to me like uwsgi is running OK:

```
Feb 06 17:43:42 server uwsgi[7434]: WSGI app 0 (mountpoint='') ready in 1 seconds on interpreter 0xc7ac10 pid: 7435 (default app)
Feb 06 17:43:42 server uwsgi[7434]: *** uWSGI is running in multiple interpreter mode ***
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI master process (pid: 7435)
Feb 06 17:43:42 server uwsgi[7434]: Mon Feb  6 17:43:42 2017 - [emperor] vassal mysite3.ini has been spawned
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI worker 1 (pid: 7439, cores: 1)
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI worker 2 (pid: 7440, cores: 1)
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI worker 3 (pid: 7441, cores: 1)
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI worker 4 (pid: 7442, cores: 1)
Feb 06 17:43:42 server uwsgi[7434]: spawned uWSGI worker 5 (pid: 7443, cores: 1)
Feb 06 17:43:42 server uwsgi[7434]: Mon Feb  6 17:43:42 2017 - [emperor] vassal mysite3.ini is ready to accept requests
```

How can I fix this?
Edit:

```
root@server:~# mkdir /etc/systemd/system/nginx.service.d
root@server:~# printf "[Service]\nExecStartPost=/bin/sleep 0.1\n" > /etc/systemd/system/nginx.service.d/override.conf
root@server:~# systemctl daemon-reload
root@server:~# systemctl restart nginx
root@server:~# systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: en
  Drop-In: /etc/systemd/system/nginx.service.d
           └─override.conf
   Active: active (running) since Tue 2017-02-07 08:18:26 EST; 6s ago
  Process: 10076 ExecStop=/sbin/start-stop-daemon --quiet --stop --retry QUIT/5
  Process: 10084 ExecStartPost=/bin/sleep 0.1 (code=exited, status=0/SUCCESS)
  Process: 10082 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (cod
  Process: 10079 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process
 Main PID: 10083 (nginx)
   CGroup: /system.slice/nginx.service
           ├─10083 nginx: master process /usr/sbin/nginx -g daemon on; master_pr
           └─10085 nginx: worker process

Feb 07 08:18:26 server systemd[1]: Starting A high performance web server and a
Feb 07 08:18:26 server systemd[1]: Started A high performance web server and a r
```
That warning about the nginx.pid file is a known bug (at least for Ubuntu, if not for other distros as well). More details here: https://bugs.launchpad.net/ubuntu/+source/nginx/+bug/1581864

Workaround (on an ssh console, as root, use the commands below):

```
mkdir /etc/systemd/system/nginx.service.d
printf "[Service]\nExecStartPost=/bin/sleep 0.1\n" > /etc/systemd/system/nginx.service.d/override.conf
systemctl daemon-reload
systemctl restart nginx
```

Then check whether you still see that nginx.pid error, and also whether nginx is actually running and you can connect to port 80 on your server.

I would also check whether this socket actually exists, and the permissions on it:

```
/run/uwsgi/mysite3.sock
```

If nginx is running and uWSGI is running as well, then I guess it's a configuration problem. I understand you want to use Django, so I would recommend reviewing your actual configuration and comparing it with the one from here: http://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html

I hope it helps!
Tag: NGINX · Question ID: 42078674 · Score: 50
I am trying to run nginx (reverse proxy) as a Windows service so that it's possible to proxy a request even when no user is logged in. I searched a lot and found winsw, which should create a service from an .exe file (such as nginx). I found many tutorials online saying to create an XML file like the following:

```xml
<service>
  <id>nginx</id>
  <name>nginx</name>
  <description>nginx</description>
  <executable>c:\nginx\nginx.exe</executable>
  <logpath>c:\nginx\</logpath>
  <logmode>roll</logmode>
  <depend></depend>
  <startargument>-p c:\nginx</startargument>
  <stopargument>-p c:\nginx -s stop</stopargument>
</service>
```

(I have nginx.exe in a folder called nginx under c:, so the paths are correct.) Now the problem is that the service is created but I can't seem to make it start; every time I try to start it, a window pops up saying:

> Error 1053: The service didn't respond to the start or control request in a timely fashion

Does anyone know how I can fix this, or a different way to run nginx as a Windows service?
Just stumbled here and managed to get things working with this free open-source alternative: https://nssm.cc/

It basically is just a GUI to help you create a service. Steps I used:

1. Download NGinx (http://nginx.org/en/download.html) and unzip to C:\foobar\nginx.
2. Download nssm (https://nssm.cc/).
3. Run "nssm install nginx" from the command line.
4. In the NSSM GUI do the following:
   - On the Application tab: set the path to C:\foobar\nginx\nginx.exe, and set the startup directory to C:\foobar\nginx.
   - On the I/O tab, type "start nginx" in the Input slot. Optionally set C:\foobar\nginx\logs\service.out.log and C:\foobar\nginx\logs\service.err.log in the Output and Error slots.
5. Click "Install service".
6. Go to Services and start "nginx". Hit http://localhost:80 and you should get the nginx welcome page. Turn off the service, disable browser cache and refresh; the page should now fail to load.

You should be good to go from then on.
Tag: NGINX · Question ID: 40846356 · Score: 50
I have a pod that responds to requests to /api/. I want to do a rewrite where requests to /auth/api/ go to /api/. Using an Ingress (nginx), I thought that with the ingress.kubernetes.io/rewrite-target: annotation I could do it, something like this:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    ingress.kubernetes.io/rewrite-target: /api
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /auth/api
        backend:
          serviceName: myapi
          servicePort: myapi-port
```

What's happening, however, is that /auth/ is being passed to the service/pod and a 404 is rightfully being thrown. I must be misunderstanding the rewrite annotation. Is there a way to do this via k8s & ingresses?
I don't know if this is still an issue, but since version 0.22 it seems you need to use capture groups to pass values to the rewrite-target value. From the nginx example available here:

> Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

For your specific needs, something like this should do the trick:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapi-ing
  annotations:
    ingress.kubernetes.io/rewrite-target: /api/$2
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: api.myapp.com
    http:
      paths:
      - path: /auth/api(/|$)(.*)
        backend:
          serviceName: myapi
          servicePort: myapi-port
```
Tag: NGINX · Question ID: 47837087 · Score: 49
```
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
```

I am facing this error on a Mac while trying to run this command:

```
docker run --rm --gpus all -v static_volume:/home/app/staticfiles/ -v media_volume:/app/uploaded_videos/ --name=deepfakeapplication abhijitjadhav1998/deefake-detection-20framemodel
```

How do I solve this error?
Put the flag `--platform linux/amd64` right after `docker run`. It works for me on a MacBook M1.
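Applied to the command from the question, that would look like the sketch below. Note that on an M1 Mac there is no NVIDIA GPU for Docker to hand to the container, so the `--gpus all` flag (the cause of the "could not select device driver" part of the error) will likely have to be dropped as well:

```
docker run --rm --platform linux/amd64 \
  -v static_volume:/home/app/staticfiles/ \
  -v media_volume:/app/uploaded_videos/ \
  --name=deepfakeapplication \
  abhijitjadhav1998/deefake-detection-20framemodel
```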
Tag: NGINX · Question ID: 72152446 · Score: 49
I want to add/remove servers in my nginx running inside a Docker container. I use the ADD command in my Dockerfile to add my nginx.conf to the /etc/nginx dir:

```dockerfile
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
```

Then my running nginx container has a conf like this:

```nginx
# List of application servers
upstream app_servers {
    server 172.17.0.91:9000;
    server 172.17.0.92:9000;
    server 172.17.0.93:9000;
}
```

How do I restart nginx so that the edited nginx.conf takes effect? Thank you in advance!
Restarting the container is not advisable when you have initialized Docker Swarm, because it may remove the nginx service. So if you need an alternative to `docker restart`, you can go inside the container and just run:

```
nginx -s reload
```

For example, in a Docker environment, if you have the nginx container:

```
docker exec <nginx_container_id> nginx -s reload
```
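Sending the reload signal from outside the container works too, since the nginx master process reloads its configuration on SIGHUP; a one-line sketch (container id as above):

```
docker kill --signal=HUP <nginx_container_id>
```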
Tag: NGINX · Question ID: 26291260 · Score: 49
I have a CentOS server running nginx/php-fpm. It has 16GB RAM and 8 CPUs at 2660.203 MHz. Why am I getting this error in my error log?

php-fpm/error.log:

```
[02-Aug-2014 17:14:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 21 total children
```

This is my php-fpm configuration for the www pool (php-fpm/www.conf):

```ini
pm = dynamic
pm.max_children = 32768
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 10
pm.max_requests = 5000
```

How do I fix the problem?
It is a tough cookie because there could be numerous factors involved.

The first problem with your config is that max_children is ridiculously high. If each child process uses 50MB, then 50MB x 32768 would easily deplete 16GB.

A better way to determine max_children is to find out how much each child process uses, factor in the maximum RAM you would like php-fpm to use, and then divide the values. E.g. on a 16GB server, I can run the following command to determine how much RAM each php-fpm child consumes:

```
ps -ylC php-fpm --sort:rss
```

Note! It may be necessary to explicitly specify the user if php-fpm is running under a different one:

```
ps -ylC php-fpm --sort:rss -u www-data
```

where www-data is the user php-fpm runs as.

You are on the lookout for the RSS column; it states resident memory and is measured in KB. If I have an average of 50MB per process and I want to use a maximum of 10GB for php-fpm processes, then all I do is 10000MB / 50MB = 200. So, on that basis, I can use 200 children for my stated memory consumption.

Now, with regard to the servers, you will want to set max_spare_servers to 2x or 4x the number of cores. So if you have an 8-core CPU, you can start with a value of 16 for max_spare_servers and go up to 32. The start_servers value should be around half of the max_spare_servers value. You should also consider dropping max_requests to around 500.

Also, in addition to dynamic, the pm value can be set to static or ondemand:

- static will always have a fixed number of servers running at any given time. This is good if you have a consistent number of users, or if you want to guarantee you don't breach the max memory.
- ondemand will only start processes when there is a need for them. The downside is obviously having to constantly start/kill processes, which usually translates into a very slight delay in request handling. The upside: you only use resources when you need them.
- dynamic always starts the number of servers specified in the start_servers option and creates additional processes on an as-needed basis.

If you are still experiencing issues with memory, consider changing pm to ondemand.

This is a general guideline; your settings may need further tweaking. It is really a case of playing with the settings and running benchmarks for maximum performance and optimal resource usage. It is somewhat tedious, but it is the best way to determine these types of settings, because each setup is different.
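To automate the per-child average rather than eyeballing the RSS column, a small pipeline along these lines works (assuming procps ps; the process name may be php-fpm or php5-fpm depending on the distro):

```
ps --no-headers -o rss -C php-fpm | awk '{ sum += $1; n++ } END { if (n) printf "avg RSS: %.0f KB over %d procs\n", sum/n, n }'
```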
Tag: NGINX · Question ID: 25097179 · Score: 49
I configured nginx as a reverse proxy to my node.js application for file uploads, using the proxy_pass directive. It works, but my problem is that nginx waits for the whole file body to be uploaded before passing it to the upstream. This causes problems for me, because I want to track upload progress in my application. Any idea how to configure nginx to stream the file body to the upstream in real time?
There is no way to (at least as of now): the full request will always be buffered before nginx starts sending it to an upstream. To track uploaded files you may try the upload progress module.

Update: in nginx 1.7.11 the proxy_request_buffering directive is available, which allows you to disable buffering of the request body. It should be used with care though; see the docs.
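With a recent nginx, the unbuffered setup might look like this sketch (the upstream name and location path are placeholders; per the docs, `proxy_http_version 1.1` is needed for chunked request bodies to pass through unbuffered):

```nginx
location /upload {
    # Stream the request body to the backend as it arrives (nginx >= 1.7.11)
    proxy_request_buffering off;
    proxy_http_version 1.1;
    proxy_pass http://node_backend;
}
```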
Tag: NGINX · Question ID: 12282342 · Score: 49
I'm building a Node.js application and I'm using nginx as a reverse proxy. My application has some static files I need to serve, and a Socket.io server. I know that I can serve the static files directly with Express (using the express.static middleware). Alternatively, I can point nginx directly to the directory where my static files are located, so they are served by nginx. So, the question: which is the better approach? What pros and cons might I face with each approach?
For development: Express, mainly because of the flexibility it provides; you can change your static location and structure very easily during development.

For production: nginx, because it's much, much faster. Node/Express are good for executing logic, but for serving raw content nothing beats nginx. You also get additional capabilities such as gzip and load balancing.

Nevertheless, this question has been asked on Stack Overflow a number of times already; see:

- node.js itself or nginx frontend for serving static files?
- Using Node.js only vs. using Node.js with Apache/Nginx
- Which is most efficient: serving static files directly by nginx or by node via nginx reverse proxy?
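For the production case, here is a hedged sketch of an nginx server block that serves the static directory itself and proxies everything else, including the Socket.io websocket upgrade, to Node; the paths, port, and the /socket.io/ prefix are assumptions to adapt:

```nginx
server {
    listen 80;

    # Static assets straight from disk, gzipped and cached
    location /static/ {
        alias /var/www/app/static/;
        gzip on;
        expires 7d;
    }

    # Socket.io needs the Upgrade headers passed through
    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://127.0.0.1:3000;
    }

    # Everything else goes to the Node app
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```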
Tag: NGINX · Question ID: 44796056 · Score: 48
I have to add SSL (HTTPS) for a website. I was given an SSL.CSR and an SSL.KEY file. I 'dos2unix'ed them (because they had trailing ^M) and copied them to the server (CSR -> mywebsite.crt, KEY -> mywebsite.key). I made the following modification to nginx.conf:

```diff
@@ -60,8 +60,13 @@
     }
     server {
-        listen 80;
+        listen 443;
         server_name ...;
+        ssl on;
+        ssl_certificate mywebsite.crt;
+        ssl_certificate_key mywebsite.key;
+        ssl_session_cache shared:SSL:10m;
+        ssl_session_timeout 10m;
         # Set the max size for file uploads to 500Mb
         client_max_body_size 500M;
```

An error happens when I restart nginx:

```
nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/mywebsite.crt") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE)
```

I figured it's because the first line of the mywebsite.crt file contains 'REQUEST', so I removed 'REQUEST' from the first and last lines, restarted nginx again, and hit another error:

```
nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/mywebsite.crt") failed (SSL: error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag error:0D06C03A:asn1 encoding routines:ASN1_D2I_EX_PRIMITIVE:nested asn1 error error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:Field=algorithm, Type=X509_ALGOR error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:Field=signature, Type=X509_CINF error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:Field=cert_info, Type=X509 error:0906700D:PEM routines:PEM_ASN1_read_bio:ASN1 lib)
```

Any idea?
You should never share your private key. You should consider the key you posted here compromised, and generate a new key and signing request.

You have a certificate request, not an actual signed certificate. You provide the request ('CSR') to the signing party; they use that request to create a signed certificate ('CRT'), which they then make available to you. The key is never disclosed to anyone.
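For reference, generating a fresh key and CSR to send to the CA is a one-liner with openssl (file names here are placeholders):

```
openssl req -new -newkey rsa:2048 -nodes -keyout mywebsite.key -out mywebsite.csr
```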
Tag: NGINX · Question ID: 21870644 · Score: 48
I'm having some trouble defining a rule to cache my static files. I've found this solution:

```nginx
location ~* \.(ico|js|css|png|gif|jpe?g)$ {
    expires 7d;
}
```

which actually looks like what I need. The problem is, if I include this code in my nginx.conf, no static files are delivered anymore and my site is blank. Any ideas/hints what might cause this result? Maybe I should add that the static files are distributed across different directories :/. My nginx config file looks like this:

```nginx
server {
    server_name bla.domain.com;
    listen 80;

    root /var/repo/;

    location / {
        default_type text/html;
        index index.html;

        if ($request_method !~ ^(GET)$ ) {
            return 444;
        }
        if ($http_user_agent ~* LWP::Simple|BBBike|wget) {
            return 403;
        }
        if ( $http_referer ~* (babes|forsale|girl|jewelry|love|nudit|organic|poker|porn|sex|teen) ) {
            return 403;
        }
    }

    location /bf/football/ {
        alias /var/repos/f20;
    }
    location /bf/f20/ {
        alias /var/repo/f20;
    }
    location /bf/zoo/ {
        alias /var/repo/zoo/;
    }
    location /kbloader/ {
        alias /var/repo/kbloader/;
    }
}
```

It would be nice if someone could help me out with this or point me in the right direction.
Put this before your other location block:

```nginx
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 30d;
    add_header Vary Accept-Encoding;
    access_log off;
}
```

That should work. You could also use this:

```nginx
## All static files will be served directly.
location ~* ^.+\.(?:css|cur|js|jpe?g|gif|htc|ico|png|html|xml|otf|ttf|eot|woff|woff2|svg)$ {
    access_log off;
    expires 30d;
    add_header Cache-Control public;

    ## No need to bleed constant updates. Send the all shebang in one
    ## fell swoop.
    tcp_nodelay off;

    ## Set the OS file cache.
    open_file_cache max=3000 inactive=120s;
    open_file_cache_valid 45s;
    open_file_cache_min_uses 2;
    open_file_cache_errors off;
}
```
Tag: NGINX · Question ID: 19515132 · Score: 48
I am trying to enable client certificate authentication in nginx where the certificates have been signed by an intermediate CA. I am able to get this working fine when using a certificate signed by a self-signed root CA; however, it does not work when the signing CA is an intermediate CA.

My simple server section looks like this:

```nginx
server {
    listen 443;
    server_name _;

    ssl on;
    ssl_certificate cert.pem;
    ssl_certificate_key cert.key;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
    ssl_client_certificate ca.pem;
    ssl_verify_client on;
    ssl_verify_depth 1;

    location / {
        root html;
        index index.html index.htm;
    }
}
```

For the contents of ca.pem, I have tried using only the intermediate CA, and also concatenating the intermediate CA cert and the root CA cert, i.e. something like:

```
cp intermediate.crt ca.pem
cat root.crt >> ca.pem
```

I have also validated that the certificate is valid from openssl's perspective when using that same CA chain:

```
openssl verify -CAfile /etc/nginx/ca.pem certs/client.crt
certs/client.crt: OK
```

I have experimented with setting ssl_verify_depth explicitly to 1 (as above) and then even 0 (not sure what that number means exactly), but I still get the same error. The error I get in all variants with the intermediate CA is "400 Bad Request", and more specifically "The SSL certificate error" (not sure what that means exactly). Maybe nginx just doesn't support cert chains for intermediate certs? Any help greatly appreciated!
Edit: I also had this "problem"; the solution and explanation are at the bottom of the text. It seemed like nginx doesn't support intermediate certificates.

My self-created certs (RootCA is self-signed, IntermediateCA1 is signed by RootCA, etc.):

```
RootCA -> IntermediateCA1 -> Client1
RootCA -> IntermediateCA2 -> Client2
```

I want to use "IntermediateCA1" in nginx, to allow access to the site only to the owner of the "Client1" certificate.

When I put into ssl_client_certificate a file with IntermediateCA1 and RootCA, and set `ssl_verify_depth 2` (or more), clients can log in to the site with both certificate Client1 and Client2 (it should be only Client1). The same happens when I put into ssl_client_certificate a file with only RootCA: both clients can log in.

When I put into ssl_client_certificate a file with only IntermediateCA1, and set `ssl_verify_depth 1` (or 2 or more, no matter), it is impossible to log in at all; I get error 400. In debug mode I see these logs:

```
verify:0, error:20, depth:1, subject:"/C=PL/CN=IntermediateCA1/[email protected]", issuer: "/C=PL/CN=RootCA/[email protected]"
verify:0, error:27, depth:1, subject:"/C=PL/CN=IntermediateCA1/[email protected]", issuer: "/C=PL/CN=RootCA/[email protected]"
verify:1, error:27, depth:0, subject:"/C=PL/CN=Client1/[email protected]", issuer: "/C=PL/CN=IntermediateCA1/[email protected]"
(..)
client SSL certificate verify error: (27:certificate not trusted) while reading client request headers, (..)
```

I thought this was a bug. Tested on Ubuntu, nginx 1.1.19 and 1.2.7-1~dotdeb.1, OpenSSL 1.0.1. I see that nginx 1.3 has a few more options for client certificates, but I don't see a solution to this problem there. Currently, the only way to separate clients 1 and 2 would be to create two self-signed RootCAs, but this is only a workaround.

Edit 1: I've reported this issue here: http://trac.nginx.org/nginx/ticket/301

Edit 2: OK, it's not a bug, it is a feature ;) I got a response here: http://trac.nginx.org/nginx/ticket/301

It is working; you must only check what your $ssl_client_i_dn is. Instead of the issuer you can also use the subject of the certificate, or whatever you want from http://wiki.nginx.org/HttpSslModule#Built-in_variables

This is how certificate verification works: the certificate must be verified up to a trusted root. If a chain can't be built to a trusted root (not an intermediate), verification fails. If you trust the root, all certificates signed by it, directly or indirectly, will be successfully verified.

Limiting verification depth may be used if you want to limit client certificates to directly issued certificates only, but it's more about DoS prevention, and obviously it can't be used to limit verification to intermediate1 only (but not intermediate2).

What you want here is an authorization layer based on the verification result, i.e. you may want to check that the client certificate's issuer is intermediate1. The simplest solution would be to reject requests if the issuer's DN doesn't match the allowed one, e.g. something like this (originally untested, but it is working correctly in my configuration):

```nginx
server {
    listen 443 ssl;
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_client_certificate /path/to/ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;

    if ($ssl_client_i_dn != "/C=PL/CN=IntermediateCA1/[email protected]") {
        return 403;
    }
}
```
Tag: NGINX · Question ID: 8431528 · Score: 48
I'm using Apache + mod_wsgi for Django, and all css/js/images are served through nginx. For some odd reason, when others (friends/colleagues) try accessing the site, jQuery/CSS does not get loaded for them, so the page looks jumbled up.

My HTML files use code like this:

```html
<link rel="stylesheet" type="text/css" href="http://x.x.x.x:8000/css/custom.css"/>
<script type="text/javascript" src="http://1x.x.x.x:8000/js/custom.js"></script>
```

My nginx configuration in sites-available is like this:

```nginx
server {
    listen 8000;
    server_name localhost;

    access_log /var/log/nginx/aa8000.access.log;
    error_log /var/log/nginx/aa8000.error.log;

    location / {
        index index.html index.htm;
    }

    location /static/ {
        autoindex on;
        root /opt/aa/webroot/;
    }
}
```

There is a directory /opt/aa/webroot/static/ which has the corresponding css & js directories. The odd thing is that the pages show fine when I access them. I have cleared my cache etc., but the pages still load fine for me from various browsers. Also, I don't see any 404 errors in the nginx log files. Any pointers would be great.
I think using root in the location block is incorrect here. I use alias and it works fine, even without re-configuring Django:

```python
# django settings.py
MEDIA_URL = '/static/'
```

```nginx
# nginx server config
server {
    ...
    location /static {
        autoindex on;
        alias /opt/aa/webroot/;
    }
}
```

Hope this makes things simpler.
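The underlying difference, for anyone hitting the same thing: with `root`, nginx appends the full request path to the directory, while `alias` replaces the matched location prefix. A comment-only sketch of how the two map the same request:

```nginx
# Request: GET /static/css/custom.css
#
# location /static/ { root  /opt/aa/webroot/; }
#   -> serves /opt/aa/webroot/static/css/custom.css   (location prefix appended)
#
# location /static/ { alias /opt/aa/webroot/; }
#   -> serves /opt/aa/webroot/css/custom.css          (location prefix replaced)
```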
Tag: NGINX · Question ID: 2451739 · Score: 48
EDIT: Updated the text in general to keep it shorter and more concise.

I am trying to configure HTTPS when I run `npm run dev`, so I can test MediaStream and the like locally (browsers require HTTPS for those APIs). I am trying to configure it through nuxt.config.js, but without any success. Here is my nuxt.config.js file:

```js
import fs from "fs";
import pkg from "./package";

export default {
  mode: "spa",

  /*
  ** Headers of the page
  */
  head: {
    title: pkg.name,
    meta: [
      { charset: "utf-8" },
      { name: "viewport", content: "width=device-width, initial-scale=1" },
      { hid: "description", name: "description", content: pkg.description },
    ],
    link: [
      { rel: "icon", type: "image/x-icon", href: "/favicon.ico" },
    ],
  },

  /*
  ** Customize the progress-bar color
  */
  loading: { color: "#fff" },

  /*
  ** Global CSS
  */
  css: [
    "element-ui/lib/theme-chalk/index.css",
    "@makay/flexbox/flexbox.min.css",
  ],

  /*
  ** Plugins to load before mounting the App
  */
  plugins: [
    "@/plugins/element-ui",
    "@/plugins/vue-upload",
    "@/plugins/axios-error-event-emitter",
    "@/plugins/eventemitter2",
    "@/plugins/vue-awesome",
    "@/plugins/webrtc-adapter",
    "@/plugins/vue-browser-detect-plugin",
  ],

  /*
  ** Nuxt.js modules
  */
  modules: [
    // Doc: https://axios.nuxtjs.org/usage
    "@nuxtjs/axios",
    "@nuxtjs/pwa",
  ],

  /*
  ** Axios module configuration
  */
  axios: {
    // See https://github.com/nuxt-community/axios-module#options
    baseURL: process.env.NODE_ENV === "production"
      ? "https://startupsportugal.com/api/v1"
      : "http://localhost:8080/v1",
  },

  /*
  ** Build configuration
  */
  build: {
    transpile: [/^element-ui/, /^vue-awesome/],
    filenames: {
      app: ({ isDev }) => (isDev ? "[name].[hash].js" : "[chunkhash].js"),
      chunk: ({ isDev }) => (isDev ? "[name].[hash].js" : "[chunkhash].js"),
    },

    /*
    ** You can extend webpack config here
    */
    extend(config, ctx) {
      // Run ESLint on save
      if (ctx.isClient) config.devtool = "#source-map";
      if (ctx.isDev) {
        config.devServer = {
          https: {
            key: fs.readFileSync("server.key"),
            cert: fs.readFileSync("server.crt"),
            ca: fs.readFileSync("ca.pem"),
          },
        };
      }
      if (ctx.isDev && ctx.isClient) {
        config.module.rules.push({
          enforce: "pre",
          test: /\.(js|vue)$/,
          loader: "eslint-loader",
          exclude: /(node_modules)/,
        });
      }
    },
  },
};
```

Also, here you can see my dependencies in package.json:

```json
"dependencies": {
  "@makay/flexbox": "^3.0.0",
  "@nuxtjs/axios": "^5.3.6",
  "@nuxtjs/pwa": "^2.6.0",
  "cross-env": "^5.2.0",
  "element-ui": "^2.4.11",
  "eventemitter2": "^5.0.1",
  "lodash": "^4.17.11",
  "nuxt": "^2.8.0",
  "pug": "^2.0.3",
  "pug-plain-loader": "^1.0.0",
  "quagga": "^0.12.1",
  "stylus": "^0.54.5",
  "stylus-loader": "^3.0.2",
  "vue-awesome": "^3.5.3",
  "vue-browser-detect-plugin": "^0.1.2",
  "vue-upload-component": "^2.8.20",
  "webrtc-adapter": "^7.2.4"
},
"devDependencies": {
  "@nuxtjs/eslint-config": "^0.0.1",
  "babel-eslint": "^10.0.1",
  "eslint": "^5.15.1",
  "eslint-config-airbnb-base": "^13.1.0",
  "eslint-config-standard": ">=12.0.0",
  "eslint-import-resolver-webpack": "^0.11.1",
  "eslint-loader": "^2.1.2",
  "eslint-plugin-import": ">=2.16.0",
  "eslint-plugin-jest": ">=22.3.0",
  "eslint-plugin-node": ">=8.0.1",
  "eslint-plugin-nuxt": ">=0.4.2",
  "eslint-plugin-promise": ">=4.0.1",
  "eslint-plugin-standard": ">=4.0.0",
  "eslint-plugin-vue": "^5.2.2",
  "nodemon": "^1.18.9"
}
```

However, when I run `npm run dev` it still does not provide HTTPS, but it does not produce any error output either...
The output is exactly the same as if I didn't have the HTTPS configuration in nuxt.config.js:

```
$ npm run dev

> [email protected] dev /mnt/d/tralha/clothing-demo-app/frontend
> nuxt --hostname 0.0.0.0 --port 3000

╭────────────────────────────────────────────────╮
│                                                │
│   Nuxt.js v2.8.1                               │
│   Running in development mode (spa)            │
│                                                │
│   Listening on: http://192.168.126.241:3000/   │
│                                                │
╰────────────────────────────────────────────────╯
ℹ Preparing project for development
ℹ Initial build may take a while
✔ Builder initialized
✔ Nuxt files generated
```
HTTPS on local dev - NUXT style

The solution is described in the NUXT documentation: https://nuxtjs.org/api/configuration-server/#example-using-https-configuration

This may be achieved with:

1. Go to the project's main dir.
2. Create private and public keys:

```
openssl genrsa 2048 > server.key
chmod 400 server.key
openssl req -new -x509 -nodes -sha256 -days 365 -key server.key -out server.crt
```

3. Add the requirements to the top of nuxt.config.js:

```js
import path from 'path'
import fs from 'fs'
```

4. Extend or add the server configuration in nuxt.config.js:

```js
server: {
  https: {
    key: fs.readFileSync(path.resolve(__dirname, 'server.key')),
    cert: fs.readFileSync(path.resolve(__dirname, 'server.crt'))
  }
}
```
Tag: NGINX · Question ID: 56966137 · Score: 47
So, I found an answer for removing the .html extension on my page, which works fine with this code:

```nginx
server {
    listen 80;
    server_name _;

    root /var/www/html/;
    index index.html;

    if (!-f "${request_filename}index.html") {
        rewrite ^/(.*)/$ /$1 permanent;
    }

    if ($request_uri ~* "/index.html") {
        rewrite (?i)^(.*)index\.html$ $1 permanent;
    }

    if ($request_uri ~* ".html") {
        rewrite (?i)^(.*)/(.*)\.html $1/$2 permanent;
    }

    location / {
        try_files $uri.html $uri $uri/ /index.html;
    }
}
```

But if I open mypage.com, it redirects me to mypage.com/index. Wouldn't this be fixed by declaring index.html as index? Any help is appreciated.
The "Holy Grail" Solution for Removing ".html" in NGINX: UPDATED ANSWER: This question piqued my curiosity, and I went on another, more in-depth search for a "holy grail" solution for .html redirects in NGINX. Here is the link to the answer I found, since I didn't come up with it myself: https://stackoverflow.com/a/32966347/4175718 However, I'll give an example and explain how it works. Here is the code: location / { if ($request_uri ~ ^/(.*)\.html(\?|$)) { return 302 /$1; } try_files $uri $uri.html $uri/ =404; } What's happening here is a pretty ingenious use of the if directive. NGINX runs a regex on the $request_uri portion of incoming requests. The regex checks if the URI has an .html extension and then stores the extension-less portion of the URI in the built-in variable $1. From the docs, since it took me a while to figure out where the $1 came from: Regular expressions can contain captures that are made available for later reuse in the $1..$9 variables. The regex both checks for the existence of unwanted .html requests and effectively sanitizes the URI so that it does not include the extension. Then, using a simple return statement, the request is redirected to the sanitized URI that is now stored in $1. The best part about this, as original author cnst explains, is that Due to the fact that $request_uri is always constant per request, and is not affected by other rewrites, it won't, in fact, form any infinite loops. Unlike the rewrites, which operate on any .html request (including the invisible internal redirect to /index.html), this solution only operates on external URIs that are visible to the user. What does "try_files" do? You will still need the try_files directive, as otherwise NGINX will have no idea what to do with the newly sanitized extension-less URIs. The try_files directive shown above will first try the new URL by itself, then try it with the ".html" extension, then try it as a directory name. The NGINX docs also explain how the default try_files directive works. The default try_files directive is ordered differently than the example above so the explanation below does not perfectly line up: NGINX will first append .html to the end of the URI and try to serve it. If it finds an appropriate .html file, it will return that file and will maintain the extension-less URI. If it cannot find an appropriate .html file, it will try the URI without any extension, then the URI as a directory, and then finally return a 404 error. UPDATE: What does the regex do? The above answer touches on the use of regular expressions, but here is a more specific explanation for those who are still curious. The following regular expression (regex) is used: ^/(.*)\.html(\?|$) This breaks down as: ^: indicates beginning of line. /: match the character "/" literally. Forward slashes do NOT need to be escaped in NGINX. (.*): capturing group: match any character an unlimited number of times \.: match the character "." literally. This must be escaped with a backslash. html: match the string "html" literally. (\?|$): match a literal "?" or the end of the string. This is done to avoid mishandling file names with something after ".html". The capturing group (.*) is what contains the non-".html" portion of the URL. This can later be referenced with the variable $1. NGINX is then configured to re-try the request (return 302 /$1;) and the try_files directive internally re-appends the ".html" extension so the file can be located. 
UPDATE: Retaining the query string

To retain query strings and arguments passed to a .html page, the return statement can be changed to:

```nginx
return 302 /$1$is_args$args;
```

This should allow requests such as /index.html?test to redirect to /index?test instead of just /index.

Note that this is considered safe usage of the `if` directive. From the NGINX page If Is Evil:

> The only 100% safe things which may be done inside if in a location context are:
> return ...;
> rewrite ... last;

Also, note that you may swap out the '302' redirect for a '301'. A 301 redirect is permanent and is cached by web browsers and search engines. If your goal is to permanently remove the .html extension from pages that are already indexed by a search engine, you will want to use a 301 redirect. However, if you are testing on a live site, it is best practice to start with a 302 and only move to a 301 when you are absolutely confident your configuration is working correctly.
Tag: NGINX · Question ID: 38228393 · Score: 47
I have a really basic nginx setup which is failing for some reason:

```nginx
server {
    listen 80;
    server_name librestock.com;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /static/ {
        root /home/david/StockSearch/stocksearch;
    }

    location / {
        include proxy_params;
        proxy_pass unix:/home/david/StockSearch/stocksearch/stocksearch.sock;
    }
}
```

According to everything I've read, I'm setting the server name correctly. When I replace librestock.com with the IP of the server, it works.

Error:

```
$ nginx -t
nginx: [emerg] invalid URL prefix in /etc/nginx/sites-enabled/stocksearch:12
nginx: configuration file /etc/nginx/nginx.conf test failed
```
You need the http:// prefix on your unix: path, as in:

```nginx
proxy_pass http://unix:/home/david/StockSearch/stocksearch/stocksearch.sock;
```

See http://nginx.org/en/docs/http/ngx_http_proxy_module.html
Tag: NGINX · Question ID: 32992908 · Score: 47
I have a website which should only be reachable over HTTPS, except for one URL pattern (because I have http iframes on some pages and I would like to avoid security warnings).

E.g. these pages should be redirected to https:

- http://example.com
- http://example.com/a/this-is-an-article
- http://example.com/v/this-is-a-video

These pages should not be redirected to https (or should be redirected from https to http):

- http://example.com/l/page-with-unsafe-iframe
- http://example.com/l/other-page-with-unsafe-iframe
If the iframe pages are always in the same directory, simple prefix locations could be used:

```nginx
server {
    listen 443;

    location /l/ {
        # redirect https iframe requests to the http server
        return 301 http://$server_name$request_uri;
    }
    # ...
}

server {
    listen 80;

    location / {
        # the default location redirects to https
        return 301 https://$server_name$request_uri;
    }

    location /l/ {} # do not redirect requests for the iframe location
    # ...
}
```
NGINX
27,857,334
47
Currently every invalid page is a 500 (Internal Server Error) because I probably messed up my server block configuration. I decided to shut down my website a while ago and created a simple one-page, thank-you homepage. However, old links and external sites are still trying to access other parts of the site, which no longer exist.

How do I force redirect all non-homepage (any invalid URL) requests to the homepage? I tried with the following block, but it didn't work:

location / {
    try_files $uri $uri/ $document_uri/index.html;
}

My current configuration is (I don't even serve PHP files right now, i.e. the homepage is simple HTML):

server {
    server_name www.example.com example.com;
    access_log /srv/www/example.com/logs/access.log;
    error_log /srv/www/example.com/logs/error.log;
    root /srv/www/example.com/public_html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ $document_uri/index.html;
    }

    # Disable favicon.ico logging
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # Allow robots and disable logging
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Enable permalink structures
    if (!-e $request_filename) {
        rewrite . /index.php last;
    }

    # Handle php requests
    location ~ \.php$ {
        try_files $uri = 404;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_send_timeout 900;
        fastcgi_read_timeout 900;
        fastcgi_connect_timeout 900;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    # Disable static content logging and set cache time to max
    location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
        access_log off;
        log_not_found off;
        expires max;
    }

    # Deny access to htaccess and htpasswd files
    location ~ /\.ht {
        deny all;
    }

    # Deny access to hidden files (beginning with a period)
    location ~ /\. {
        access_log off;
        log_not_found off;
        deny all;
    }
}
Setting the error page to the home page like this:

error_page 404 /index.html;

has a small problem: the status code of the home page will be "404 Not Found". If you want to load the home page with a "200 OK" status code, you should do it like this:

error_page 404 =200 /index.html;

This will convert the "404 Not Found" error code to a "200 OK" code and load the home page.

The second method, which @jvperrin mentioned, is good too:

try_files $uri $uri/ /index.html;

but you need to keep one thing in mind: since it's the location /, any asset that doesn't match another location and is not found will also load index.html, for example missing images, CSS, and JS files. In your case, though, I can see you already have another location matching the assets' extensions, so you shouldn't face this problem.
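As a rough sketch of the first approach in context (paths taken from the question; using =404 in try_files so misses fall through to the error page is an assumption about the desired behavior):

server {
    server_name www.example.com example.com;
    root /srv/www/example.com/public_html;
    index index.html;

    error_page 404 =200 /index.html;   # invalid URLs land on the homepage with a 200

    location / {
        try_files $uri $uri/ =404;
    }
}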
NGINX
19,487,365
47
I have installed nginx on Ubuntu 12.04. However, nginx does not seem to follow symlinks. I understand that there is a config change required for this but I am not able to find where to make the change. Any help appreciated.
Have a look at the following config option from the nginx docs:

Syntax: disable_symlinks off;
        disable_symlinks on | if_not_owner [from=part];
Default: disable_symlinks off;
Context: http, server, location

This directive appeared in version 1.1.15. Note that disable_symlinks off is the default, i.e. nginx follows symlinks unless told otherwise; if symlinks still aren't served, also check that the nginx worker user has read/execute permissions along the symlink's target path.
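A minimal sketch of where the directive goes (the value shown is already the default):

http {
    disable_symlinks off;   # nginx follows symlinks; 'on' would refuse them
    # the rest of your http block
}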
NGINX
12,624,358
47
I can specify a custom log format for access_log on Nginx, but it doesn't work for error_log. Is there any way to achieve this?
You can't specify your own format, but nginx has several built-in levels of error logging:

Syntax: error_log file [ debug | info | notice | warn | error | crit ]
Default: ${prefix}/logs/error.log

This specifies the file where server (and fastcgi) errors are logged. Default values for the error level: in the main section, error; in the HTTP section, crit; in the server section, crit. In my error_log, the time is always present at the beginning of each line.
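So while the format is fixed, the verbosity can be tuned; for example, to see notice-level messages:

error_log /var/log/nginx/error.log notice;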
NGINX
4,246,756
47
I'm trying to get webpack-dev-server running inside a Docker container then accessing it through a NGINX host. The initial index.html loads but the Web Sockets connection to the dev server cannot connect. VM47:35 WebSocket connection to 'ws://example.com/sockjs-node/834/izehemiu/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400 I'm using the following config. map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream webpack_dev_server { server node; } server { server_name _; listen 80; root /webpack_dev_server; location / { proxy_pass http://webpack_dev_server; } location /sockjs-node/ { proxy_pass http://webpack_dev_server/sockjs-node/; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass proxy_http_version 1.1; # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; } }
proxy_pass should point to the IP and port of your webpack-dev-server container, and you need proxy_redirect off;

location /sockjs-node {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;

    proxy_pass http://node:8080;

    proxy_redirect off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

Also don't forget to add polling to your webpack-dev-server watch options:

watchOptions: {
    aggregateTimeout: 300,
    poll: 1000
}
NGINX
40,516,288
46
I'm hosting a website behind a Cloudflare proxy, which means that all requests to my server are over port 80, even though Cloudflare handles HTTP (port 80) and HTTPS (port 443) traffic. To distinguish between the two, Cloudflare includes an X-Forwarded-Proto header which is set to "http" or "https" based on the user's connection. I would like to redirect every request with an X-Forwarded-Proto: http header to the SSL version of my site. How can I achieve this with an nginx configuration?
The simplest way to do this is with an if directive. If there is a better way, please let me know, as people say the if directive is inefficient. Nginx converts dashes to underscores in headers, so X-Forwarded-Proto becomes $http_x_forwarded_proto. server { listen 80; server_name example.com; # Replace this with your own hostname if ($http_x_forwarded_proto = "http") { return 301 https://example.com$request_uri; } # Rest of configuration goes here... }
NGINX
26,223,733
46
I have an error with the subject. The server is not heavily loaded: ~15% CPU, several GB of free memory, and the HDD is not busy. But the 502 error occurs in approximately 3% of cases.

Software: Debian 6, nginx/0.7.62, php5-fpm (5.3.3-1).

In nginx's error.log there is this error:

connect() to unix:/var/run/php5-fpm.sock failed

The state of php5-fpm usually looks like this:

accepted conn: 41680
pool: www
process manager: dynamic
idle processes: 258
active processes: 1
total processes: 259

I think this means the load is not high. I have increased the backlog parameters: in sysctl, net.core.somaxconn = 5000; in the php-fpm pool, listen.backlog = 5000. No effect.

Here is the configuration:

/etc/nginx/nginx.conf

user www-data;
worker_processes 8;
timer_resolution 100ms;
worker_rlimit_nofile 20240;
worker_priority -5;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    use epoll;
    # multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    access_log /var/log/nginx/access.log;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 64 8k;
    gzip_comp_level 3;
    gzip_http_version 1.1;
    gzip_proxied any;
    gzip_types text/plain application/xml application/x-javascript text/css;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    client_max_body_size 100M;
    server_tokens off;
}

/etc/nginx/php_location

fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
fastcgi_buffers 256 128k;
#fastcgi_buffer_size 16k;
#fastcgi_busy_buffers_size 256k;
fastcgi_connect_timeout 300s;
fastcgi_send_timeout 300s;
fastcgi_read_timeout 300s;
include fastcgi_params;

php-fpm pool

[www]
listen = /var/run/php5-fpm.sock
listen.backlog = 5000
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
user = www-data
group = www-data
pm = dynamic
pm.max_children = 1024
pm.start_servers = 64
pm.min_spare_servers = 64
pm.max_spare_servers = 128
pm.max_requests = 32000
pm.status_path = /system/php5-fpm-status
slowlog = /var/www/log/php-fpm.log.slow
chdir = /var/www

What can I do to optimize this system and make it use all the server's resources?

PS. I'm sorry, my English is bad.
The issue is the socket itself; its problems under high load are well known. Please consider using a TCP/IP connection instead of the unix socket. For that you need to make these changes:

in the php-fpm pool configuration, replace listen = /var/run/php5-fpm.sock with listen = 127.0.0.1:7777

in /etc/nginx/php_location, replace fastcgi_pass unix:/var/run/php5-fpm.sock; with fastcgi_pass 127.0.0.1:7777;
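After making both changes, reload the services so they pick up the new settings; on a Debian 6 era system that would be something like (service names assumed):

sudo service php5-fpm restart
sudo service nginx reload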
NGINX
10,470,109
46
I want to test nginx subdomains before uploading the config to the server. Can I test it on localhost? I tried:

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://localhost:8080;
    }
}

server {
    listen 80;
    server_name sub.localhost;

    location / {
        proxy_pass http://localhost:8080/sub;
    }
}

And it does not work. Should I change my hosts file in order to make it work? Also, after uploading the site to the server, should I change DNS records and add sub.mydomain.com?
Yes, add '127.0.0.1 sub.localhost' to your hosts file. That sub has to be resolved somehow, and that should work. Then, once you're ready to go to the net, add an A or CNAME record for the subdomain sub. When I use proxy_pass I also include the proxy.conf from nginx. http://wiki.nginx.org/HttpProxyModule
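For reference, a minimal sketch of the hosts file entries:

# /etc/hosts
127.0.0.1    localhost
127.0.0.1    sub.localhost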
NGINX
10,095,219
46
I work for a rather busy internet site that often gets very large spikes of traffic. During these spikes hundreds of pages per second are requested, and this produces random 502 gateway errors.

Now we run Nginx (1.0.10) and PHP-FPM on a machine with 4x SAS 15k drives (raid10), a 16-core CPU and 24GB of DDR3 RAM. Also we make use of the latest Xcache version. The DB is located on another machine, but that machine's load is very low and has no issues.

Under normal load everything runs perfectly, the system load is below 1, and the PHP-FPM status report never really shows more than 10 active processes at one time. There is always about 10GB of RAM still available. Under normal load the machine handles about 100 pageviews per second.

The problem arises when huge spikes of traffic arrive and hundreds of page-views per second are requested from the machine. I notice that FPM's status report then shows up to 50 active processes, but that is still way below the 300 max connections that we have configured. During these spikes Nginx status reports up to 5000 active connections instead of the normal average of 1000.

OS Info: CentOS release 5.7 (Final)

CPU: Intel(R) Xeon(R) CPU E5620 @ 2.40GH (16 cores)

php-fpm.conf

daemonize = yes
listen = /tmp/fpm.sock
pm = static
pm.max_children = 300
pm.max_requests = 1000

I have not set up rlimit_files, because as far as I know it should use the system default if you don't.

fastcgi_params (only added values to the standard file)

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
fastcgi_pass unix:/tmp/fpm.sock;

nginx.conf

worker_processes 8;
worker_connections 16384;
sendfile on;
tcp_nopush on;
keepalive_timeout 4;

Nginx connects to FPM via a Unix socket.

sysctl.conf

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 1
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.eth0.rp_filter=1
net.ipv4.conf.lo.rp_filter=1
net.ipv4.ip_conntrack_max = 100000

limits.conf

* soft nofile 65536
* hard nofile 65536

These are the results for the following commands:

ulimit -n
65536

ulimit -Sn
65536

ulimit -Hn
65536

cat /proc/sys/fs/file-max
2390143

Question: If PHP-FPM is not running out of connections, the load is still low, and there is plenty of RAM available, what bottleneck could be causing these random 502 gateway errors during high traffic?

Note: by default this machine's ulimits were 1024; since I changed them to 65536 I have not fully rebooted the machine, as it's a production machine and it would mean too much downtime.
This should fix it. You have:

fastcgi_buffers 4 256k;

Change it to:

fastcgi_buffers 256 16k; # 4096k total

Also set fastcgi_max_temp_file_size 0; that will disable buffering to disk if replies start to exceed your fastcgi buffers.
NGINX
8,772,015
46
I have an application written in Angular 7 that I am deploying to a Docker container with NGINX. When I run the container, everything works perfectly except that if I refresh the page in the browser (F5) I get an NGINX 404 error page.

Here is my nginx.conf file, from which you can see I've tried "try_files":

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri /index.html;
        }
    }
}

My Dockerfile:

FROM node:alpine as builder
RUN apk update && apk add --no-cache make git
WORKDIR /app
COPY package.json package-lock.json /app/
RUN cd /app && npm install
COPY . /app
RUN cd /app && npm run build

FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/* && rm -rf /etc/nginx/nginx.conf
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY --from=builder /app/dist/hyper-client-admin /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Directory on the deployed container is:

/usr/share/nginx/html # ls -la
total 23564
drwxr-xr-x 2 root root 4096 May 20 00:18 .
drwxr-xr-x 1 root root 4096 Mar 8 03:05 ..
drwxr-xr-x 2 root root 4096 May 20 00:18 assets
-rw-r--r-- 1 root root 290728 May 20 00:18 es2015-polyfills.js
-rw-r--r-- 1 root root 211178 May 20 00:18 es2015-polyfills.js.map
-rw-r--r-- 1 root root 997 May 20 00:18 favicon.png
-rw-r--r-- 1 root root 770 May 20 00:18 index.html
-rw-r--r-- 1 root root 114749 May 20 00:18 main.js
-rw-r--r-- 1 root root 115163 May 20 00:18 main.js.map
-rw-r--r-- 1 root root 241546 May 20 00:18 polyfills.js
-rw-r--r-- 1 root root 240220 May 20 00:18 polyfills.js.map
-rw-r--r-- 1 root root 6224 May 20 00:18 runtime.js
-rw-r--r-- 1 root root 6214 May 20 00:18 runtime.js.map
-rw-r--r-- 1 root root 1117457 May 20 00:18 styles.js
-rw-r--r-- 1 root root 1191427 May 20 00:18 styles.js.map
-rw-r--r-- 1 root root 10048515 May 20 00:18 vendor.js
-rw-r--r-- 1 root root 10505601 May 20 00:18 vendor.js.map

And here is the console output:

172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET / HTTP/1.1" 200 371 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET /runtime.js HTTP/1.1" 200 6224 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET /polyfills.js HTTP/1.1" 200 241546 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET /main.js HTTP/1.1" 200 114749 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET /styles.js HTTP/1.1" 200 1117457 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:30 +0000] "GET /vendor.js HTTP/1.1" 200 10048515 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:31 +0000] "GET /assets/logo-white.svg HTTP/1.1" 200 4519 "http://localhost:81/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:31 +0000] "GET /favicon.png HTTP/1.1" 200 997 "http://localhost:81/login" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
172.17.0.1 - - [20/May/2019:00:18:35 +0000] "GET /login HTTP/1.1" 404 188 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36" "-"
2019/05/20 00:18:35 [error] 6#6: *4 open() "/usr/share/nginx/html/login" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /login HTTP/1.1", host: "localhost:81"

Any ideas what's going on here?

UPDATE: The actual answer to this lies in the comments of @Rajesh's answer. The issue is that I was working on /etc/nginx/nginx.conf and I needed to be working on /etc/nginx/conf.d/default.conf
On a refresh in an Angular app, we need to tell the nginx web server to fall back to the index.html file when the requested route does not exist as a file, instead of showing the error page. This is working fine for me:

nginx.conf

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Dockerfile

FROM node:16-alpine as node
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod

FROM nginx:alpine
COPY ./nginx.conf /etc/nginx/conf.d/default.conf # Not /etc/nginx/nginx.conf
COPY --from=node /app/dist/myapp /usr/share/nginx/html
NGINX
56,213,079
45
I'm out of ideas and I need help please! I create my SSL using Openssl with this: openssl req -x509 -newkey rsa:4096 -sha256 -nodes -keyout key.pem -out cert.pem -days 3650 The cert.pem looks like this: -----BEGIN CERTIFICATE----- cert -----END CERTIFICATE----- The key.pem looks like this: -----BEGIN PRIVATE KEY----- key -----END PRIVATE KEY----- In docker-compose I have the cert/key sent to etc/nginx/ssl/... volumes: - ./sites:/etc/nginx/conf.d - ./conf/nginx.conf:/etc/nginx/nginx.conf - ./ssl/cert.pem:/etc/nginx/ssl/cert.pem - ./ssl/key.pem:/etc/nginx/ssl/key.pem In nginx I have it added like this: listen 443 ssl; server_name localhost; ssl_certificate ssl/cert.pem; ssl_certificate_key ssl/key.pem; When I start up docker-compose, I get this error with nginx: web_1 | 2018/08/17 16:38:47 [emerg] 1#1: PEM_read_bio_X509_AUX("/etc/nginx/ssl/cert.pem") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE) web_1 | nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/ssl/cert.pem") failed (SSL: error:0906D06C:PEM routines:PEM_read_bio:no start line:Expecting: TRUSTED CERTIFICATE) I've been working on this for several days now and I'm not sure why I keep getting this error. I've tried making it crt/key instead of .pem and I get the same error. If I just remove ssl all together the server works fine, but I need SSL very badly. Pleeeaaase help!
A "normal" certificate, once encoded in PEM will look like this: -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- (the ... is Base64 encoding of a DER structure) This is normally (with the associated key, typically in separate file) the thing needed by any TLS enabled application when it wants to show its identity to the remote end. As a side note, since it seems to be popular (wrong) belief, the filename by itself, including the extension, has explicitly no consequences on the working (or not) status of the content. You can name your files foobar.42 and buzz.666 and if their content is valid they will work as well... of course maintenance by the human would be harder, hence the convention of using often .crt for a certificate (or .cert for non-DOS based constrained environments) and .key for a keyfile, and using typically the site name (for a website) or part of it for the name, such as example.com.crt. But again, those are only one possible set of conventions, and any program needing these files do not care about the name, just the content. Some are using the .pem extension also. See https://en.wikipedia.org/wiki/X.509#Certificate_filename_extensions for all the above it has a good discussion/presentation of options. Now in your case the error message was telling you it expected to have a content written as such: -----BEGIN TRUSTED CERTIFICATE----- ... -----END TRUSTED CERTIFICATE----- the only difference being the added TRUSTED keyword. But why, and when does it happen? A certificate is signed by one "certificate authority" through one or more intermediates. This builds a chain of trust up to a root certificate, where the issuer is equal to the subject, this certificate signs itself. You generated your certificate yourself, so this is a "self-signed" certificate, indistinguishable technically from a CA certificate, except that no system by default, including your own, will give trust to such certificate without specific configuration. This is basically what the error message tells you: the application says it is loading a certificate based on your configuration that it can not validate (because it is self signed) and at the same time you did not explicitely configure it to trust it. This may be different depending on the application or its version, because the guide at https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04 does basically the same thing as you and it works, but without showing the content of the certificate. In your openssl call, if you add -trustout it will generate BEGIN TRUSTED CERTIFICATE instead of BEGIN CERTIFICATE. This may happen by default also, depending on how openssl is installed/configured on your system. On the contrary, you have -clrtrust. See the "Trust Settings" section of the openssl x509 command at https://www.openssl.org/docs/man1.1.0/apps/x509.html
NGINX
51,899,844
45
I have two docker containers: Nginx and App. The app container extends PHP-FPM and also has my Laravel code.

In my docker-compose.yml I'm doing:

version: '2'
services:
  nginx:
    build:
      context: ./nginx
      dockerfile: ./Dockerfile
    ports:
      - "80:80"
    links:
      - app
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile

In my Nginx Dockerfile I'm doing:

FROM nginx:latest
WORKDIR /var/www
ADD ./nginx.conf /etc/nginx/conf.d/default.conf
ADD . /var/www
EXPOSE 80

In my App Dockerfile I'm doing:

FROM php:7-fpm
WORKDIR /var/www
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client && docker-php-ext-install mcrypt pdo_mysql
ADD . /var/www

After successfully running docker-compose up, I get the following error when I try localhost:

The stream or file "/var/www/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied

From my understanding, the storage folder needs to be writable by the webserver. What should I be doing to resolve this?
Make your Dockerfile something like the below:

FROM php:7-fpm
WORKDIR /var/www
RUN apt-get update && apt-get install -y libmcrypt-dev mysql-client && docker-php-ext-install mcrypt pdo_mysql
ADD . /var/www
RUN chown -R www-data:www-data /var/www

This makes the directory /var/www owned by www-data, which is the default user for php-fpm, since it is compiled with user www-data.

Ref: https://github.com/docker-library/php/blob/57b41cfc2d1e07acab2e60d59a0cb19d83056fc1/7.0/jessie/fpm/Dockerfile
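If you would rather not hand the whole tree to www-data, Laravel only needs write access to storage and bootstrap/cache, so a narrower variant (assuming a standard Laravel layout) would be:

RUN chown -R www-data:www-data /var/www/storage /var/www/bootstrap/cache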
NGINX
48,619,445
45
I'm attempting to deploy my create-react-app SPA on a Digital Ocean droplet with Ubuntu 14.04 and Nginx. Per the static server deployment instructions, I can get it working when I run serve -s build -p 4000, but the app comes down as soon as I close the terminal. It is not clear to me from the create-react-app repo readme how to keep it running forever, similar to something like forever. Without running serve, I get Nginx's 502 Bad Gateway error. Nginx Conf server { listen 80; server_name app.mydomain.com; root /srv/app-name; index index.html index.htm index.js; access_log /var/log/nginx/node-app.access.log; error_log /var/log/nginx/node-app.error.log; location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm|svg)$ { root /srv/app-name/build; } location / { proxy_pass http://127.0.0.1:4000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Access-Control-Allow-Origin *; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } }
One of the major benefits of React (and Create React App) is that you don't need the overhead of running a Node server (or proxying to it with Nginx); you can serve the static files directly. From the Deployment documentation you've linked to, Create React App describes what to do: npm run build creates a build directory with a production build of your app. Set up your favorite HTTP server so that a visitor to your site is served index.html, and requests to static paths like /static/js/main.<hash>.js are served with the contents of the /static/js/main.<hash>.js file. In your case, run npm run build to create the build/ directory and then make the files available in a location Nginx can access them. Your build is probably best done on your local machine and then you can securely copy the files across to your server (via SCP, SFTP etc). You could run npm run build on your server, but if you do, resist the temptation to directly serve the build/ directory as the next time you run a build, clients could receive an inconsistent set of resources whilst you're building. Whichever build method you choose, once your build/ directory is on your server, then check its permissions to ensure Nginx can read the files and configure your nginx.conf like so: server { listen 80; server_name app.mydomain.com; root /srv/app-name; index index.html; # Other config you desire (TLS, logging, etc)... location / { try_files $uri /index.html; } } This configuration is based upon your files being in /srv/app-name. In short, the try_files directive attempts to load CSS/JS/images etc first and for all other URIs, loads the index.html file in your build, displaying your app. For note, you should be deploying using HTTPS/SSL to serve it rather than with insecure HTTP on port 80. Certbot provides automatic HTTPS for Nginx with free Let's Encrypt certificates, if the cost or process of obtaining a certificate would otherwise hold you back.
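For example, a local build-and-copy workflow might look like this (host name and target path are placeholders):

npm run build
scp -r build/* user@server:/srv/app-name/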
NGINX
46,880,853
45
I am making a practice web service (a client's artbook display web site). The client can upload artbook images to the server, but I get the following error when the client uploads too many images:

413 Request Entity Too Large

I tried adding client_max_body_size 100M; in nginx.conf:

#user nobody; #Defines which Linux system user will own and run the Nginx server
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice; #Specifies the file where server logs.
#pid logs/nginx.pid; #nginx will write its master process ID(PID).

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    #access_log logs/access.log main;
    sendfile on;

    server {
        listen 80;
        server_name xxxx.net;
        client_max_body_size 100M;
        keepalive_timeout 5;
        return 301 https://$server_name$request_uri;
    }

    # HTTPS server
    #
    server {
        listen 443 default_server ssl;
        server_name xxx.net;

        ssl_certificate /etc/letsencrypt/live/xxxx.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/xxxx.net/privkey.pem;

        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header HOST $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://127.0.0.1:8000;
            proxy_redirect off;
        }
    }
}

and tried:

sudo service nginx restart
sudo service nginx reload

and retried runserver, but I still get 413 Request Entity Too Large. Can anybody help?
You've fixed the issue on your HTTP server, but your HTTP server is set to 301 redirect to your HTTPS server... your HTTPS server does not have client_max_body_size configured, so it is defaulting to 1M & causing this 413 (Request Entity Too Large) error. To fix this issue, you simply need to add client_max_body_size to BOTH the HTTP server block and the HTTPS server block, as shown in the example below: http { ... ###################### # HTTP server ###################### server { ... listen 80; server_name xxxx.net; client_max_body_size 100M; ... } ###################### # HTTPS server ###################### server { ... listen 443 default_server ssl; server_name xxxx.net; client_max_body_size 100M; ... } } More info on client_max_body_size here: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size Syntax: client_max_body_size size; Default: client_max_body_size 1m; Context: http, server, location Sets the maximum allowed size of the client request body, specified in the “Content-Length” request header field. If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting size to 0 disables checking of client request body size. Read More about configuring HTTPS servers here: http://nginx.org/en/docs/http/configuring_https_servers.html
NGINX
36,994,828
45
I have got my EV SSL certificate. I am following tutorials on how to use my certificate with NGINX on Ubuntu. When I try to restart my nginx, I get:

invalid number of arguments in "ssl_certificate_key" directive in /etc/nginx/sites-enabled/default

What I did so far: sudo nano /etc/nginx/sites-enabled/default

upstream app {
    # Path to Unicorn SOCK file, as defined previously
    server unix:/home/zhall/zoulfia/shared/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80;
    server_name moneytree.space www.moneytree.space " " 178.62.19.65;
    rewrite ^/(.*) https://moneytree.space/$1 permanent;
}

# HTTPS server
server {
    listen 443;
    server_name moneytree.space www.moneytree.space " " 178.62.19.65;

    root /home/zhall/zoulfia/public;

    ssl on;
    ssl_certificate /home/zhall/moneytree.space.chained.crt;
    ssl_certificate_key /home/zhall/ moneytree.space.key

    ssl_session_timeout 10m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    location / {
        try_files $uri $uri/ =404;
    }
}

When I restart nginx with sudo service nginx restart, in my log file (sudo nano /var/log/nginx/error.log) I get:

invalid number of arguments in "ssl_certificate_key" directive in /etc/nginx/sites-enabled/default

Everything is new to me, so I need your help to solve this. What am I doing wrong and, most importantly, how do I correct this mistake?

Thank you, Zoulfia
It looks like you may be missing a semicolon at the end of the ssl_certificate_key line.
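With the semicolon added (and the stray space after /home/zhall/ removed, which nginx would otherwise parse as a second argument), the lines would read:

ssl_certificate     /home/zhall/moneytree.space.chained.crt;
ssl_certificate_key /home/zhall/moneytree.space.key;

Then sudo nginx -t should confirm the configuration parses before you restart.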
NGINX
33,001,692
45
Basically, a server instance is running at somesite.com/production/folder/here?param=here&count=1. I want to point somesite.com/demo to /production/folder/here, so that when a user types somesite.com/production/demo?param=here it will work without redirecting to /production/folder/here.
server { server_name example.com; root /path/to/root; location / { # bla bla } location /demo { alias /path/to/root/production/folder/here; } } If you need to use try_files inside /demo you'll need to replace alias with a root and do a rewrite because of the bug explained here
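For completeness, a sketch of the root-plus-rewrite variant mentioned above (untested; adapt the paths to your layout):

location /demo/ {
    rewrite ^/demo/(.*)$ /production/folder/here/$1 last;   # re-enters location matching with the new URI
}

With last, the rewritten URI is matched against the locations again, so any try_files logic in the main location then applies to it.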
NGINX
21,399,789
45
How can I compile my Lua scripts into a single executable file, while also gaining the super-fast performance benefits of LuaJIT?

Background:

My Lua scripts are for a web application I created (e.g. to host http://example.com)
My current technology stack is NGINX (web server) and Lua/LuaJIT (language to retrieve dynamic content)
I have around 50+ .lua files that make up my web application (from Models/Views/Controllers)
FreeBSD 9 operating system

For simplicity's sake in deployment, I'd like to compile down all of the .lua scripts that run my web application into a single executable. Is this possible, and how? It appears that Lua officially comes with a library called SRLua.

What are the negatives to compiling my .lua down to a single executable (e.g. would performance be worse, etc.)?
Translate all of the Lua source code files to object files and put them in a static library: for f in *.lua; do luajit -b $f `basename $f .lua`.o done ar rcus libmyluafiles.a *.o Then link the libmyluafiles.a library into your main program using -Wl,--whole-archive -lmyluafiles -Wl,--no-whole-archive -Wl,-E. This line forces the linker to include all object files from the archive and to export all symbols. For example, a file named foo.lua can now be loaded with local foo = require("foo") from within your application. Details about the -b option can be found on Running LuaJIT.
NGINX
11,317,269
45
I was running a Vue.js app on its own dev server; I can access the site by the machine's public IP. But after pointing a domain at it using nginx, it shows an error loop in the console:

Invalid Host header
[WDS] Disconnected!

Because of this, script/style injection and auto-reload are not working.

Config of the dev server:

dev: {
    assetsSubDirectory: "static",
    assetsPublicPath: "/",
    disableHostCheck: true,
    host: "0.0.0.0", // '192.168.2.39', //can be overwritten by process.env.HOST
    port: 8080,
    autoOpenBrowser: false,
    errorOverlay: false,
    notifyOnErrors: false,
    poll: true,
    devtool: "cheap-module-source-map",
    cacheBusting: true,
    cssSourceMap: true
},

nginx config for the domain:

server {
    listen 80;
    listen [::]:80;
    server_name prajin.prakash.com;
    location / {
        proxy_pass http://localhost:8081;
    }
}
I believe you need to change vue.config.js module.exports = { devServer: { disableHostCheck: true } }
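On newer tooling this option has been renamed; with webpack-dev-server 4+ (where disableHostCheck was removed) the equivalent, if my reading of the changelog is right, is:

module.exports = {
  devServer: {
    allowedHosts: 'all'
  }
}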
NGINX
51,084,089
44
I'm using nginx as a load balancer in front of several upstream app servers, and I want to set a trace id to use to correlate requests with the app server logs. What's the best way to do that in Nginx? Is there a good 3rd-party module for this?

Otherwise, a pretty simple way would be to base it on a timestamp (possibly plus a random number if that's not precise enough) and set it as an extra header on the request, but the only header-setting command I see in the docs is for setting a response header.
nginx 1.11.0 added the new variable $request_id which is a unique identifier, so you can do something like: location / { proxy_pass http://upstream; proxy_set_header X-Request-Id $request_id; } See reference at http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_id
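To correlate on the nginx side as well, you can also write the same id into the access log with a custom log format (a sketch; the field selection is up to you):

log_format trace '$remote_addr - $remote_user [$time_local] "$request" $status $request_id';
access_log /var/log/nginx/access.log trace;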
NGINX
17,748,735
44
I'm trying to set up a simple virtual host, serving only static files. Trouble is, directing the browser to (in this case) jorum.dev displays the default nginx welcome page, as opposed to jorum.dev/index.html. Nginx was installed using Homebrew on Mac OS X Mountain Lion. hosts 127.0.0.1 jorum.dev jorum.dev server { listen 80; server_name jorum.dev; location / { root ~/Sites/jorum; index index.html index.htm; } } nginx.conf worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; gzip on; gzip_disable "msie6"; gzip_min_length 1100; gzip_vary on; gzip_proxied any; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/x-font-ttf font/opentype application/vnd.ms-fontobject; server_tokens off; client_max_body_size 4096k; client_header_timeout 10; client_body_timeout 10; keepalive_timeout 10 10; send_timeout 10; server { listen 80; server_name localhost; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } }
The Debian/Ubuntu nginx package comes with a default site (symlinked into sites-enabled) that takes over the default host dispatch. Simply remove the default link and it should load your site instead.

Another gotcha: the Debian/Ubuntu nginx package also comes with a default index value for the default server in the default site that reads:

index index.html index.htm index.nginx-debian.html;

I've had the file index.nginx-debian.html show up in my www root directory, which took precedence over the CGI locations.

nginx has a verbose test mode that can be used to dump the entire configuration as parsed by nginx, which you can sanity-check against your expectations:

nginx -T
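On a stock Debian/Ubuntu layout (paths assumed), removing the default site and reloading looks like:

sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t && sudo service nginx reload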
NGINX
14,460,935
44
When you turn on the nginx rewrite log with rewrite_log on;, where does the system actually log that info? It doesn't seem to be in the documentation, and a decent search through google doesn't turn anything up. I have also tried enabling and looking in both the access and error logs. No luck.
If rewrite_log on; is used then the rewrite information will be logged to error_log at notice level. There is no separate log file.
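So to actually see the rewrite trace, make sure the error_log allows notice level, e.g.:

error_log /var/log/nginx/error.log notice;
rewrite_log on;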
NGINX
9,900,443
44
How do I get rid of all the below errors in nginx? I do not have a favicon.ico.

2012/03/11 17:13:25 [error] 959#0: *116 open() "/usr/local/nginx/html/favicon.ico" failed (2: No such file or directory), client: 111.68.59.75, server: 127.0.0.1, request: "GET /favicon.ico HTTP/1.1"

I would imagine it's some line in the nginx conf?
location = /favicon.ico { log_not_found off; }
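A variant, if you would rather answer the request than quietly 404 it, is to return an empty response (a matter of taste, not a requirement):

location = /favicon.ico {
    access_log    off;
    log_not_found off;
    return 204;
}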
NGINX
9,657,065
44
My original question was how to enable HTTPS for a Django login page, and the only response, recommended that I - make the entire site as HTTPS-only. Given that I'm using Django 1.3 and nginx, what's the correct way to make a site HTTPS-only? The one response mentioned a middleware solution, but had the caveat: Django can't perform a SSL redirect while maintaining POST data. Please structure your views so that redirects only occur during GETs. A question on Server Fault about nginx rewriting to https, also mentioned problems with POSTs losing data, and I'm not familiar enough with nginx to determine how well the solution works. And EFF's recommendation to go HTTPS-only, notes that: The application must set the Secure attribute on the cookie when setting it. This attribute instructs the browser to send the cookie only over secure (HTTPS) transport, never insecure (HTTP). Do apps like Django-auth have the ability to set cookies as Secure? Or do I have to write more middleware? So, what is the best way to configure the combination of Django/nginx to implement HTTPS-only, in terms of: security preservation of POST data cookies handled properly interaction with other Django apps (such as Django-auth), works properly any other issues I'm not aware of :) Edit - another issue I just discovered, while testing multiple browsers. Say I have the URL https://mysite.com/search/, which has a search form/button. I click the button, process the form in Django as usual, and do a Django HttpResponseRedirect to http://mysite.com/search?results="foo". Nginx redirects that to https://mysite.com/search?results="foo", as desired. However - Opera has a visible flash when the redirection happens. And it happens every search, even for the same search term (I guess https really doesn't cache :) Worse, when I test it in IE, I first get the message: You are about to be redirected to a connection that is not secure - continue? After clicking "yes", this is immediately followed by: You are about to view pages over a secure connection - continue? Although the second IE warning has an option to turn it off - the first warning does not, so every time someone does a search and gets redirected to a results page, they get at least one warning message.
For the 2nd part of John C's answer, and Django 1.4+... Instead of extending HttpResponseRedirect, you can change the request.scheme to https. Because Django is behind Nginx's reverse proxy, it doesn't know the original request was secure. In your Django settings, set the SECURE_PROXY_SSL_HEADER setting: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') Then, you need Nginx to set the custom header in the reverse proxy. In the Nginx site settings: location / { # ... proxy_set_header X-Forwarded-Proto $scheme; } This way request.scheme == 'https' and request.is_secure() returns True. request.build_absolute_uri() returns https://... and so on...
NGINX
8,153,875
44
I'm serving Angular 2 application with nginx using location section this way: location / { try_files $uri $uri/ /index.html =404; } try_files directive tries to find the requested uri in root directory and if it fails to find one it simply returns index.html How to disable caching of index.html file?
Found a solution using nginx named locations: location = / { add_header Cache-Control no-cache; expires 0; try_files /index.html =404; } location / { gzip_static on; try_files $uri @index; } location @index { add_header Cache-Control no-cache; expires 0; try_files /index.html =404; }
NGINX
41,631,399
43
I'm unfortunately not much of a system administrator and have come upon a problem that has me banging my head against the wall. The short story is that I'm running Nginx on EC2 (Ubuntu 14.04.4 LTS) to (a) host my company's marketing site (https://example.com, which incidentally is Wordpress) and (b) serve as a reverse proxy to our Rails app running on Heroku (https:// app.example.com), for certain paths. We use the same SSL certificate for both example.com and app.example.com. All of this has worked great for 8-10 months, but I recently switched from Heroku's paid SSL addon to the new free SSL offering, and now our reverse proxy is broken. In checking the Nginx error logs, I see the following: SSL_do_handshake() failed (SSL: error:14094438:SSL routines:SSL3_READ_BYTES:tlsv1 alert internal error:SSL alert number 80) while SSL handshaking to upstream, client: ipaddress1, server: example.com, request: "GET /proxiedpath/proxiedpage HTTP/1.1", upstream: "https:// ipaddress2:443/proxiedpath/proxiedpage", host: "example.com" I've tried to search around for some additional guidance - I've upgraded Nginx (1.10.1) and OpenSSL (1.0.2h) with no luck. I suspected the issue might be due to Heroku's use of SNI in the new free SSL feature (https://devcenter.heroku.com/articles/ssl-beta), but haven't been able to determine why this might be a problem. A few additional points on my exploration to this point: When I switched to the new free Heroku SSL, I changed our app.example.com DNS record to point to app.example.com.herokudns.com, as instructed by the docs. The application can be accessed normally through app.example.com and when I run an nslookup on app.example.com and app.example.com.herokudns.com, I get the same IP address back. However... I cannot access the application through either the IP address returned from nslookup or app.example.com.herokudns.com. I suspect this is normal and expected but don't know enough to say exactly why this is. And... The IP address returned from the nslookup is NOT the same as the IP address referenced in the log's error message above ("ipaddress2"). In fact, "ipaddress2" is not consistent throughout the logs - it seems to change regularly. Again I don't know enough to know what I don't know... load balancing on Heroku's side? And finally, my Nginx reverse proxy is configured as follows in nginx.conf: http { client_max_body_size 500M; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_names_hash_bucket_size 64; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; server { listen 443 default_server; server_name example.com; root /usr/share/nginx/html; index index.php index.html index.htm; ssl on; ssl_certificate mycompanycert.crt; ssl_certificate_key mycompanykey.key; ssl_session_timeout 5m; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES"; ssl_prefer_server_ciphers on; error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location / { try_files $uri $uri/ /index.php?q=$uri&$args; } location ^~ /proxiedpath/ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_pass https://app.example.com/proxiedpath/; } } } Any help is greatly appreciated - thanks very much!
I was able to solve this today and wanted to post the solution in case others run into the same issue. It turns out that the problem was related to SNI after all. I found this ticket on nginx.org: https://trac.nginx.org/nginx/ticket/229

Which led me to the proxy_ssl_server_name directive: http://nginx.org/r/proxy_ssl_server_name

By setting it to "on" in your config, you'll be able to proxy to upstream hosts using SNI. Thanks to all who commented with suggestions!
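Applied to the configuration from the question, the proxied location gains one line (proxy_ssl_name is optional; it defaults to the host from proxy_pass):

location ^~ /proxiedpath/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto https;
    proxy_ssl_server_name on;                # send SNI to the upstream
    # proxy_ssl_name app.example.com;
    proxy_pass https://app.example.com/proxiedpath/;
}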
NGINX
38,375,588
43
I want to run a script right after running docker-compose up -d. Here is my snippet of docker-compose.yml (the other settings are the mysql server, redis, etc., but they are not causing any problems):

web:
    image: nginx
    container_name: web-project
    volumes:
        - ./code:/srv
    working_dir: /srv/myweb
    extra_hosts:
        - "myweb.local:127.0.0.1"
    ports:
        - 8081:80
    # tty: true
    command: sh /srv/scripts/post-run-web.sh

So whenever I run docker-compose up -d or docker-compose up, it all stops (the containers do not keep running), although my shell script is simple (running echos... or phpunit). Here is my script:

#!/bin/bash
echo running post install scripts for web..;
cd /srv/myweb
npm install
composer self-update
composer update

And this is the error I get. It is as if the server (nginx) is not running yet. Also, if I connect to the container using exec bash and check the processes, I do not see nginx running (yet).

web_1 | You are already using composer version 7a9eb02190d334513e99a479510f87eed18cf958.
web_1 | Loading composer repositories with package information
web_1 | Updating dependencies (including require-dev)
web_1 | Generating autoload files
web-project exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
Stopping mysql-project... done
Stopping rabbitmq-project... done
Stopping redis-project... done

So why is it exiting, although the script is syntax-wise correct? How can I make it run correctly? What am I setting up wrong?
command overrides the default command. That's the reason your container stops: nginx never starts. At the end of your script you have to run nginx in the foreground:

#!/bin/bash
echo running post install scripts for web..;
cd /srv/myweb
npm install
composer self-update
composer update
nginx -g 'daemon off;'    # keep nginx in the foreground so the container stays up

By the way, I suggest you change your script and run npm install and composer update only if required (thus only if some file in /srv/myweb does not exist), because as written they needlessly increase your container's startup time.

Note that by doing so, nginx will never catch the SIGTERM signal sent by docker stop, which can cause it to be abruptly killed. If you want to be sure that SIGTERM is received by nginx, replace the last line with exec nginx -g 'daemon off;'. This replaces the bash process with nginx itself.
NGINX
33,009,825
43
I have two docker containers with nginx. container1 is linked to container2. Docker then adds an entry to /etc/hosts which I entered into the nginx configuration like so: server { location ~ ^/some_url/(.*)$ { proxy_pass http://container1/$1; } } I can ping container1 from container2, but nginx cannot resolve it: *1 no resolver defined to resolve container1 How can I proxy_pass a request to another docker container?
Use an upstream block instead of the container name directly upstream backend { server container1; } server { location ~ ^/some_url/(.*)$ { proxy_pass http://backend/$1; } } This should allow normal name resolution to occur providing a way to easily use docker links with nginx.
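An alternative sketch for containers on a user-defined Docker network, where Docker's embedded DNS at 127.0.0.11 serves container names (this does not apply to legacy --link setups, which only write /etc/hosts):

resolver 127.0.0.11 valid=10s;

location ~ ^/some_url/(.*)$ {
    set $upstream_host container1;           # a variable forces per-request resolution
    proxy_pass http://$upstream_host/$1;
}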
NGINX
28,028,789
43
My hosts file maps 127.0.0.1 to localhost.

$ curl -I 'localhost'
curl: (7) Failed to connect to localhost port 80: Connection refused

And then:

$ curl -I 127.0.0.1
HTTP/1.1 200 OK
Server: nginx/1.2.4
Date: Wed, 09 Apr 2014 04:20:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Oct 2012 21:48:34 GMT
Connection: keep-alive
Accept-Ranges: bytes

In my hosts file I have:

127.0.0.1 localhost

It appears that the curl command fails to recognize entries in /etc/hosts. Can someone explain why?

Update: I've yet to try this, but I've discovered you can configure nginx to respond to both IPv4 and IPv6.
Since you have a ::1 localhost line in your hosts file, it would seem that curl is attempting to use IPv6 to contact your local web server. Since the web server is not listening on IPv6, the connection fails. You could try to use the --ipv4 option to curl, which should force an IPv4 connection when both are available.
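Two quick ways to confirm this, one on each side:

# client side: force IPv4
curl --ipv4 -I localhost

# server side: make nginx listen on IPv6 too (server block excerpt)
listen 80;
listen [::]:80;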
NGINX
22,952,676
43
I'm trying to set up SSL on Nginx. It doesn't work, and I am getting the following error in the error log, which is getting passed up from the OpenSSL library which nginx was compiled with. I don't know what that library is, but it's version 0.8.54 of nginx, and I installed it using apt-get on Ubuntu Linux. 2012/02/21 07:06:33 [emerg] 4071#0: SSL_CTX_use_PrivateKey_file("/exequias/certs/exequias.com.key") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error: 0906A068:PEM routines:PEM_do_header:bad password read error:140B0009:SSL routines: SSL_CTX_use_PrivateKey_file:PEM lib) I have ensured that the file permissions on the private key file are not stopping nginx from reading it. It is an RSA private key, generated with openssl rsa. Any ideas what might be causing this?
Remove the key pass phrase: openssl rsa -in key.pem -out newkey.pem If the certificate and the key are together: openssl rsa -in mycert.pem -out newcert.pem openssl x509 -in mycert.pem >>newcert.pem Source: http://www.madboa.com/geek/openssl/#key-removepass
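Alternatively, on nginx 1.7.3 or newer you can keep the pass phrase and let nginx read it from a file instead (a sketch; the asker's 0.8.54 predates this, and the cert path is assumed):

ssl_certificate     /exequias/certs/exequias.com.crt;
ssl_certificate_key /exequias/certs/exequias.com.key;
ssl_password_file   /etc/nginx/ssl_passphrases.txt;   # one pass phrase per line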
NGINX
9,380,403
43
What are the advantages of having nginx or another web-server running as a reverse-proxy in front of the Node.JS? What does it provide? (This question is intended for matters concerning web-apps, not web-pages). Thank you.
I think the greatest benefit is that you're then able to use the same port (80) for multiple applications. Otherwise, you'd need a new IP address for each nodejs application you have. Depending on how you set things up, you can also configure different folders and subdomains to different nodejs apps running on different ports. If you're building something big or complex, this is pretty great. Imagine being able to run your APIs on one node application, your website from another, and the logged-in website (member's area, dashboard, etc.) in another app. Your load balancer can determine who needs to go where (example.com/api* -> api.js, example.com/dashboard* -> dashboard.js, example.com -> app.js). This is not only useful for scaling, but also when things break, not everything breaks at once. To the maturity thing, meh. Nodejs + forever + node-http-proxy = Amazing. Run 1 proxy server for all of your apps with a minimal config/complexity (lower chance of failure). Then have fun with everything else. Don't forget to firewall off your internal ports, though ;). Some people make note of load balancing, which true, is a benefit. However, load balancing isn't something that most people will benefit from, since a single threaded, non-blocking nodejs thread can handle quite impressively large loads. I truly wouldn't even consider this as a difference if I were you. Load balancing is easy enough to implement when you need it, but otherwise utterly useless until you do. Also note, if you do go with a non-node proxy solution (nginx, tornado, etc.), just be sure NOT to use one that blocks. Apache blocks. Nginx doesn't. You don't want to throw away one of the greatest benefits of using nodejs in the first place on a crummy server.
NGINX
6,763,571
43
I'm converting my mediawiki site to use nginx as a frontend for static files with apache on the backend for php. I've gotten everything working so far except for when I view the root directory "example.com" it tries to serve a directory listing and gives a 403 error since I have that disabled and don't have an index file there. The apache rewrite rule I have in place right now is simply: RewriteRule ^$ /wiki/Main_Page [L] I tried something similar with a location directive in nginx, but it's not working: location = / { rewrite "^$" /wiki/Main_Page; } The rest of my location directives are: location / { try_files $uri $uri/ @rewrite; } location @rewrite { rewrite ^/wiki/(.*)$ /w/index.php?title=$1&$args; } location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { try_files $uri /w/index.php?title=$1&$args; expires max; log_not_found off; } location ~ \.php?$ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8080; } I can simply put an index.php file with header('Location:') in it, but I'd rather just do it properly with a rewrite rule. All the examples I've found online for running mediawiki with nginx run the wiki as wiki.example.com instead of a /wiki/ subdirectory. Edit: I also tried adding to the try_files like this: try_files $uri $uri/ @rewrite /wiki/Main_Page; with the same 403 error result.
I found help in the nginx irc chat. Basically what I needed to do was use a return instead of rewrite. So I changed this: location = / { rewrite "^$" /wiki/Main_Page; } to this: location = / { return 301 http://www.example.com/wiki/Main_Page; }
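For what it's worth, the original rewrite failed because at that point nginx matches the pattern against the URI, which is "/", so "^$" can never match. If you prefer an internal rewrite (keeping "/" in the address bar) over an external redirect, a sketch would be:

location = / {
    rewrite ^ /wiki/Main_Page last;   # re-enters location matching with the new URI
}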
NGINX
17,738,088
42
I am running nginx as a reverse proxy for the site example.com, to load-balance a ruby application running on a backend server. I have the following proxy_set_header field in nginx, which will pass host headers to the backend ruby app. This is required by the ruby app to identify the subdomain names.

location / {
    proxy_pass http://rubyapp.com;
    proxy_set_header Host $http_host;
}

Now I want to create an alias beta.example.com, but the Host header passed to the backend should still be www.example.com, otherwise the ruby application will reject the requests. So I want something similar to the below inside the location directive:

if ($http_host = "beta.example.com") {
    proxy_pass http://rubyapp.com;
    proxy_set_header Host www.example.com;
}

What is the best way to do this?
You cannot use proxy_pass in an if block, so I suggest doing something like this before setting the proxy header:

set $my_host $http_host;
if ($http_host = "beta.example.com") {
    set $my_host "www.example.com";
}

And now you can just use proxy_pass and proxy_set_header without an if block:

location / {
    proxy_pass http://rubyapp.com;
    proxy_set_header Host $my_host;
}
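A map is the usual if-free alternative for this kind of lookup; it must live at the http level (a sketch under that assumption):

map $http_host $my_host {
    default           $http_host;
    beta.example.com  www.example.com;
}

server {
    location / {
        proxy_pass http://rubyapp.com;
        proxy_set_header Host $my_host;
    }
}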
NGINX
14,352,690
42
I'm wondering how to find out which location was used to process a request in nginx. E.g.:

# 1
location / {}

# 2
location ~ /[\w\-]+\.html {}

# 3
location ~ /\w+\.html {}

How do I know whether a URI like /mysite is processed by the 3rd location and not the 2nd? I tend to use add_header for this matter:

location / {
    add_header location 1;
}

location ~ /(\w+\-)\.html {
    add_header location 2;
}

location @named {
    add_header location named;
}

I'd like to know whether there is a better solution, or what you personally use for debugging purposes.
If you just want to see which block was used, and don't care about returning otherwise valid results, it might be more straight-forward to use return rather than add_header. location / { return 200 'location 1'; } location ~ /(\w+\-)\.html { return 200 'location 2'; } location @named { return 200 'location named'; }
NGINX
12,703,702
42
I'm deploying a Rails app to production. It seems that Puma is fast and handles many of the things I want in a web server. I'm wondering if I even need to bother with Nginx, and what I'd be missing out on if I just used Puma?
Nginx is a web server and Puma is an application server. Both have their advantages, and you need both. Some examples: Static redirects: you could set up nginx to redirect all HTTP traffic to the same URL over HTTPS, so such trivial requests never hit your app server. Multipart uploads: nginx is better suited to handling multipart uploads; it buffers the request and hands it to Puma as a single body. Serving static assets: it is recommended to serve static assets (those under /public/ in Rails) via the web server without touching your app server. There are also some basic DDoS protections built into nginx.
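A minimal sketch of that division of labor, assuming Puma listens on a Unix socket at /var/run/puma.sock and the Rails app lives under /var/www/app (both paths are placeholders):

    upstream puma {
        server unix:/var/run/puma.sock;
    }

    server {
        listen 80;
        root /var/www/app/public;

        # serve static assets directly; everything else goes to Puma
        try_files $uri @puma;

        location @puma {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://puma;
        }
    }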
NGINX
50,516,699
41
I'm trying to set X-Frame-Options ALLOW-FROM in Nginx, but all the settings I have tried so far resulted in the following Chrome error: Invalid 'X-Frame-Options' header encountered when loading 'https://domain.com/#/register': 'ALLOW-FROM domain.com' is not a recognized directive. The header will be ignored. The options I tried are these (I also tried the FQDN with an https:// prefix): add_header X-Frame-Options "Allow-From domain.com"; add_header X-Frame-Options "ALLOW-FROM domain.com"; add_header X-Frame-Options "ALLOW-FROM: domain.com"; add_header X-Frame-Options "Allow-From: domain.com"; add_header X-Frame-Options ALLOW-FROM "domain.com"; add_header X-Frame-Options ALLOW-FROM domain.com;
In Chrome and Safari you need to use Content-Security-Policy instead: Content-Security-Policy: frame-ancestors domain.com You can find more details here: https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CSP_policy_directives
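In nginx terms that would be, for example (frame-ancestors accepts a space-separated list of origins, and 'self' is also valid):

    add_header Content-Security-Policy "frame-ancestors domain.com";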
NGINX
30,731,290
41
I want to set up Nginx as a reverse proxy for an https service, because we have a special use case where we need to "un-https" a connection: http://nginx_server:8080/myserver ==> https://mysecureservice But what happens is that the actual https service isn't proxied; Nginx redirects me to the actual service, so the URL in the browser changes. I want to interact with Nginx as if it were the actual service, just without https. This is what I have: server { listen 0.0.0.0:8080 default_server; location /myserver { proxy_pass https://myserver/; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; } }
You have to use the proxy_redirect directive to handle the redirection. It sets the text that should be changed in the “Location” and “Refresh” header fields of a proxied server response. Suppose a proxied server returned the header field “Location: https://myserver/uri/”. The directive will rewrite this string to “Location: http://nginx_server:8080/uri/”. Example: proxy_redirect https://myserver/ http://nginx_server:8080/; Source: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
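Applied to the configuration in the question, a sketch would look like this (hostnames as in the question; since the trailing slash on proxy_pass strips the /myserver prefix, the redirect target re-adds it):

    server {
        listen 0.0.0.0:8080 default_server;

        location /myserver {
            proxy_pass https://myserver/;
            # rewrite Location/Refresh headers sent by the backend
            proxy_redirect https://myserver/ http://nginx_server:8080/myserver/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $host;
        }
    }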
NGINX
24,520,540
41
Is there a way to inject a few lines of script etc. into each served php/html/etc. page? For example some custom JavaScript after the </head> tag? I know you should be able to use Lua in nginx, but is there a better solution? I am running multiple different web applications behind nginx, so this feels like the proper place to do it. I don't have access to the source code of each application, and maintaining them all would be cumbersome.
I found the way to do this: http://nginx.org/en/docs/http/ngx_http_sub_module.html location / { sub_filter </head> '</head><script language="javascript" src="$script"></script>'; sub_filter_once on; }
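One gotcha worth noting when the pages come from a proxied backend: sub_filter (part of ngx_http_sub_module, which must be compiled in) cannot rewrite compressed responses, so you may have to ask the upstream for plain output. A sketch, with a hypothetical backend name and script path:

    location / {
        proxy_pass http://backend;
        # force an uncompressed upstream response so sub_filter can see the markup
        proxy_set_header Accept-Encoding "";
        sub_filter '</head>' '</head><script src="/custom.js"></script>';
        sub_filter_once on;
    }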
NGINX
19,700,871
41
Here is my situation: I will have one frontend server running Nginx, and multiple backend servers running Apache + Passenger with different Rails applications. I am NOT trying to do any load balancing. What I need to do is set up Nginx to proxy connections to specific servers based on the URL. I.e., client.example.com should point to x.x.x.100:80, client2.example.com should point to x.x.x.101:80, etc. I am not that familiar with Nginx, but I could not find a specific configuration online that fit my situation.
You can match the different URLs with server {} blocks, then inside each server block you'd have the reverse proxy settings. Below is an illustration (note that proxy_pass must live inside a location block): server { server_name client.example.com; location / { # app1 reverse proxy settings follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://x.x.x.100:80; } } server { server_name client2.example.com; location / { # app2 reverse proxy settings follow proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://x.x.x.101:80; } } Also, you may add further Nginx settings (such as error_page and access_log) as desired in each server {} block.
NGINX
13,240,840
41
I only want to redirect the root path from domain A to domain B. For example, if a user types in https://www.a.com/ or https://www.a.com or http://a.com, it should redirect to https://www.b.com/; but if a user types in https://www.a.com/something/, it should stay there without redirecting. I tried the following: location / { return 301 https://www.b.com/; } but it redirects everything to www.b.com, even when the user types https://www.a.com/something/.
I got it. location ~ ^/$ { return 301 https://www.b.com/; }
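An exact-match location does the same thing without a regular expression, and is the more conventional form:

    location = / {
        return 301 https://www.b.com/;
    }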
NGINX
41,755,100
40
My application uses nginx, with uWSGI on the server side. When I do a large request (with a response time > 4s), the following appears: SIGPIPE: writing to a closed pipe/socket/fd (probably the client disconnected) on request _URL_ (ip XX.XX.XX.XX) !!! uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 287] during GET _URL_ (XX.XX.XX.XX) OSError: write error It seems that uWSGI tries to write to a stream that has already been closed. When I check the nginx log (error.log), I see: upstream prematurely closed connection while reading response header from upstream ... Of course, my client (REST client or browser) receives a 502 error, and I always get this error after ~4s. However, I don't know how to prevent this issue. I tried to set some parameters in my nginx config file: location my_api_url { [...] uwsgi_buffer_size 32k; uwsgi_buffers 8 32k; uwsgi_busy_buffers_size 32k; uwsgi_read_timeout 300; uwsgi_send_timeout 300; uwsgi_connect_timeout 60; } But the issue is still here. I also tried to set these parameters in the uWSGI configuration file (wsgi.ini): buffer-size=8192 ignore-sigpipe=true ignore-write-errors=true Before trying to optimize the response time, I hope this issue has a solution; I haven't found a working one in other posts. I work with a large amount of data, so in some cases my response time will be between 4-10s. Hope you can help me :) Thanks a lot in advance.
It may be the case that when you upload things, you use chunked encoding. There is a uWSGI option --chunked-input-timeout, which by default is 4 seconds (it defaults to the value of --socket-timeout, which is 4 seconds). Though the problem may theoretically lie somewhere else, I suggest you try the aforementioned options. Plus, annoying exceptions are the reason why I have ignore-sigpipe=true ignore-write-errors=true disable-write-exception=true in my uWSGI config (note that I provide 3 options, not 2): ignore-sigpipe makes uWSGI not show SIGPIPE errors; ignore-write-errors makes it not show errors with e.g. uwsgi_response_writev_headers_and_body_do; disable-write-exception prevents OSError generation on writes.
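Put together as a sketch of the relevant wsgi.ini section (the 300-second value is an assumption; pick whatever exceeds your slowest response):

    [uwsgi]
    # raise the timeouts past your worst-case response time
    socket-timeout = 300
    chunked-input-timeout = 300
    # silence the noisy write errors
    ignore-sigpipe = true
    ignore-write-errors = true
    disable-write-exception = true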
NGINX
36,156,887
40
NGINX acting as a caching proxy encounters problems when fetching content from a CloudFront server over HTTPS. This is an extract from NGINX's error log: 2014/08/14 16:08:26 [error] 27534#0: *11560993 SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream, client: 82.33.49.135, server: localhost, request: "GET /static/images/media-logos/best.png HTTP/1.1", upstream: "https://x.x.x.x:443/static/images/media-logos/best.png", I tried different proxy settings like proxy_ssl_protocols and proxy_ssl_ciphers, but no combination worked. Any ideas?
I had the exact same problem and spent a couple of hours... I guess you are using an older version of nginx (lower than 1.7)? In nginx 1.7 you can use this directive: proxy_ssl_server_name on; This will force nginx to use SNI. Also, you should set the SSL protocols: proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; For earlier versions you may be able to use this patch (but I can't verify that it works): http://trac.nginx.org/nginx/ticket/229 2019 Update: You should avoid TLSv1 and TLSv1.1 and disable them if possible. I'll leave them in the answer as they are still valid for SNI.
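In context, the proxy block would look roughly like this (the CloudFront hostname is a placeholder, and TLSv1.3 in proxy_ssl_protocols needs a recent nginx/OpenSSL):

    location / {
        proxy_pass https://dxxxxxxxxxxxx.cloudfront.net;
        # send SNI so the upstream presents the right certificate
        proxy_ssl_server_name on;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
    }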
NGINX
25,329,941
40
Good morning, I am using an Amazon S3 bucket as an image server, and I want to address the bucket through a subdomain of my site. E.g., a picture currently lives at https://s3-sa-east-1.amazonaws.com/nomeBucket/pasta/imag.png and I access it through that same link. I would like it to be imagens.mydomain.com.br/folder/imag.png instead. Is there any way I can do this, i.e. point a subdomain at a bucket? I've tried Amazon Route 53 with a CNAME pointing at https://s3-sa-east-1.amazonaws.com/nomeBucket/ yesterday, but apparently it did not work. Has someone already done something similar and/or knows how to help me? Note: I'm using nginx. Do I also need to configure it for the subdomain? Thank you
You need to rename your bucket to match the custom domain name (e.g. imagens.mydomain.com.br) and set up that domain as a CNAME to <bucket-name>.s3.amazonaws.com (in your case, imagens.mydomain.com.br.s3.amazonaws.com). The full instructions are available here: http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
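The DNS side is then a single record; in zone-file notation it would look something like this (hostname and bucket name must match, as described above):

    imagens.mydomain.com.br.  IN  CNAME  imagens.mydomain.com.br.s3.amazonaws.com.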
NGINX
18,765,934
40
Is it possible to have the 40x and 50x errors served by a single location rule? Something like: error_page 403 /403.html; error_page 404 /404.html; error_page 405 /405.html; error_page 500 501 502 503 504 /5xx.html; location ~ /(?:40[345]|5xx)[.]html$ { root /var/www/default/error; }
error_page 403 /error/403.html; error_page 404 /error/404.html; error_page 405 /error/405.html; error_page 500 501 502 503 504 /error/5xx.html; location ^~ /error/ { internal; root /var/www/default; } http://nginx.org/r/location http://nginx.org/r/error_page http://nginx.org/r/root
NGINX
13,621,915
40
I need to write a C++ interface that can read our data structure and return output for queries over HTTP. Server need: it should be able to serve 100 clients at the same time. Why C++: all the code is already written in C++, so we only need to write an HTTP layer in C++. That's why I am choosing C++ instead of a more conventional web-programming language. I am thinking of using nginx to serve static files and using its proxy_pass to communicate with the C++ service. There are two approaches I have found: write a FastCGI C++ module, or write a node.js C++ module. Please suggest any other approach if you have one. Can you list the pros and cons of each method based on prior experience?
No one here seems to have addressed the actual question, though some nice work arounds have been offered. I've been able to build C++ modules for nginx with a couple of minor changes. Change the module source file name to end with .cpp so gcc realizes it is dealing with C++. Make sure all your Nginx includes (e.g. ngx_config.h, ngx_core.h, etc.) are wrapped with an extern "C" { } structure. Similarly make sure any functions called through Nginx function pointers are declared with a wrapper. Add --with-ld-opt="-lstdc++" to your "configure" invocation when setting up Nginx. With those three steps your module should compile, build, link, and actually work.
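For illustration, the wrapper from step 2 would look roughly like this at the top of the module source (a sketch; the exact header list depends on your module):

    // module source renamed to *.cpp so the compiler treats it as C++
    extern "C" {
    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>
    }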
NGINX
9,083,869
40
I see the Nginx HttpRewriteModule documentation has an example to rewrite a www-prefixed domain to a non-www-prefixed domain: if ($host ~* www\.(.*)) { set $host_without_www $1; rewrite ^(.*)$ http://$host_without_www$1 permanent; # $1 contains '/foo', not 'www.mydomain.com/foo' } How can I do the reverse-- rewrite a non-www-prefixed domain to a www-prefixed domain? I thought maybe I could do something like the following but Nginx doesn't like the nested if statement. if ($host !~* ^www\.) { # check if host doesn't start with www. if ($host ~* ([a-z0-9]+\.[a-z0-9]+)) { # check host is of the form xxx.xxx (i.e. no subdomain) set $host_with_www www.$1; rewrite ^(.*)$ http://$host_with_www$1 permanent; } } Also I wanted this to work for any domain name without explicitly telling Nginx to rewrite domain1.com -> www.domain1.com, domain2.com -> www.domain2.com, etc. since I have a large number of domains to rewrite.
As noted in the Nginx documentation, you should avoid using the if directive in Nginx where possible, because as soon as you have an if in your configuration your server needs to evaluate every single request to decide whether to match that if or not. A better solution would be multiple server directives. server { listen 80; server_name website.com; return 301 $scheme://www.website.com$request_uri; } server { listen 80; server_name www.website.com; ... } If you're trying to serve an SSL (HTTPS) enabled site, you have more or less three different options. Set up multiple IP addresses, with each server directive listening on its own IP (or different ports if that's an option for you). This option needs SSL certificates for both website.com and www.website.com, so either you have a wildcard certificate, a UNI certificate (multiple domains) or just plainly two different certificates. Do the rewrite in the application. Use the dreaded if directive. There is also an option to use SNI, but I'm not sure this is fully supported as of now.
NGINX
1,629,231
40
Introduction From NGINX version 1.9.11 and upwards, a new feature is introduced: dynamic modules. With dynamic modules, you can optionally load separate shared object files at runtime as modules: both third-party modules and some native NGINX modules. (source) My setup and the problem I have NGINX installed from the mainline (currently 1.9.14) so it is capable of using dynamic modules. It also has the module I want dynamically enabled: nginx -V nginx version: nginx/1.9.14 built by gcc 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.1) built with OpenSSL 1.0.1f 6 Jan 2014 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules ... --with-http_geoip_module=dynamic ... Note the --with-http_geoip_module=dynamic which loads the module I need (dynamically). Unfortunately, the documentation is lacking some details and I am unable to set this up. I have an existing NGINX installation (not from source), but as far as I can understand I just need to build the module, place the generated module file in the right NGINX folder and enable it in the config file. What I tried so far I tested this on a different machine (with the same configuration, but not a production machine), but I don't see the ngx_http_geoip_module.so file. The commands I used: wget http://nginx.org/download/nginx-1.9.14.tar.gz tar -xzf nginx-1.9.14.tar.gz cd nginx-1.9.14/ ./configure --with-http_geoip_module=dynamic The questions Is it a problem that I try to build the module on a system that has NGINX installed not from source? Why is there no .so file generated by my commands?
I had the same question, and @vladiastudillo's answer was the missing piece I needed. First add the nginx stable repo: sudo add-apt-repository ppa:nginx/stable Then run apt update: sudo apt-get update And get the nginx geoip module: sudo apt-get install nginx-module-geoip This will download and load the module to /usr/lib/nginx/modules. To load the nginx module, open nginx.conf: sudo nano /etc/nginx/nginx.conf and add the line below in the main context: load_module "modules/ngx_http_geoip_module.so"; The module will be loaded when you reload the configuration or restart nginx. To dynamically “unload” a module, comment out or remove its load_module directive and reload the nginx configuration.
NGINX
36,554,405
39
After I changed ICG to nginx all routes except index page does not work. Laravel Config: #/etc/nginx/sites-enabled/laravel server { listen 80; root /var/www/home; index index.php; server_name 192.168.178.71; access_log /var/www/home/storage/app/logs/laravel-nginx-access.log; error_log /var/www/home/storage/app/logs/laravel-nginx-error.log error; location /home { root /home/public; try_files $uri $uri/ /index.php?$query_string; } location = /favicon.ico { log_not_found off; access_log off; } location = /robots.txt { log_not_found off; access_log off; } # ERROR error_page 404 /index.php; # DENY HTACCESS location ~ /\.ht { deny all; } } Default config: # /etc/nginx/sites-enabled/default server { listen 80 default_server; listen [::]:80 default_server; root /var/www; # Add index.php to the list if you are using PHP index index.php index.html index.htm; server_name 192.168.178.71 localhost; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ index.php?$query_string; autoindex on; # Remove trailing slash to please routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } } location ~ \.php$ { #try_files $uri /index.php =404; try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME document_root$fastcgi_script_name; } location ~ /\.ht { deny all; } } my nginx config #/etc/nginx/nginx.conf user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { disable_symlinks off; ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } What I tried: /var/www/home# (home folder is laravel folder) sudo chown -R www-data:www-data * /var/www/home# sudo chown -R root:root * also I tried to change try_files $uri $uri/ /index.php?$query_string; try_files $uri $uri/ /index.php$is_args$args; try_files $uri $uri/ /index.php; php artisan cache:clear Mostly questions in google i have read, but nothing helps me. My phpinfo - link
This is the correct basic config for Laravel and Nginx: server { listen 443 ssl default_server; root /var/www/laravel/public/; index index.php; ssl_certificate /path/to/cert; ssl_certificate_key /path/to/key; location / { try_files $uri $uri/ /index.php$is_args$args; } # pass the PHP scripts to FastCGI server listening on /var/run/php-fpm.sock location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_split_path_info ^(.+?\.php)(/.+)$; include fastcgi_params; } }
NGINX
35,634,530
39
After I log in and the cookie is set, I get a 502 error. When I read the log I see: 2014/05/17 01:54:43 [error] 11013#0: *8 upstream sent too big header while reading response header from upstream, client: 83.248.134.236, server: , request: "GET /administration HTTP/1.1", upstream: After some quick googling I found http://developernote.com/2012/09/how-i-fixed-nginx-502-bad-gateway-error/ and I want to try setting fastcgi_buffers and fastcgi_buffer_size to different values. But how do I set these on nginx in Amazon Elastic Beanstalk? The nginx server sits in front of my docker instance.
Amazon actually recommends editing the staging version of the nginx deployment file. There are several located at /tmp/deployment/config/, one for editing the general 'http' context, and then a few for configuring different aspects of the server. I wanted to attach caching functionality to the default proxy server, so I wrote an .ebextensions config file to replace #etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf, which is then copied over to /etc/nginx/conf.d during deployment. You can inline the file if it's simple enough, but I put mine in S3 so that different applications can pull it down and use it. Here's the config file: commands: 01-get-nginx-conf-file: command: aws s3 cp s3://<bucket-name>/custom-nginx.conf /home/ec2-user container_commands: 01-replace-default-nginx-config: command: mv -f /home/ec2-user/custom-nginx.conf /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
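For this particular error ("upstream sent too big header"), the custom conf would typically raise the proxy buffer sizes; a sketch of the relevant lines (the values are starting points to tune, not recommendations):

    proxy_buffer_size 16k;
    proxy_buffers 8 16k;
    proxy_busy_buffers_size 32k;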
NGINX
23,709,841
39
I'm setting up an Nginx server with an SSL certificate. The domain with the SSL is www.mydomain.example I want to redirect all requests from: http://mydomain.example, http://www.mydomain.example, & https://mydomain.example to https://www.mydomain.example I have the following server blocks set up currently: server{ listen 443 ssl; root /www/mydomain.example/; ssl_certificate /ssl/domain.crt; ssl_certificate_key /ssl/domain.key; . . . } server{ listen 80; server_name mydomain.example; return 301 https://www.mydomain.example$request_uri; } server{ listen 80; server_name www.mydomain.example; return 301 https://www.mydomain.example$request_uri; } server{ listen ssl 443; server_name mydomain.example; return 301 https://www.mydomain.example$request_uri; } This currently does not work, but I don't understand why not. I can get a combination of either HTTP -> HTTPS working or no-www -> www working, but mixing them as above does not work.
The SSL redirect won't work if your SSL certificate doesn't support the non-www domain. The config is correct but can be reduced to just one redirect server block. Also, don't forget to reload Nginx: sudo service nginx reload server { listen 80; listen 443 ssl; server_name example.com; # add ssl settings return 301 https://www.example.com$request_uri; }
NGINX
21,106,998
39
nginx.conf: server_tokens off; Why would this be ignored? The header is still sent: Server: nginx No, the other included config files do not contain any server_tokens configuration. Yes, I did restart all services.
To cite the docs on the server_tokens directive: Enables or disables emitting nginx version in error messages and in the “Server” response header field. According to the docs, it thus doesn't prevent the generation of the Server header but only prevents the addition of the exact version. If you want to completely remove the Server header, you could use the ngx_headers_more module.
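With that module compiled in, stripping the header is a one-liner (this directive comes from ngx_headers_more, not stock nginx):

    more_clear_headers Server;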
NGINX
20,247,184
39
Can /etc/hosts be used instead of resolver when using proxy_pass? I need to perform a proxy_pass to the same nginx machine. Is there a way to resolve the domains using the machine's /etc/hosts file instead of specifying a DNS server through the "resolver" property? This would save me the additional hops needed to reach the same server. I have tried mapping the internal IP to the domain in the /etc/hosts file, but nginx still reads from the DNS server set in the resolver property. Or is there a way to make the HttpProxy module consider the /etc/hosts file settings? Thanks for any advice you could share. This is the same question I posted in the nginx forum: http://forum.nginx.org/read.php?11,218997
You can get around this by installing dnsmasq and setting your resolver to 127.0.0.1. Basically this uses your local DNS as a resolver, but it only resolves what it knows about (among those things is your /etc/hosts) and forwards the rest to your default DNS.
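With dnsmasq listening locally, the nginx side reduces to:

    resolver 127.0.0.1;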
NGINX
8,305,015
39
I've tried so many different things. The point I'm at right now is this: location ^~ /wordpress { alias /var/www/example.com/wordpress; index index.php index.html index.htm; try_files $uri $uri/ /wordpress/index.php; location ~ \.php$ { include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(/wordpress)(/.*)$; fastcgi_param SCRIPT_FILENAME /var/www/example.com/wordpress/index.php; fastcgi_param PATH_INFO $fastcgi_path_info; } } Right now, all resources as far as I can tell (images, etc) are loading correctly. And http://www.example.com/wordpress loads wordpress, but a page that says "page not found". (Wordpress is in use for this though). If I try any post urls I get the same result, "page not found". So I know the problem is that wordpress isn't obtaining the data about the path or something. Another potential problem is that if I run example.com/wp-admin.php then it will still run index.php. What data needs to be passed? What may be going wrong here?
Since the ends of your location and alias paths match, you can just use root. Also, not everything is routed through index.php on WordPress afaik. And unless you know you need path info, you probably don't. I think you want something like: location @wp { rewrite ^/wordpress(.*) /wordpress/index.php?q=$1; } location ^~ /wordpress { root /var/www/example.com; index index.php index.html index.htm; try_files $uri $uri/ @wp; location ~ \.php$ { include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_pass 127.0.0.1:9000; } } or, if you really do need path info (urls look like /wordpress/index.php/foo/bar): location ^~ /wordpress { root /var/www/example.com; index index.php index.html index.htm; try_files $uri $uri/ /wordpress/index.php; location ~ \.php { fastcgi_split_path_info ^(.*\.php)(.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_pass 127.0.0.1:9000; } } EDIT: Updated the first block to strip the initial /wordpress from the URI and pass the remainder as the q param. EDIT2: Named locations are only valid at server level.
NGINX
6,154,879
39
I'm just starting to explore nginx on my Ubuntu 10.04. I installed nginx and I'm able to get the "Welcome to Nginx" page on localhost. However, I'm not able to add a new server_name, even when I make the changes in sites-available/default. I also tried reloading/restarting nginx, but nothing works.
To build on mark's answer, the default configuration file on Debian/Ubuntu distros has an include /etc/nginx/sites-enabled/*; directive, with site configuration files stored in /etc/nginx/sites-available/; a default site is usually included in that dir. For examples beyond the default config, follow the nginx beginner's guide or see wiki.nginx.org for more details. After creating a new configuration in sites-available, create a symbolic link with this command, assuming that your conf file is named "myapp" and nginx is at /etc/nginx (could also be at /usr/local/etc/nginx): ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp By the way, you could always create your conf file directly in sites-enabled, but the recommended way above allows you to "enable and disable" sites on the server very quickly without actually moving/deleting your conf files. P.S.: Don't trust the tutorials: check your configuration! P.P.S.: You can use the command nginx -t to test your site's conf and nginx -s reload to reload the conf.
NGINX
4,891,344
39
I'm trying to understand what makes Nginx so fast, and I have a few questions. As I understand it, Apache either spawns a new process to serve each request OR spawns a new thread to serve each request. Since each new thread shares virtual address space, the memory usage keeps climbing if there are a number of concurrent requests coming in. Nginx solves this by having just one listening process (Master), with a single execution thread AND 2 or 3 (the number is configurable) worker processes. This Master process/thread runs an event loop, effectively waiting for any incoming request. When a request comes in, it gives that request to one of the worker processes. Please correct me if my above understanding is not correct. If the above is correct, then I have a few questions: Isn't the worker process going to spawn multiple threads and run into the same problem as Apache? Or is nginx fast because its event-based architecture uses non-blocking IO underneath it all? Maybe the worker process spawns threads which do only non-blocking IO, is that it? What "exactly" is an "event-based architecture"? Can someone really simplify it, for someone like me to understand? Does it just pertain to non-blocking IO or something else as well? I got a reference to c10k; I am trying to go through it, but I don't think it's about event-based architecture. It seems to be more about non-blocking IO.
Apache uses multiple threads to provide each request with its own thread of execution. This is necessary to avoid blocking when using synchronous I/O. Nginx uses only asynchronous I/O, which makes blocking a non-issue. The only reason nginx uses multiple processes is to make full use of multi-core, multi-CPU and hyper-threading systems. Even with SMP support, the kernel cannot schedule a single thread of execution over multiple CPUs. It requires at least one process or thread per logical CPU. So the difference is, nginx requires only enough worker processes to get the full benefit of SMP, whereas Apache's architecture necessitates creating a new thread (each with its own stack of around ~8MB) per request. Obviously, at high concurrency, Apache will use much more memory and suffer greater overhead from maintaining large numbers of threads.
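That per-CPU sizing is exactly what the worker_processes directive controls; on any reasonably recent nginx you can let it match the core count automatically:

    # one worker per logical CPU
    worker_processes auto;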
NGINX
4,764,731
39
I've set up Nginx as my main web server and have two Mochiweb based servers behind it. Certain requests are reverse-proxied to these two servers. now, I want to access phpmyadmin (located at /var/www/nginx-default/phpMyAdmin) using nginx, but it keeps saying Error 404 not found. Am I missing something obvious here? server { ############### General Settings #################### listen 80; server_name localhost; access_log /home/me/dev/wwwaccess.log; ############## Document Root ####################### location / { root /home/me/dev; index index.html index.htm index.php; } ############## PHPMyAdmin ####################### location /phpmyadmin { root /var/www/nginx-default/phpMyAdmin; index index.html index.htm index.php; } ############## Proxy Settings for FastCGI Server ##### location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /home/me/dev$fastcgi_script_name; include /etc/nginx/fastcgi_params; } ############# Proxy Settings for Mochi1 ############### location /mochi1 { proxy_pass http://127.0.0.1:8000; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 3600; proxy_buffering off; } ############# Proxy Settings for Mochi2 ############### location /mochi2 { proxy_pass http://127.0.0.1:8001; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 3600; proxy_buffering off; } ############# Error redirection pages ################ error_page 500 502 503 504 /50x.html; location = /50x.html { root /home/me/dev; } }
The problem here is that only the "best" location directive gets taken, in this order: location = <path> (exact match) location ^~ <path> (longest match wins) location ~ <path> (first defined match wins) location <path> (longest match wins) Using this ruleset, your /phpmyadmin location directive is beaten out by the regular expression ".php$" location directive, so the former is ignored entirely. Additionally, your php fastcgi directive is hard-wired to your /home/me/dev directory, which means that phpMyAdmin is totally inaccessible. You can use a rewrite to get the correct root for your phpMyAdmin scripts: location ~ \.php$ { set $php_root /home/me/dev; if ($request_uri ~* /phpmyadmin) { set $php_root /var/www/nginx-default/phpMyAdmin; } fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; }
NGINX
1,011,101
39
I have the following API(s): localhost:300/api/customers/ localhost:400/api/customers/:id/billing localhost:500/api/orders I'd like to use NGINX to have them all run under the following location: localhost:443/api/ This seems very difficult because of customers spanning two servers. Here's my failed attempt starting with orders server { listen 443; server_name localhost; location /api/orders { proxy_pass https://localhost:500/api/orders; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; } } server { listen 443; server_name localhost; location /api/customers/$id/billing { proxy_pass https://localhost:400/api/customers/$id/billing; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; } } server { listen 443; server_name localhost; location /api/customers { proxy_pass https://localhost:300/api/customers; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; } } Anything jump out as far as a fix? Thanks!
The three services are being proxied by the same server (as far as nginx is concerned) so must be structured as three location blocks within one server block. See this document for details. If you are just passing the original URI unmodified, you do not need to specify a URI on the proxy_pass statement. server { listen 443; server_name localhost; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; location /api/orders { proxy_pass https://localhost:500; } location /api/customers { proxy_pass https://localhost:400; } location = /api/customers { proxy_pass https://localhost:300; } } If the proxy_set_header statements are identical, they can be specified once in the parent block. The type of location statement required depends on the range of URIs processed by the localhost:300/api/customers/ service. If it is one URI, the = syntax will work. If it is any URI that does not match /api/customers/:id/billing, then you will need to use a regular expression location block. See this document for details. I am not sure that this will work unless you terminate SSL here, that is, configure the reverse proxy as a secure server.
NGINX
39,769,963
38
Are the following two nginx server blocks semantically the same, or is there any difference? Does the JSON-specific configuration in the first example inherit the settings of the "/" location? Does it in the second example? server { location / { # ... location ~* \.json$ { # json-specific settings } } } server { location / { # ... } location ~* \.json$ { # json-specific settings } }
The inheritance of config directives in Nginx is such that directives can only be inherited from contexts higher up the configuration tree and never from contexts on the same level or lower. So, a location block cannot inherit from another location block but a nested location block can inherit from the parent location block. I stressed can because there are a number of different types of directives and the inheritance behaviour is a bit different for each. There are Standard Type Directives which only have one value or set of values attached. These will simply be inherited by contexts lower down the config tree or replaced within that lower context by new values. An example is "index". Array Type Directives which pass multiple separate values in an array. These will simply be inherited by contexts lower down the config tree or replaced within that lower context by new values. Note that you cannot add to the array; changing part of it means replacing it all. An example is "proxy_set_header". So if you define proxy_set_header A and proxy_set_header B at the server level for instance, and then try to define proxy_set_header C in a location context, "A" and "B" would be wiped out (set to default values), as defining "C" has meant replacing the array. Command Type Directives such as "try_files" are generally not inherited at all. So specifically to your question, directives defined in one location block context cannot be inherited by another, as in your second example. Standard and Array type directives defined in the parent location block will be inherited by the nested location block. Command type directives defined in the parent will not be inherited in general.
NGINX
32,104,731
38
I'm trying to serve requests to the /blog subdirectory of a site with PHP code located in a folder outside the document root. Here's my host config: server { server_name local.test.ru; root /home/alex/www/test2; location /blog { alias /home/alex/www/test1; try_files $uri $uri/ /index.php$is_args$args; location ~ \.php$ { fastcgi_split_path_info ^(/blog)(/.*)$; fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; } } } For requests like wget -O - http://local.test.ru/blog/nonExisting I get just the source of the index.php file from the /home/alex/www/test2/ folder. However, this config: server { server_name local.test.ru; root /home/alex/www/test2; location /blog { alias /home/alex/www/test1; try_files $uri $uri/ /blog$is_args$args; index index.php; location ~ \.php$ { fastcgi_split_path_info ^(/blog)(/.*)$; fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; } } } gives me the index.html file from /home/alex/www/test2/. Please give me a clue: why? And how can I force NGINX to process /home/alex/www/test1/index.php instead?
We could not get it to work by specifying root within the location block. The solution for us was to use alias instead. Note that it is necessary to repeat the location's path twice in the try_files directive, and then also in the .php configuration block: server { server_name localhost; root /app/frontend/www; location /backend/ { alias /app/backend/www/; # serve static files direct + allow friendly urls # Note: The seemingly weird syntax is due to a long-standing bug in nginx: https://trac.nginx.org/nginx/ticket/97 try_files $uri $uri/ /backend//backend/index.php?$args; location ~ /backend/.+\.php$ { include fastcgi_params; fastcgi_buffers 256 4k; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param HTTPS $proxied_https; fastcgi_pass phpfiles; } } # / location } Source: nginx/conf.d/app.conf from the debian-php-nginx stack in the docker-stack project
NGINX
20,426,812
38
I am trying to configure nginx to proxy-pass the request to another server, but only if the $request_body variable matches a specific regular expression. My problem is that I don't know how to configure this behaviour exactly. I am currently down to this: server { listen 80 default; server_name test.local; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; if ($request_body ~* ^(.*)\.test) { proxy_pass http://www.google.de; } root /srv/http; } } The problem here is that root always has the upper hand; the request is never proxied. Any idea how I could accomplish this? Thanks in advance
try this: server { listen 80 default; server_name test.local; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $http_host; if ($request_body ~* ^(.*)\.test) { proxy_pass http://www.google.de; break; } root /srv/http; } }
NGINX
7,878,334
38
I have an issue wherein I am building an nginx reverse proxy for directing to multiple microservices at different url paths. The system is entirely docker based and as a result the same environment is used for development and production. This has caused an issue for me when installing SSL as the SSL certs will only be available in production so when I configure NGINX with SSL the development environment no longer works as the ssl certs are not present. Here is the relevant part of my conf file - server { listen 80; listen 443 default_server ssl; server_name atvcap.server.com; ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt; ssl_certificate_key /etc/nginx/certs/atvcap.key; ... } But this throws the following when running my application in development mode - nginx: [emerg] BIO_new_file("/etc/nginx/certs/atvcap_cabundle.crt") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/atvcap_cabundle.crt','r') error:2006D080:BIO routines:BIO_new_file:no such file) Is it possible to only turn on SSL if the "/etc/nginx/certs/atvcap_cabundle.crt" is available? I had tried something like the following - if (-f /etc/nginx/certs/atvcap_cabundle.crt) { ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt; ssl_certificate_key /etc/nginx/certs/atvcap.key; } But that threw the following error - nginx: [emerg] "ssl_certificate" directive is not allowed here in /etc/nginx/conf.d/default.conf:7 Any one have any ideas on how to achieve something like this? Thanks
You can create an additional file ssl.conf and put the ssl config there: ssl_certificate /etc/nginx/certs/atvcap_cabundle.crt; ssl_certificate_key /etc/nginx/certs/atvcap.key; Then include it from the main config: server_name atvcap.server.com; include /somepath/ssl.conf*; Make sure to include the * symbol: with it, the include does not break when the file does not exist in development mode.
NGINX
47,575,376
37
I'm trying to learn how to use docker compose with a simple setup of an nginx container that reroutes requests to a ghost container. I'm using the standard ghost image but have a custom nginx image (that inherits from the standard image). When I run the composition using "docker-compose up" it exits immediately with "docker_nginx_1 exited with code 0". However, when I build and run it manually, it runs fine and I can navigate my browser to the container and view the default nginx page. What am I misunderstanding about my compose file that causes it to behave differently than being custom built? What can I change to get it to stay running? Disclaimer: I am also learning nginx as I go, so learning two things at once may be causing me undue problems. EDIT: The original files were a bit more complex, but I've reduced the issue to simply: If I use the build command for a custom image that does nothing but inherit from the default nginx image, it exits immediately. If I use the default nginx image, it works. These are the now relevant files: Compose file: ghost: expose: - "2368" image: ghost nginx: # image: nginx << If I use this instead of my custom build, it doesn't exit build: ./nginx ports: - "80:80" - "443:443" links: - ghost nginx/Dockerfile: FROM nginx ORIGINAL FILES (with the same compose file as above): nginx/Dockerfile: FROM nginx RUN rm /etc/nginx/nginx.conf COPY conf/nginx.conf /etc/nginx/nginx.conf COPY conf/sites-available/ghost /etc/nginx/sites-available/ghost RUN mkdir /etc/nginx/sites-enabled RUN ln -s /etc/nginx/sites-available/ghost /etc/nginx/sites-enabled/ghost EXPOSE 80 443 # Is this even the right command I have no idea CMD service nginx start nginx/conf/nginx.conf: daemon off; user nginx; # Let nginx figure out the processes I guess worker_processes auto; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } nginx/conf/sites-available/ghost server { listen 80; server_name 127.0.0.1; access_log /var/log/nginx/localhost.log; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header HOST $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://0.0.0.0:2368; proxy_redirect off; } } Running compose-up: plays-MacBook-Pro:docker play$ docker-compose up Creating docker_ghost_1... Creating docker_nginx_1... Attaching to docker_ghost_1, docker_nginx_1 docker_nginx_1 exited with code 0 Gracefully stopping... (press Ctrl+C again to force) Stopping docker_ghost_1... done Running manually: plays-MacBook-Pro:nginx play$ docker build --no-cache -t nginx_custom . 
Sending build context to Docker daemon 8.704 kB Step 0 : FROM nginx ---> 914c82c5a678 Step 1 : RUN rm /etc/nginx/nginx.conf ---> Running in 4ce9de96bb36 ---> 98f97a9da4fc Removing intermediate container 4ce9de96bb36 Step 2 : ADD conf/nginx.conf /etc/nginx/nginx.conf ---> dd3e089208a9 Removing intermediate container 36b9a47e0806 Step 3 : ADD conf/sites-available/ghost /etc/nginx/sites-available/ghost ---> 55fae53e5810 Removing intermediate container a82741d24af4 Step 4 : RUN mkdir /etc/nginx/sites-enabled ---> Running in 7659ead01b7b ---> 406be1c42394 Removing intermediate container 7659ead01b7b Step 5 : RUN ln -s /etc/nginx/sites-available/ghost /etc/nginx/sites-enabled/ghost ---> Running in e9658a08affa ---> 021a84216e8a Removing intermediate container e9658a08affa Step 6 : EXPOSE 80 443 ---> Running in 230e4523794c ---> 23d85e1a04cb Removing intermediate container 230e4523794c Step 7 : CMD service nginx start ---> Running in 209e129cae21 ---> d7004d6fa223 Removing intermediate container 209e129cae21 Successfully built d7004d6fa223 plays-MacBook-Pro:nginx play$ docker run -t nginx_custom [It sits here on an empty line, running in the background]
The CMD in your Dockerfile should start a process which runs in the foreground. The command service nginx start runs the process in daemon mode, so your container exits cleanly because the service command itself exits. Use the following CMD ["nginx", "-g", "daemon off;"] to start nginx (taken from the official image) and it should work correctly.
NGINX
33,724,125
37
UPDATE: See the answer I've provided below for the solution I eventually got set up on AWS. I'm currently experimenting with methods to implement a global load-balancing layer for my app servers on Digital Ocean and there's a few pieces I've yet to put together. The Goal Offer highly-available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore. Additionally, I would eventually like to automate the maintenance of this by writing a daemon that can monitor, scale, and heal any of the servers on the system. Or I'll combine various services to achieve the same automation goals. First I need to figure out how to do it manually. The Stack Ubuntu 14.04 Nginx 1.4.6 node.js MongoDB from Compose.io (formerly MongoHQ) Global Domain Breakdown Once I rig everything up, my domain would look something like this: **GLOBAL** global-balancing-1.myapp.com global-balancing-2.myapp.com global-balancing-3.myapp.com **NYC** nyc-load-balancing-1.myapp.com nyc-load-balancing-2.myapp.com nyc-load-balancing-3.myapp.com nyc-app-1.myapp.com nyc-app-2.myapp.com nyc-app-3.myapp.com nyc-api-1.myapp.com nyc-api-2.myapp.com nyc-api-3.myapp.com **SFO** sfo-load-balancing-1.myapp.com sfo-load-balancing-2.myapp.com sfo-load-balancing-3.myapp.com sfo-app-1.myapp.com sfo-app-2.myapp.com sfo-app-3.myapp.com sfo-api-1.myapp.com sfo-api-2.myapp.com sfo-api-3.myapp.com **LON** lon-load-balancing-1.myapp.com lon-load-balancing-2.myapp.com lon-load-balancing-3.myapp.com lon-app-1.myapp.com lon-app-2.myapp.com lon-app-3.myapp.com lon-api-1.myapp.com lon-api-2.myapp.com lon-api-3.myapp.com And then if there's any strain on any given layer, in any given region, I can just spin up a new droplet to help out: nyc-app-4.myapp.com, lon-load-balancing-5.myapp.com, etc… Current Working Methodology A (minimum) trio of global-balancing servers receive all traffic. These servers are "DNS Round-Robin" balanced as illustrated in this (frankly confusing) article: How To Configure DNS Round-Robin Load Balancing. Using the Nginx GeoIP Module and MaxMind GeoIP Data the origin of any given request is determined down to the $geoip_city_continent_code. The global-balancing layer then routes the request to the least connected server on the load-balancing layer of the appropriate cluster: nyc-load-balancing-1, sfo-load-balancing-3, lon-load-balancing-2, etc.. This layer is also a (minimum) trio of droplets. The regional load-balancing layer then routes the request to the least connected server in the app or api layer: nyc-app-2, sfo-api-1, lon-api-3, etc… The details of the Nginx kung fu can be found in this tutorial: Villiage Idiot: Setting up Nginx with GSLB/Reverse Proxy on AWS. More general info about Nginx load-balancing is available here and here. Questions Where do I put the global-balancing servers? It strikes me as odd that I would put them either all in one place, or spread that layer out around the globe either. Say, for instance, I put them all in NYC. Then someone from France hits my domain. The request would go from France, to NYC, and then be routed back to LON. Or if I put one of each in SFO, NYC, and LON then isn't it still possible that a user from Toronto (Parkdale, represent) could send a request that ends up going to LON only to be routed back to NYC? Do subsequent requests get routed to the same IP? 
As in, if a user from Toronto sends a request that the global-balancing layer determines should be going to NYC, does the next request from that origin go directly to NYC, or is it still luck of the draw that it will hit the nearest global-balancing server (NYC in this case). What about sessions? I've configured Nginx to use the ip_hash; directive so it will direct the user to the same app or api endpoint (a node process, in my case) but how will global balancing affect this, if at all? Any DNS Examples? I'm not exactly a DNS expert (I'm currently trying to figure out why my CNAME records aren't resolving) but I'm a quick study when provided with a solid example. Has anyone gone through this process before and can provide a sample of what the DNS records look like for a successful setup? What about SSL/TLS? Would I need a certificate for every server, or just for the three global-balancing servers since that's the only public-facing gateway? If you read this whole thing then reward yourself with a cupcake. Thanks in advance for any help.
The Goal: Offer highly-available service to my users by routing all connections to the closest 'cluster' of servers in SFO, NYC, LON, and eventually Singapore. The global-balancing layer then routes the request to theleast connected server... If I'm reading your configuration correctly, you're actually proxying from your global balancers to the balancers at each region. This does not meet your goal of routing users to the nearest region. There are three ways that I know of to get what you're looking for: 30x Redirect Your global balancers receive the HTTP request and then redirect it to a server group in or near the region it thinks the request is coming from, based on IP address. This sounds like what you were trying to set up. This method has side effects for some applications, and also increases the time it takes for a user to get data since you're adding a ton of overhead. This only makes sense if the resources you're redirecting to are very large, and the local regional cluster will be able to serve much more efficiently. Anycast (taking advantage of BGP routing) This is what the big players like Akamai use for their CDN. Basically, there are multiple servers out on the internet with the exact same routable IP address. Suppose I have servers in several regions, and they have the IP address of 192.0.2.1. If I'm in the US and try to connect to 192.0.2.1, and someone is in Europe that tries to connect to 192.0.2.1, it's likely that we'll be routed to the nearest server. This uses the internet's own routing to find the best path (based on network conditions) for the traffic. Unfortunately, you can't just use this method. You need your own AS number, and physical hardware. If you find a VPS provider that lets you have a chunk of their Anycast block, let me know! Geo-DNS There are some DNS providers that provide a service often marketed as "Geo-DNS". They have a bunch of DNS servers hosted on anycast addresses which can route traffic to your nearest servers. If a client queries a European DNS server, it should return the address for your European region servers, vs. some in other regions. There are many variations on the Geo DNS services. Others simply maintain a geo-IP database and return the server for the region they think is closer, just like the redirect method but for DNS before the HTTP request is ever made. This is usually the good option, for price and ease of use. Do subsequent requests get routed to the same IP? Many load balancers have a "stickiness" option that says requests from the same network address should be routed to the same end server (provided that end server is still up and running). What about sessions? This is exactly why you would want that stickiness. When it comes to session data, you are going to have to find a way to keep all your servers up-to-date. Realistically, this isn't always guaranteed. How you handle it depends on your application. Can you keep a Redis instance or whatever out there for all your servers to reliably hit from around the world? Do you really need that session data in every region? Or can you have your main application servers dealing with session data in one location? Any DNS Examples? Post separate questions for these. Everyone's "successful setup" looks differently. What about SSL/TLS? If you're proxying data, only your global balancers need to handle HTTPS. If you're redirecting, then all the servers need to handle it.
NGINX
25,678,049
37
I want to deploy a Django site (it is the open-source edX code on GitHub). I am faced with choosing between: Apache with mod_wsgi, or nginx with gunicorn. I have used Apache with mod_wsgi and it's cool enough, but I have no experience with the second option. Which of these would be the better option in terms of speed and, to some extent, ease of use? NB: I would need to run two different Django sites on, say, ports 80 and 81, and access them from two different subdomains.
Nginx is a really light and easy-to-use solution, and along with gunicorn it allows us to run any WSGI application and scale it easily. Unlike Apache, nginx does not spawn a new process for every request, so it is better at handling many of them. I have written an answer on how to deploy Django with nginx for a related question: Deploying Django project with Gunicorn and nginx
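A minimal sketch of that pairing, assuming gunicorn was started with something like gunicorn myproject.wsgi:application --bind unix:/run/gunicorn.sock (socket path, project and server names are placeholders):

    upstream django {
        server unix:/run/gunicorn.sock;
    }

    server {
        listen 80;
        server_name site1.example.com;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://django;
        }
    }

A second site would simply be another upstream/server pair with its own socket and server_name.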
NGINX
18,048,318
37
I am trying to enable gzip compression for components of my website. I use Ubuntu 11.04 server and Nginx 1.2. In my Nginx configuration of the website, I have this: gzip on; #gzip_min_length 1000; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/html text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml; #it was gzip_buffers 16 8k; gzip_buffers 128 4k; #my pagesize is 4 gzip_disable "MSIE [1-6]\.(?!.*SV1)"; YSlow and Google's speed tools advise me to use gzip to reduce transmission over the network. Now when I try to curl -I my JS file, I get: curl -I http://www.albawaba.com/sites/default/files/js/js_367664096ca6baf65052749f685cac7b.js HTTP/1.1 200 OK Server: nginx/1.2.0 Date: Sun, 14 Apr 2013 13:15:43 GMT Content-Type: application/x-javascript Content-Length: 208463 Connection: keep-alive Last-Modified: Sun, 14 Apr 2013 10:58:06 GMT Vary: Accept-Encoding Expires: Thu, 31 Dec 2037 23:55:55 GMT Cache-Control: max-age=315360000 Pragma: public Cache-Control: public Accept-Ranges: bytes Any idea what I have done wrong, or what I should do to get compressed content?
As others have written, it's not enough to enable gzip compression in your server -- the client also needs to ask for it in its requests via the Accept-Encoding: gzip header (or a superset thereof). Modern browsers include this header automatically, but for curl you'll need to include one of the following in your command: -H "Accept-Encoding: gzip" : You should see the Content-Encoding: gzip header in the response (might need to output headers with curl's -v flag), as well as some seemingly garbled output for the content, the actual gzip stream. --compressed : You should still see Content-Encoding: gzip in the response headers, but curl knows to decompress the content before outputting it.
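For example, against the URL from the question:

    curl -I --compressed http://www.albawaba.com/sites/default/files/js/js_367664096ca6baf65052749f685cac7b.js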
NGINX
15,999,606
37
Original: I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace back the problem. Since it was working before, I thought it may have been a software update that broke something. I am using python 2.7 with django runfcgi with nginx. Here is my original code: views.py DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" db = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) cursor = db.cursor() def list(request): statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() I have tried the following, but it still does not work: views.py class DB: conn = None DBNAME = "test" DBIP = "localhost" DBUSER = "django" DBPASS = "password" def connect(self): self.conn = MySQLdb.connect(DBIP,DBUSER,DBPASS,DBNAME) def cursor(self): try: return self.conn.cursor() except (AttributeError, MySQLdb.OperationalError): self.connect() return self.conn.cursor() db = DB() cursor = db.cursor() def list(request): cursor = db.cursor() statement = "SELECT item from table where selected = 1" cursor.execute(statement) results = cursor.fetchall() Currently, my only workaround is to do MySQLdb.connect() in each function that uses mysql. Also I noticed that when using django's manage.py runserver, I would not have this problem while nginx would throw these errors. I doubt that I am timing out with the connection because list() is being called within seconds of starting the server up. Were there any updates to the software I am using that would cause this to break/is there any fix for this? Edit: I realized that I recently wrote a piece of middle-ware to daemonize a function and this was the cause of the problem. However, I cannot figure out why. Here is the code for the middle-ware def process_request_handler(sender, **kwargs): t = threading.Thread(target=dispatch.execute, args=[kwargs['nodes'],kwargs['callback']], kwargs={}) t.setDaemon(True) t.start() return process_request.connect(process_request_handler)
Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max_allowed_packet setting in MySQL. The default value is 1048576.

To see the current value, run the following SQL:

select @@max_allowed_packet;

To temporarily set a new value, run the following SQL:

set global max_allowed_packet=10485760;

To fix the problem more permanently, create a /etc/my.cnf file with at least the following:

[mysqld]
max_allowed_packet = 16M

After editing /etc/my.cnf, you'll need to restart MySQL (or restart your machine if you don't know how).
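If raising the limit isn't an option right away, the application can at least detect this condition. Here is a minimal, hypothetical sketch using MySQLdb (the connection parameters are placeholders) that catches error 2006 and reconnects once before giving up:

import MySQLdb

def run_query(statement):
    """Run a query, reconnecting once if the server has 'gone away' (error 2006)."""
    global db
    for attempt in range(2):
        try:
            cursor = db.cursor()
            cursor.execute(statement)
            return cursor.fetchall()
        except MySQLdb.OperationalError as exc:
            if attempt == 0 and exc.args[0] == 2006:
                # Connection was dropped (e.g., packet too large, timeout);
                # open a fresh one and retry once.
                db = MySQLdb.connect("localhost", "django", "password", "test")
            else:
                raise

db = MySQLdb.connect("localhost", "django", "password", "test")
rows = run_query("SELECT @@max_allowed_packet")
print("server max_allowed_packet:", rows[0][0])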
NGINX
14,163,429
37
I'm migrating a website from a server running the Apache web server to another server running Nginx, and I want to convert the .htaccess files. The problem is not just the syntax but also the file name: is it also ".htaccess", or something else?
Here's a tool I use: http://www.anilcetin.com/convert-apache-htaccess-to-nginx/. It is not 100% accurate, but it's a pretty good base.

Also, here's a link about converting the rules: http://nginx.org/en/docs/http/converting_rewrite_rules.html

This one can help a little too: http://wiki.nginx.org/HttpRewriteModule#rewrite

EDIT: The file name should be nginx.conf
NGINX
8,711,678
37
I have an nginx server and can't seem to find any information on how to send Vary: Accept-Encoding headers for CSS and JS files. Does anyone have info about this? Thanks!
This is from the nginx documentation:

gzip_vary
syntax: gzip_vary on|off
default: gzip_vary off
context: http, server, location

Enables the "Vary: Accept-Encoding" response header. Note that this header causes IE 4-6 not to cache the content due to a bug (see 2).

So if you just add gzip_vary on; it should do its job. Also make sure that at least one of the gzip, gzip_static, or gunzip directives is active.
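To confirm the directive took effect, you can fetch a CSS or JS asset and inspect the response headers. A small Python sketch (the URL is a placeholder) might look like this:

import urllib.request

# Placeholder URL -- point this at one of your CSS/JS assets.
url = "http://example.com/static/style.css"

req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    vary = resp.headers.get("Vary")
    encoding = resp.headers.get("Content-Encoding")

# With gzip_vary on, caches are told the body differs per Accept-Encoding.
print("Vary:", vary)                  # expect "Accept-Encoding"
print("Content-Encoding:", encoding)  # expect "gzip" for compressible types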
NGINX
6,637,678
37
I've created a database, for example 'mydb':

CREATE DATABASE mydb CHARACTER SET utf8 COLLATE utf8_bin;
CREATE USER 'myuser'@'%' IDENTIFIED BY PASSWORD '*HASH';
GRANT ALL ON mydb.* TO 'myuser'@'%';
GRANT ALL ON mydb TO 'myuser'@'%';
GRANT CREATE ON mydb TO 'myuser'@'%';
FLUSH PRIVILEGES;

Now I can log in to the database from everywhere, but I can't create tables. How do I grant all privileges on that database and (in the future) its tables? When I try to create a table in the 'mydb' database, I always get:

CREATE TABLE t (c CHAR(20) CHARACTER SET utf8 COLLATE utf8_bin);
ERROR 1142 (42000): CREATE command denied to user 'myuser'@'...' for table 't'
GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'@'%' WITH GRANT OPTION;

This is how I create my "Super User" privileges (although I would normally specify a host).

IMPORTANT NOTE

While this answer can solve the problem of access, WITH GRANT OPTION creates a MySQL user that can edit the permissions of other users.

The GRANT OPTION privilege enables you to give to other users or remove from other users those privileges that you yourself possess.

For security reasons, you should not use this type of user account for any process that the public will have access to (i.e., a website). It is recommended that you create a user with only database privileges for that kind of use.
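Following the security note above, a web application usually only needs data-level privileges. As an illustration only (the credentials and the exact grant list are placeholder assumptions, executed here with MySQLdb), creating such a restricted account could look like:

import MySQLdb

# Connect as an administrative user (placeholder credentials).
admin = MySQLdb.connect("localhost", "root", "rootpass")
cur = admin.cursor()

# A least-privilege account for a web app: data access only,
# no GRANT OPTION, no DDL beyond what the app truly needs.
cur.execute("CREATE USER 'webapp'@'localhost' IDENTIFIED BY 'app-password'")
cur.execute(
    "GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'webapp'@'localhost'"
)
cur.execute("FLUSH PRIVILEGES")
admin.close()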
MariaDB
5,016,505
718
I'm importing a MySQL dump and getting the following error:

$ mysql foo < foo.sql
ERROR 1153 (08S01) at line 96: Got a packet bigger than 'max_allowed_packet' bytes

Apparently there are attachments in the database, which makes for very large inserts. This is on my local machine, a Mac with MySQL 5 installed from the MySQL package. Where do I change max_allowed_packet to be able to import the dump? Is there anything else I should set? Just running mysql --max_allowed_packet=32M … resulted in the same error.
You probably have to change it for both the client (which you are running to do the import) AND the daemon mysqld that is running and accepting the import.

For the client, you can specify it on the command line:

mysql --max_allowed_packet=100M -u root -p database < dump.sql

Also, change the my.cnf or my.ini file (usually found in /etc/mysql/) under the mysqld section and set:

max_allowed_packet=100M

Or you could run these commands in a MySQL console connected to that same server:

set global net_buffer_length=1000000;
set global max_allowed_packet=1000000000;

(Use a very large value for the packet size.)
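If you drive imports from a script rather than the mysql client, the same check-and-raise can be done programmatically. A hedged sketch with MySQLdb (the credentials and the 100M figure are placeholders; SET GLOBAL requires sufficient privileges):

import MySQLdb

TARGET = 100 * 1024 * 1024  # 100M, matching the example above

conn = MySQLdb.connect("localhost", "root", "rootpass")
cur = conn.cursor()

cur.execute("SELECT @@global.max_allowed_packet")
(current,) = cur.fetchone()
print("current max_allowed_packet:", current)

if current < TARGET:
    # Takes effect for NEW connections only, so reconnect afterwards.
    cur.execute("SET GLOBAL max_allowed_packet = %s", (TARGET,))
    conn.close()
    conn = MySQLdb.connect("localhost", "root", "rootpass")
    print("raised to", TARGET, "-- remember to persist it in my.cnf")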
MariaDB
93,128
543
I am trying to optimize one part of my code that inserts data into MySQL. Should I chain INSERTs to make one huge multiple-row INSERT or are multiple separate INSERTs faster?
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html

The time required for inserting a row is determined by the following factors, where the numbers indicate approximate proportions:

Connecting: (3)
Sending query to server: (2)
Parsing query: (2)
Inserting row: (1 × size of row)
Inserting indexes: (1 × number of indexes)
Closing: (1)

From this it is clear that sending one large statement will save you an overhead of 7 per insert statement, which brings us to the documentation, which states:

If you are inserting many rows from the same client at the same time, use INSERT statements with multiple VALUES lists to insert several rows at a time. This is considerably faster (many times faster in some cases) than using separate single-row INSERT statements.
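To see the difference in practice, here is a small, hypothetical benchmark sketch with MySQLdb (the table name and credentials are placeholders). For this driver, executemany() on an INSERT ... VALUES statement batches the rows into one multi-row statement:

import time
import MySQLdb

conn = MySQLdb.connect("localhost", "myuser", "password", "mydb")
cur = conn.cursor()
rows = [(i, "name-%d" % i) for i in range(10000)]

# One INSERT per row: pays the send/parse overhead 10,000 times.
start = time.time()
for row in rows:
    cur.execute("INSERT INTO items (id, name) VALUES (%s, %s)", row)
conn.commit()
print("single-row inserts: %.2fs" % (time.time() - start))

# One multi-row INSERT: the driver rewrites this into a single
# statement with many VALUES lists.
start = time.time()
cur.executemany("INSERT INTO items (id, name) VALUES (%s, %s)", rows)
conn.commit()
print("multi-row insert:   %.2fs" % (time.time() - start))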
MariaDB
1,793,169
230
I have the query below and need to cast id to varchar.

Schema:

create table t9 (id int, name varchar (55));
insert into t9 (id, name) values (2, 'bob');

What I tried:

select CAST(id as VARCHAR(50)) as col1 from t9;
select CONVERT(VARCHAR(50), id) as colI1 from t9;

But they don't work. Please suggest.
You will need to cast or convert as a CHAR datatype; there is no varchar datatype that you can cast/convert data to:

select CAST(id as CHAR(50)) as col1 from t9;
select CONVERT(id, CHAR(50)) as colI1 from t9;

See the following SQL in action over at SQL Fiddle:

/*! Build Schema */
create table t9 (id INT, name VARCHAR(55));
insert into t9 (id, name) values (2, 'bob');

/*! SQL Queries */
select CAST(id as CHAR(50)) as col1 from t9;
select CONVERT(id, CHAR(50)) as colI1 from t9;

Besides the fact that you were trying to convert to an incorrect datatype, the syntax that you were using for convert was incorrect. The convert function uses the following, where expr is your column or value:

CONVERT(expr, type)

or

CONVERT(expr USING transcoding_name)

Your original query had the syntax backwards.
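A quick way to confirm the cast works is to run it from a client and check what comes back. A minimal sketch with MySQLdb (assuming the t9 table from the question exists; credentials are placeholders):

import MySQLdb

conn = MySQLdb.connect("localhost", "myuser", "password", "mydb")
cur = conn.cursor()

# CHAR(50) is the cast target MySQL/MariaDB accept; VARCHAR is not.
cur.execute("SELECT CAST(id AS CHAR(50)) FROM t9")
(value,) = cur.fetchone()

# The integer column now arrives as a string value.
print(repr(value))  # e.g. '2' (a str/bytes, not an int)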
MariaDB
15,368,753
161
I tried to use UTF-8 and ran into trouble. I have tried so many things; here are the results I have gotten:

???? instead of Asian characters. Even for European text, I got Se?or for Señor.
Strange gibberish (Mojibake?), such as SeÃ±or for Señor, or æ–°æµªæ–°é—» for 新浪新闻.
Black diamonds, such as Se�or.
Finally, I got into a situation where the data was lost, or at least truncated: Se for Señor.

Even when I got the text to look right, it did not sort correctly. What am I doing wrong? How can I fix the code? Can I recover the data, and if so, how?
This problem plagues the participants of this site, and many others. You have listed the five main cases of CHARACTER SET troubles.

Best Practice

Going forward, it is best to use CHARACTER SET utf8mb4 and COLLATION utf8mb4_unicode_520_ci. (There is a newer version of the Unicode collation in the pipeline.)

utf8mb4 is a superset of utf8 in that it handles 4-byte utf8 codes, which are needed by Emoji and some of Chinese. Outside of MySQL, "UTF-8" refers to all size encodings, hence effectively the same as MySQL's utf8mb4, not utf8. I will try to use those spellings and capitalizations to distinguish inside versus outside MySQL in the following.

Overview of what you should do

Have your editor, etc. set to UTF-8.
HTML forms should start like <form accept-charset="UTF-8">.
Have your bytes encoded as UTF-8.
Establish UTF-8 as the encoding being used in the client.
Have the column/table declared CHARACTER SET utf8mb4 (check with SHOW CREATE TABLE).
Put <meta charset=UTF-8> at the beginning of the HTML.
Stored routines acquire the current charset/collation; they may need rebuilding.
UTF-8 all the way through.
More details for computer languages (and its following sections).

Test the data

Viewing the data with a tool or with SELECT cannot be trusted. Too many such clients, especially browsers, try to compensate for incorrect encodings and show you correct text even if the database is mangled. So, pick a table and column that has some non-English text and do:

SELECT col, HEX(col) FROM tbl WHERE ...

The HEX for correctly stored UTF-8 will be:

For a blank space (in any language): 20
For English: 4x, 5x, 6x, or 7x
For most of Western Europe, accented letters: Cxyy
Cyrillic, Hebrew, and Farsi/Arabic: Dxyy
Most of Asia: Exyyzz
Emoji and some of Chinese: F0yyzzww

More details

Specific causes and fixes of the problems seen

Truncated text (Se for Señor):
The bytes to be stored are not encoded as utf8mb4. Fix this.
Also, check that the connection during reading is UTF-8.

Black diamonds with question marks (Se�or for Señor); one of these cases exists:

Case 1 (original bytes were not UTF-8):
The bytes to be stored are not encoded as utf8. Fix this.
The connection (or SET NAMES) for the INSERT and the SELECT was not utf8/utf8mb4. Fix this.
Also, check that the column in the database is CHARACTER SET utf8 (or utf8mb4).

Case 2 (original bytes were UTF-8):
The connection (or SET NAMES) for the SELECT was not utf8/utf8mb4. Fix this.
Also, check that the column in the database is CHARACTER SET utf8 (or utf8mb4).

Black diamonds occur only when the browser is set to <meta charset=UTF-8>.

Question marks (regular ones, not black diamonds) (Se?or for Señor):
The bytes to be stored are not encoded as utf8/utf8mb4. Fix this.
The column in the database is not CHARACTER SET utf8 (or utf8mb4). Fix this. (Use SHOW CREATE TABLE.)
Also, check that the connection during reading is UTF-8.

Mojibake (SeÃ±or for Señor): (This discussion also applies to Double Encoding, which is not necessarily visible.)
The bytes to be stored need to be UTF-8-encoded. Fix this.
The connection when INSERTing and SELECTing text needs to specify utf8 or utf8mb4. Fix this.
The column needs to be declared CHARACTER SET utf8 (or utf8mb4). Fix this.
HTML should start with <meta charset=UTF-8>.

If the data looks correct but won't sort correctly, then either you have picked the wrong collation, or there is no collation that suits your need, or you have Double Encoding. Double Encoding can be confirmed by doing the SELECT .. HEX .. described above.
é should come back C3A9, but instead shows C383C2A9.
The Emoji 👽 should come back F09F91BD, but comes back C3B0C5B8E28098C2BD.

That is, the hex is about twice as long as it should be. This is caused by converting from latin1 (or whatever) to utf8, then treating those bytes as if they were latin1 and repeating the conversion. The sorting (and comparing) does not work correctly because it is, for example, sorting as if the string were SeÃ±or.

Fixing the data, where possible

For Truncation and Question Marks, the data is lost.
For Mojibake / Double Encoding, ...
For Black Diamonds, ...

The fixes are listed here: 5 different fixes for 5 different situations; pick carefully.

Related: Illegal mix of collations
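The HEX test above is easy to automate from application code. A minimal Python sketch (the credentials, table, and column are placeholders) that connects with the utf8mb4 charset and dumps the stored bytes:

import MySQLdb

# charset="utf8mb4" makes the connection itself UTF-8, which is half
# of the checklist above; the column declaration is the other half.
conn = MySQLdb.connect("localhost", "myuser", "password", "mydb",
                       charset="utf8mb4")
cur = conn.cursor()

# Placeholder table/column holding some non-English text.
cur.execute("SELECT name, HEX(name) FROM people LIMIT 5")
for text, hexbytes in cur.fetchall():
    # Correct UTF-8: 'ñ' shows as C3B1; Mojibake doubles it to C383C2B1.
    print(text, "->", hexbytes)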
MariaDB
38,363,566
114
I created a new database but can't create a new user account due to this error. Does anyone know how to fix this? I can't find any solution for it.

1030 - Got error 176 "Read page with wrong checksum" from storage engine Aria
In my case the above solution did not work, but the fix is similar to the one suggested by @user13439511. Follow the steps below:

Select the "mysql" database from the list of databases.
Select all tables inside the "mysql" database.
Scroll down and select the "Repair table" option in the combobox.
Click on the Go button.
Done.
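The same repair can be scripted instead of clicking through phpMyAdmin. A hedged sketch with MySQLdb (admin credentials are placeholders) that issues REPAIR TABLE against every table in the mysql system database, mirroring the steps above:

import MySQLdb

conn = MySQLdb.connect("localhost", "root", "rootpass", "mysql")
cur = conn.cursor()

# Find every table in the mysql system database...
cur.execute("SHOW TABLES")
tables = [row[0] for row in cur.fetchall()]

# ...and repair each one; engines that don't support REPAIR
# simply report a note in the result rows.
for table in tables:
    cur.execute("REPAIR TABLE `%s`" % table)
    for row in cur.fetchall():
        print(row)  # (table, op, msg_type, msg_text)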
MariaDB
60,864,367
109