Columns: question · answer · tag · question_id · score
I'm trying to delete by a secondary index or column key in a table. I'm not concerned with performance as this will be an unusual query. Not sure if it's possible? E.g.:

```cql
CREATE TABLE user_range (
    id int,
    name text,
    end int,
    start int,
    PRIMARY KEY (id, name)
)
```

```
cqlsh> select * from dat.user_range where id=774516966;

 id        | name      | end | start
-----------+-----------+-----+-------
 774516966 | 0 - 499   | 499 |     0
 774516966 | 500 - 999 | 999 |   500
```

I can:

```
cqlsh> select * from dat.user_range where name='1000 - 1999' allow filtering;

 id          | name        | end  | start
-------------+-------------+------+-------
  -285617516 | 1000 - 1999 | 1999 |  1000
  -175835205 | 1000 - 1999 | 1999 |  1000
 -1314399347 | 1000 - 1999 | 1999 |  1000
 -1618174196 | 1000 - 1999 | 1999 |  1000
```

Blah blah…

But I can't delete:

```
cqlsh> delete from dat.user_range where name='1000 - 1999' allow filtering;
Bad Request: line 1:52 missing EOF at 'allow'
cqlsh> delete from dat.user_range where name='1000 - 1999';
Bad Request: Missing mandatory PRIMARY KEY part id
```

Even if I create an index:

```
cqlsh> create index on dat.user_range (start);
cqlsh> delete from dat.user_range where start=1000;
Bad Request: Non PRIMARY KEY start found in where clause
```

Is it possible to delete without first knowing the primary key?
No, deleting by using a secondary index is not supported: CASSANDRA-5527
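A workaround implied by the question itself is a two-step delete: use the ALLOW FILTERING query (which SELECT permits) to fetch the full primary keys, then delete each row by primary key. A minimal sketch against the table above:

```cql
-- step 1: find the full primary keys of the matching rows
SELECT id, name FROM dat.user_range WHERE name = '1000 - 1999' ALLOW FILTERING;

-- step 2: delete each returned (id, name) pair by its primary key
DELETE FROM dat.user_range WHERE id = -285617516 AND name = '1000 - 1999';
```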
Cassandra
18,515,874
16
I know that there are TTLs on columns in Cassandra. But is it also possible to set a TTL on a row? Setting a TTL on each column doesn't solve my problem, as can be seen in the following use case:

At some point a process wants to delete a complete row with a TTL (let's say row "A" with a TTL of 1 week). It could do this by replacing all existing columns with the same content but with a TTL of 1 week. But there may be another process running concurrently on that row "A" which inserts new columns or replaces existing ones without a TTL, because that process can't know that the row is to be deleted (it runs concurrently!). So after 1 week all columns of row "A" will be deleted because of the TTL, except for these newly inserted ones. And I also want them to be deleted.

So is there, or will there be, Cassandra support for this use case, or do I have to implement something on my own?

Kind regards,
Stefan
There is no way of setting a TTL on a row in Cassandra currently. TTLs are designed for deleting individual columns when their lifetime is known at the time they are written.

You could achieve what you want by delaying your process: instead of inserting with a TTL of 1 week, run the process a week later and delete the row. Row deletes have the following semantics: any column inserted just before the delete will be removed, but columns inserted just after won't be.

If columns that are inserted in the future still need to be deleted, you could insert a row delete with a timestamp in the future to ensure this. But be very careful: if you later wanted to insert into that row, you couldn't; columns would just disappear when written to that row (until the tombstone is garbage collected).
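For illustration, here is what such a future-dated row delete looks like in CQL; the keyspace, table and key are hypothetical, and the timestamp is in microseconds since the epoch (Cassandra's default write-time unit):

```cql
-- a row delete whose write timestamp lies in the future: columns written
-- to this row with ordinary (current) timestamps will be shadowed by the
-- tombstone until it is garbage collected
DELETE FROM mykeyspace.mytable
USING TIMESTAMP 1893456000000000
WHERE id = 'A';
```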
Cassandra
16,544,051
16
Are there any distinct advantages to using CQL over Thrift, or is it simply a case of developers being too used to SQL? I want to switch from Thrift querying to CQL; the only problem is I'm not sure about the downsides of doing so. What are they?
Lyuben's answer is a good one, but I believe he may be misinformed on a few points.

First, you should be aware that the Thrift API is not going to be getting new features; it's there for backwards compatibility, and not recommended for new projects. There are already some features that cannot be used through the Thrift interface.

Another factor is that the quoted benchmarks from Acunu are misleading; they don't measure the performance of CQL with prepared statements. See, for example, the graphs at https://issues.apache.org/jira/browse/CASSANDRA-3634 (probably the same data set on which the Acunu post is based, since Eric Evans wrote both). There have also been some improvements to CQL parsing and execution speed in the last year. It is not likely that you will observe any real speed difference between CQL 3 and Thrift.

Finally, I don't think I even agree that Thrift is more flexible. The CQL 3 data model allows using the same data structures that Thrift does for nearly all usages that are not antipatterns; it just allows you to think about the model in a more organized way. For example, Lyuben mentioned rows with differing numbers of columns. A CQL 3 table may still utilize that capability: there is a difference between "storage engine rows" (Cassandra's low-level storage, and what Thrift uses directly) and "CQL rows" (what you see through the CQL interface). CQL just does the extra work necessary to visualize wide storage engine rows as structured tables. It's a little difficult to explain in a quick SO answer, but see this post for a somewhat gentle explanation.
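To make the prepared-statements point concrete, here is a minimal sketch using the modern Python cassandra-driver; the host, keyspace and table are hypothetical. The statement is parsed once, and each execution ships only the bound values:

```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("mykeyspace")  # hypothetical keyspace

# Prepared once; repeated executions skip re-parsing the query string,
# which is the fast path the Acunu benchmarks left out.
prepared = session.prepare("SELECT * FROM users WHERE user_id = ?")

for user_id in (1, 2, 3):
    row = session.execute(prepared, [user_id]).one()
```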
Cassandra
15,701,263
16
I'm attempting to insert a modified document back into a Cassandra DB with a new key. I'm having a hard time figuring out what issue the error message is pointing at. When looking for others that have had similar problems, the answers seem to be related to the keys, and in my case the None is just the value of a few of the keys. How do I solve this issue?

```python
keys = ','.join(current.keys())
params = [':' + x for x in current.keys()]
values = ','.join(params)
query = "INSERT INTO wiki.pages (%s) Values (%s)" % (keys, values)
query = query.encode('utf-8')
cursor.execute(query, current)
```

Here's the data for query and current:

```
INSERT INTO wiki.pages (changed,content,meta,attachment,revision,page,editor)
VALUES (:changed,:content,:meta,:attachment,:revision,:page,:editor)
```

```python
{
    u'changed': '2013-02-15 16:31:49',
    u'content': 'Testing',
    u'meta': None,
    u'attachment': None,
    u'revision': 2,
    u'page': u'FrontPage',
    u'editor': 'Anonymous'
}
```

This fails with the following error:

```
cql.apivalues.ProgrammingError: Bad Request: line 1:123 no viable alternative at input 'None'
```
The "no viable alternative" means that the data type for some key doesn't match the schema for that column family column, unfortunately it doesn't plainly say that in the error message. In my case the data type for meta was: map<text,text> for this reason None was considered a bad value at insertion time. I fixed the problem by replacing the None with an empty dict prior to insert: if current['meta'] is None: current['meta'] = dict() The CQL driver accepts empty dict fine as new value for a map type, while None is not allowed, even though querying the map column returns None if it is empty. Returning None and not accepting None did not feel intuitive, so later I decided to create custom wrapper for cursor.fetchone() that returns a map of columns instead of a list of columns, and also checks if MapType, ListType or SetType has returned None. If there are None values, it replaces them with empty dict(), list() or set() to avoid issues like the one I had when inserting modified data back to Cassandra. This seems to work nicely.
Cassandra
14,897,599
16
I'm planning to insert data into the CF below, which has compound keys:

```cql
CREATE TABLE event_attend (
    event_id int,
    event_type varchar,
    event_user_id int,
    PRIMARY KEY (event_id, event_type)  -- compound keys...
);
```

But I can't insert data into this CF from Python using cql (http://code.google.com/a/apache-extras.org/p/cassandra-dbapi2/):

```python
import cql

connection = cql.connect(host, port, keyspace)
cursor = connection.cursor()
cursor.execute("INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (1, 'test', 2)", dict({}))
```

I get the following traceback:

```
Traceback (most recent call last):
  File "./v2_initial.py", line 153, in <module>
    db2cass.execute()
  File "./v2_initial.py", line 134, in execute
    cscursor.execute("insert into event_attend (event_id, event_type, event_user_id ) values (1, 'test', 2)", dict({}))
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/cursor.py", line 80, in execute
    response = self.get_response(prepared_q, cl)
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 80, in get_response
    return self.handle_cql_execution_errors(doquery, compressed_q, compress)
  File "/usr/local/pythonbrew/pythons/Python-2.7.2/lib/python2.7/site-packages/cql-1.4.0-py2.7.egg/cql/thrifteries.py", line 98, in handle_cql_execution_errors
    raise cql.ProgrammingError("Bad Request: %s" % ire.why)
cql.apivalues.ProgrammingError: Bad Request: unable to make int from 'event_user_id'
```

What am I doing wrong?
It looks like you are trying to follow the example in http://pypi.python.org/pypi/cql/1.4.0:

```python
import cql
con = cql.connect(host, port, keyspace)
cursor = con.cursor()
cursor.execute("CQL QUERY", dict(kw='Foo', kw2='Bar', kwn='etc...'))
```

However, if you only need to insert one row (like in your question), just drop the empty dict() parameter.

Also, since you are using composite keys, make sure you use CQL3 (http://www.datastax.com/dev/blog/whats-new-in-cql-3-0):

```python
connection = cql.connect('localhost:9160', cql_version='3.0.0')
```

The following code should work (just adapt it to localhost if needed):

```python
import cql

con = cql.connect('172.24.24.24', 9160, keyspace, cql_version='3.0.0')
print("Connected!")
cursor = con.cursor()
CQLString = "INSERT INTO event_attend (event_id, event_type, event_user_id) VALUES (131, 'Party', 3156);"
cursor.execute(CQLString)
```
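If you do want parameter substitution rather than an inline statement, the driver binds :name placeholders from a dict (the same style the question above uses); a sketch against the same table:

```python
params = {"event_id": 131, "event_type": "Party", "event_user_id": 3156}
cursor.execute(
    "INSERT INTO event_attend (event_id, event_type, event_user_id) "
    "VALUES (:event_id, :event_type, :event_user_id)",
    params,
)
```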
Cassandra
13,217,434
16
My team has asked me to choose between Cassandra and Solr for faster responses to front-end queries. I told them that Cassandra is a NoSQL DB while Solr is an indexing thing, but then they say that we can push our complete DB to Solr (i.e., use Solr as a DB), or we can just use Cassandra together with Solr. All confused.

The amount of data we are dealing with is around 1 billion, spread over 4 MySQL tables (fetched using joins), and we only get read queries from the website. We don't need full-text search. I think the one thing in which Solr cannot easily be beaten is its full-text search feature, but we don't need it in our case.

So what else does Solr have which Cassandra cannot provide, and what does Cassandra have such that it can replace Solr in our particular case? In other words, which is going to perform better: Cassandra alone, Solr as a DB alone, or both together? And most importantly, why and why not? It's really important for me to back up my choice with strong points, i.e., why one is better than the other, at my next team meeting. Thanks in advance.

EDIT:
Solandra is not an option because it's not that mature and is no longer maintained, I guess.
DataStax is not an option because the Solr feature is provided only in the Enterprise Edition.
If you don't need Solr's full-text search capabilities, there's very little reason to choose it over Cassandra, in my opinion. (Disclosure: I work for DataStax.)

Operationally, handling a Cassandra cluster will be much simpler due to the Dynamo-based architecture. Sharding Solr can be quite painful, which is one of the big reasons why we at DataStax built search into DSE; it's something that a lot of people want to avoid. I'm not trying to sell you on DSE, just pointing out the downside to Solr. For example, when you want to change the number of shards with Solr, you have to create and build an entirely new index. You have to worry about deadlock with a Solr cluster. There are several other limitations: http://wiki.apache.org/solr/DistributedSearch

You haven't said much about what kind of queries you need to be able to support. Adding that info would get you better answers.
Cassandra
10,184,858
16
I would like to know whether it is possible in Cassandra to specify a unique constraint on a row key, something similar to SQL Server's:

```sql
ADD CONSTRAINT myConstraint UNIQUE (ROW_PK)
```

In the case of an insert with an already existing row key, the existing data should not be overwritten; instead I'd receive some kind of exception or response saying that the update cannot be performed due to a constraint violation. Maybe there is a workaround for this problem: there are counters, whose updates seem to be atomic.
Lightweight transactions? See http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_ltwt_transaction_c.html

```cql
INSERT INTO customer_account (customerID, customer_email)
VALUES ('LauraS', '[email protected]')
IF NOT EXISTS;
```
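One detail worth adding: a conditional insert reports back whether it took effect, so an application can detect exactly the "constraint violation" the question asks for. A sketch of the cqlsh output when the key already exists (when [applied] is false, the existing row's values are also returned alongside it):

```
 [applied]
-----------
     False
```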
Cassandra
8,154,332
16
How do you rename a live Cassandra keyspace through the cassandra-cli? Previous versions had an option in cassandra-cli ("rename keyspace"). However, that option has been dropped in recent releases.
Renaming keyspaces (and column families) is no longer supported, since it was prone to race conditions. See https://issues.apache.org/jira/browse/CASSANDRA-1585.
Cassandra
7,649,104
16
I have found a lot of Cassandra documentation, but which one is the best? And why? I haven't yet found anything really complete and centralized in one good article or documentation set. Or at least a good book? Thanks.
Our (Riptano's) Cassandra documentation is probably the best one-stop resource: http://www.riptano.com/docs A good complement from the ASF wiki is http://wiki.apache.org/cassandra/ArticlesAndPresentations.
Cassandra
4,536,211
16
I think these three are the most popular non-relational DBs out there at the moment. I want to give them a try, but I wonder which of these is most suitable for Rails when it comes to gem, documentation and tutorial support. E.g., if I install a very good Rails gem that only works with AR and MongoDB, then it would be a pity if I hadn't chosen MongoDB.

How many gems support each of these databases? Which one is the most popular and mainstream in the Ruby/Rails community, and thus has more online documentation/tutorials? Which one offers the tightest integration with Rails?
To make an informed selection, you'll really need to know your data model. MongoDB and CouchDB are document-oriented data stores. Cassandra is quite different: it is a bit more special-purpose, and its distributed design is its strength. It's more of a distributed key/value store, but with slicing, timestamp sorting and range queries, with limited data types. If you had a huge amount of data and knew exactly how it needed to be indexed for retrieval, Cassandra might work. Mongo and Couch are better for ad-hoc queries, and, for example, as an AR replacement for a Rails app.

As far as popularity goes, I'd say MongoDB is currently more popular with Rubyists, but in general CouchDB seems to have more mindshare and a lot of momentum. See also http://nosql-database.org/ for more information on the differences.
Cassandra
3,550,306
16
Consider an M:M relation that needs to be represented in a Cassandra data store. What M:M modeling options are available, and for each alternative, when is it to be preferred? What M:M modeling choices have you made in your Cassandra-powered projects?
Instead of using a join table the way you would with an RDBMS, you would have one ColumnFamily containing a row for each X with a list of the Ys associated with it, and then a CF containing a row for each Y with a list of the Xs associated with it. If it turns out you don't really care about querying one of those directions, then only keep the CF that you do care about.
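In CQL 3 terms (the answer above uses Thrift-era vocabulary), the same idea is two tables with mirrored primary keys; the names and types here are purely illustrative:

```cql
CREATE TABLE ys_by_x (
    x_id int,
    y_id int,
    PRIMARY KEY (x_id, y_id)
);

CREATE TABLE xs_by_y (
    y_id int,
    x_id int,
    PRIMARY KEY (y_id, x_id)
);

-- each association is written twice, once per direction:
INSERT INTO ys_by_x (x_id, y_id) VALUES (1, 7);
INSERT INTO xs_by_y (y_id, x_id) VALUES (7, 1);
```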
Cassandra
2,573,106
16
Apache Cassandra version 3.7 is running fine on Ubuntu Server 16.04; all parts of Apache Cassandra started up with no problem. The issue is, I go to connect using cqlsh:

```
$ cqlsh <my IP address> 9160
```

then it says:

```
Connection error: ('Unable to connect to any servers', {'10.0.0.13': TypeError('ref() does not take keyword arguments',)})
```

I have seen there was a bug for it (https://issues.apache.org/jira/browse/CASSANDRA-11850), but it's for cqlsh --version: cqlsh 5.0.1 with cassandra -v: 3.5 (it also occurs with 3.0.6).

Someone commented on my Apache Cassandra ticket (https://issues.apache.org/jira/browse/CASSANDRA-12402) stating:

> Use the workaround described in the ticket: If you have an up-to-date cassandra-driver installed, you can disable the embedded driver by setting the environment variable CQLSH_NO_BUNDLED to any non empty string, for example export CQLSH_NO_BUNDLED=true.

My questions are:

1. How do I disable the up-to-date cassandra-driver? What directory is it in? What file name?
2. If I disable it, will I be able to connect using cqlsh?
3. What tool did you guys use to connect to Apache Cassandra to run commands etc., besides cqlsh directly on the server?
As described in the ticket, define the environment variable CQLSH_NO_BUNDLED and export it:

```bash
export CQLSH_NO_BUNDLED=true
```

This tells cqlsh (which is a Python program) to use the external Cassandra Python driver, not the one bundled with the distribution. The bundled Cassandra driver is located in /opt/datastax-ddc-3.7.0/bin; the file name is cassandra-driver-internal-only-3.0.0-6af642d.zip.

Then run cqlsh, which is located in /opt/datastax-ddc-3.7.0/bin:

```bash
./cqlsh
```

It is possible that you will need to install the Cassandra Python driver (if it is not installed already) using:

```bash
pip install cassandra-driver
```

Note: folder names are for the Datastax Cassandra build.
Cassandra
38,883,435
15
I am new to Cassandra and I am using it for analytics tasks (good indexing needed). I read in this post (and others), "cassandra, select via a non primary key", that I can't query my DB with non-primary-key columns in a WHERE clause. To do so, it seems that there are 3 possibilities (all with major disadvantages):

1. Create a secondary index (not recommended for performance reasons).
2. Create a new table (I don't want redundant data, even if that's OK with Cassandra).
3. Put the column I want to query within the primary key, in which case I need to specify all parts of the primary key in my WHERE clause and I can't use operators other than IN or =.

Is there another way to do what I am trying to do (a WHERE clause with a non-primary-key column) without the 3 constraints above?
From within Cassandra itself, you are limited to the options that you have specified above. If you want to know why, take a look here: "A Deep Look at the CQL Where Clause".

However, if you are trying to run analytics on information stored within Cassandra, have you looked at using Spark? Spark is built for large-scale data processing on distributed systems. In fact, you could look at DataStax (see here), which has some nice integration features between Spark and Cassandra, specifically for loading and saving data. It has both a free (Community) and a paid (Enterprise) edition.
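As a sketch of the Spark route, here is roughly what a query on a non-primary-key column looks like through the Spark Cassandra connector's DataFrame API (PySpark); this assumes the connector package is on Spark's classpath, and the host, keyspace, table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cassandra-analytics")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .getOrCreate())

df = (spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="mykeyspace", table="mytable")
      .load())

# filter on a non-primary-key column; Spark performs the scan, not Cassandra
df.filter(df.start == 1000).show()
```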
Cassandra
35,524,516
15
I am doing a large migration from physical machines to EC2 instances. Right now I have 3 x.large nodes, each with 4 instance-store drives (RAID-0, 1.6 TB). After I set this up, I remembered that "the data on an instance store volume persists only during the life of the associated Amazon EC2 instance; if you stop or terminate an instance, any data on instance store volumes is lost."

What do people usually do in this situation? I am worried that if one of the boxes crashes, all of the data on that box will be lost if it is not 100% replicated on another.

I read here (http://www.hulen.com/?p=326) that these guys use ephemeral drives and "periodically backup the content using the EBS drives and snapshots". In this question, "How do I take a backup of aws ec2 instance/ephemeral storage?", people claim that you cannot back up ephemeral data onto EBS snapshots.

Is my best choice to use a few EBS drives, RAID-0 them together, and take snapshots directly from them? I know this is probably the most expensive solution; however, it seems to make the most sense. Any info would be great. Thank you for your time.
I have been running Cassandra on EC2 for over 2 years. To address your concerns, you need to form a proper availability architecture on EC2 for your Cassandra cluster. Here is a bullet list for you to consider:

- Consider at least 3 zones for setting up your cluster.
- Use NetworkTopologyStrategy with EC2Snitch/EC2MultiRegionSnitch to propagate a replica of your data to each zone; this means that the machines in each zone will have your full data set combined. For example, the strategy_options would be like {us-east:3}.

The above two tips should satisfy basic availability in AWS, and if your queries are sent using LOCAL_QUORUM, your application will be fine even if one zone goes down. If you are concerned about 2 zones going down (I don't recall that happening in AWS over my past 2 years of use), then you can also add another region to your cluster. With the above, if any node dies for any reason, you can restore it from the nodes in other zones. After all, Cassandra was designed to provide you with this kind of availability.

About EBS vs ephemeral: I have always been against using EBS volumes for anything production, because it is one of the worst AWS services in terms of availability. They go down several times a year, and their downtime usually cascades to other AWS services like ELBs and RDS. They are also like network-attached storage, so any read/write has to go over the network. Don't use them. Even DataStax doesn't recommend them: http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.html#cassandra/architecture/../../cassandra/architecture/architecturePlanningEC2_c.html

About backups: I use a solution called Priam (https://github.com/Netflix/Priam), which was written by Netflix. It can take a nightly snapshot of your cluster and copy everything to S3. If you enable incremental_backups, it also uploads incremental backups to S3. In case a node goes down, you can trigger a restore on the specific node using a simple API call. It restores a lot faster and does not put a lot of streaming load on your other nodes. I also added a patch to it which lets you do fancy things like bringing up multiple DCs inside one AWS region. You can read about my setup here: http://aryanet.com/blog/shrinking-the-cassandra-cluster-to-fewer-nodes

Hope the above helps.
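For reference, the replication setting described in the second bullet looks like this in current CQL form (the keyspace name is illustrative):

```cql
CREATE KEYSPACE myapp
  WITH replication = {'class': 'NetworkTopologyStrategy', 'us-east': 3};
```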
Cassandra
21,386,671
15
I use Cassandra DB and the Helenus module for Node.js to work with it. I have some rows which contain TimeUUID columns. How do I get the timestamp from a TimeUUID in JavaScript?
This lib (UUID_to_Date) is very simple and fast: it only uses native string functions. This small JavaScript API converts a UUID v1 to seconds since 1970-01-01, and since the code is simple it can help with writing an equivalent API in any language. All you need is:

```javascript
var get_time_int = function (uuid_str) {
    var uuid_arr = uuid_str.split('-'),
        time_str = [
            uuid_arr[2].substring(1),  // time_hi (version nibble dropped)
            uuid_arr[1],               // time_mid
            uuid_arr[0]                // time_low
        ].join('');
    return parseInt(time_str, 16);
};

var get_date_obj = function (uuid_str) {
    // 122192928000000000 = number of 100ns intervals between the UUID epoch
    // (1582-10-15) and the Unix epoch (1970-01-01)
    var int_time = get_time_int(uuid_str) - 122192928000000000,
        int_millisec = Math.floor(int_time / 10000);
    return new Date(int_millisec);
};
```

Example:

```javascript
var date_obj = get_date_obj('8bf1aeb8-6b5b-11e4-95c0-001dba68c1f2');
date_obj.toLocaleString(); // '11/13/2014, 9:06:06 PM'
```
Cassandra
17,571,100
15
I was just wondering what would be the best way to back up an entire keyspace in Cassandra. What do you think? Previously I just copied the data folder to my backup hard drive, but then I had problems restoring the database after updating.
The best way is by doing snapshots (nodetool snapshot). You can learn a lot about how that works, and how best to use it, in this Datastax documentation (disclaimer: I work for Datastax). You'll want to make sure you have JNA enabled (some relevant instructions can be found on this page). If you do, snapshots are extremely fast: they're just hard links to existing sstables, so no copying needs to be done. You can combine snapshots with other backup tools (or just rsync, cp, etc.) if you want to keep track of your backups in a particular way.
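A minimal sketch of the snapshot flow (the keyspace name and tag are illustrative):

```bash
# take a tagged snapshot of one keyspace
nodetool snapshot -t backup-2014-01-01 mykeyspace

# the hard links appear under:
#   <data_dir>/<keyspace>/<table>/snapshots/<tag>/
```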
Cassandra
10,466,192
15
I've searched on this topic and can't find anything in the nginx configuration that says whether this is "ok" or not. This appears to work just fine, other than messing up the syntax highlighting in vim:

```nginx
add_header Content-Security-Policy
    "default-src 'self' *.google-analytics.com;
    object-src 'none';
    report-uri /csp-report;";
```

But is it actually valid? Am I relying on browsers understanding line breaks inside a CSP, or does nginx render it into one line before serving it? Fiddler appears to show it as one line, but again I don't know if nginx is serving it as that or if Fiddler is interpreting it that way.

(This is obviously a much-simplified version of my true CSP, which is certainly long enough that I consider it beneficial to my sanity to split it onto multiple lines!)
You can use variable nesting like this, which still in the end creates a one-liner:

```nginx
set $SCRIPT "script-src 'self'";
set $SCRIPT "${SCRIPT} https://www.a.com";  # comment each line if you like
set $SCRIPT "${SCRIPT} https://b.com";

set $STYLE "style-src 'self'";
set $STYLE "${STYLE} https://a.com";

set $IMG "img-src 'self' data:";
set $IMG "${IMG} https://a.com";
set $IMG "${IMG} https://www.b.com";

set $FONT "font-src 'self' data:";
set $FONT "${FONT} https://a.com";

set $DEFAULT "default-src 'self'";

set $CONNECT "connect-src 'self'";
set $CONNECT "${CONNECT} https://www.a.com";
set $CONNECT "${CONNECT} https://www.b.com";

set $FRAME "frame-src 'self'";
set $FRAME "${FRAME} https://a.com";
set $FRAME "${FRAME} https://b.com";

add_header Content-Security-Policy "${SCRIPT}; ${STYLE}; ${IMG}; ${FONT}; ${DEFAULT}; ${CONNECT}; ${FRAME}";
```
NGINX
50,018,881
36
I am trying to use a site of mine as an iframe from a different site of mine. My problem is that the other site constantly changes its IP address and does not have a domain name. I read that you can allow a specific domain by adding this line to /etc/nginx/nginx.conf:

```nginx
add_header X-Frame-Options "ALLOW-FROM https://subdomain.example.com/";
```

My question is: is it possible to allow my site to be imported as an iframe from all IP addresses and domains? What should I write in order to achieve this? I am using Ubuntu 16.04 and nginx 1.10.0.
If you set it, then you can only set it to DENY, SAMEORIGIN, or ALLOW-FROM (a specific origin). Allowing all domains is the default, so don't set the X-Frame-Options header at all if you want that.

Note that the successor to X-Frame-Options, CSP's frame-ancestors directive, accepts a list of allowed origins, so you can easily allow some origins instead of none, one, or all.
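For illustration, a frame-ancestors policy in nginx might look like the following; the origins are placeholders. Unlike ALLOW-FROM, the directive takes a whole list:

```nginx
add_header Content-Security-Policy "frame-ancestors 'self' https://a.example.com https://b.example.com";
```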
NGINX
44,436,659
36
So I am using the following settings to create a reverse proxy for a site, as below:

```nginx
server {
    listen 80;
    server_name mysite.com;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    root /home/ubuntu/p3;

    location / {
        proxy_pass https://mysiter.com/;
        proxy_redirect https://mysiter.com/ $host;
        proxy_set_header Accept-Encoding "";
    }
}
```

But I am getting a 502 Bad Gateway error, and below is the log:

```
2016/08/13 09:42:28 [error] 26809#0: *60 SSL_do_handshake() failed (SSL: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error) while SSL handshaking to upstream, client: 103.255.5.68, server: mysite.com, request: "GET / HTTP/1.1", upstream: "https://105.27.188.213:443/", host: "mysite.com"
2016/08/13 09:42:28 [error] 26809#0: *60 SSL_do_handshake() failed (SSL: error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error) while SSL handshaking to upstream, client: 103.255.5.68, server: mysite.com, request: "GET / HTTP/1.1", upstream: "https://105.27.188.213:443/", host: "mysite.com"
```

Any help will be greatly appreciated.
Seeing the exact same error on nginx 1.9.0, and it looks like it was caused by the HTTPS endpoint using SNI. Adding this to the proxy location fixed it:

```nginx
proxy_ssl_server_name on;
```

https://en.wikipedia.org/wiki/Server_Name_Indication
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ssl_server_name
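In the context of the question's config, the fix would sit in the proxy location, roughly like this:

```nginx
location / {
    proxy_pass https://mysiter.com/;
    proxy_ssl_server_name on;   # send SNI in the TLS handshake to the upstream
    proxy_redirect https://mysiter.com/ $host;
    proxy_set_header Accept-Encoding "";
}
```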
NGINX
38,931,468
36
I have a problem with my nginx + uwsgi configuration for my Django app; I keep getting these errors in the uwsgi error log:

```
Wed Jan 13 15:26:04 2016 - uwsgi_response_writev_headers_and_body_do(): Broken pipe [core/writer.c line 296] during POST /company/get_unpaid_invoices_chart/ (86.34.48.7)
IOError: write error
Wed Jan 13 15:26:20 2016 - uwsgi_response_write_headers_do(): Broken pipe [core/writer.c line 238] during GET /gestiune/print_pdf/nir/136194/ (89.122.255.186)
IOError: write error
```

I am not getting them for all the requests, but I do get a couple of them each minute. I searched for it, and I understand that this happens because nginx closes the connection to uwsgi by the time uwsgi wants to write the response. This looks strange, because in my nginx configuration I have this:

```nginx
include uwsgi_params;
uwsgi_pass unix:/home/project/django/sbo_cloud/site.sock;
uwsgi_read_timeout 600;
uwsgi_send_timeout 600;
uwsgi_connect_timeout 60;
```

I am certain that none of the requests for which the error appears exceeds the 600-second timeout. Any idea why this would happen? Thanks.
The problem is that clients abort the connection, and then nginx closes the connection without telling uwsgi to abort. Then, when uwsgi comes back with the result, the socket is already closed. nginx writes a 499 error in the log, and uwsgi throws an IOError.

The non-optimal solution is to tell nginx not to close the socket and to wait for uwsgi to come back with a response. Put uwsgi_ignore_client_abort in your nginx.conf:

```nginx
location @app {
    include uwsgi_params;
    uwsgi_pass unix:///tmp/uwsgi.sock;
    # when a client closes the connection, keep the channel to uwsgi open;
    # otherwise uwsgi throws an IOError
    uwsgi_ignore_client_abort on;
}
```

It is not clear whether it is possible to tell nginx to close the uwsgi connection. There is another SO question about this issue: Propagate http abort/close from nginx to uwsgi / Django.
NGINX
34,768,527
36
I am trying to add a CORS directive to my nginx file for a simple static HTML site (taken from here: http://enable-cors.org/server_nginx.html). Would there be a reason why it would complain about the first add_header directive, saying 'add_header" directive is not allowed here'?

My config file sample:

```nginx
server {
    if ($http_origin ~* (https?://[^/]*\.domain\.com(:[0-9]+)?)$) {
        set $cors "true";
    }
    if ($request_method = 'OPTIONS') {
        set $cors "${cors}options";
    }
    if ($request_method = 'GET') {
        set $cors "${cors}get";
    }
    if ($request_method = 'POST') {
        set $cors "${cors}post";
    }

    if ($cors = "trueget") {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Credentials' 'true';
    }
    if ($cors = "truepost") {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Credentials' 'true';
    }
    if ($cors = "trueoptions") {
        add_header 'Access-Control-Allow-Origin' "$http_origin";
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,User-Agent,DNT,Cache-Control,X-Mx-ReqToken,Keep-Alive,X-Requested-With,If-Modified-Since';
        add_header 'Content-Length' 0;
        add_header 'Content-Type' 'text/plain charset=UTF-8';
        return 204;
    }

    listen 8080;

    location / {
        root /var/www/vhosts/mysite;
    }
}
```
The restriction can be bypassed by some tricks, so that you don't have to write/include CORS rules in every location block:

```nginx
server {
    set $cors_origin "";
    set $cors_cred   "";
    set $cors_header "";
    set $cors_method "";

    if ($http_origin ~* "^http.*\.yourhost\.com$") {
        set $cors_origin $http_origin;
        set $cors_cred   true;
        set $cors_header $http_access_control_request_headers;
        set $cors_method $http_access_control_request_method;
    }

    add_header Access-Control-Allow-Origin      $cors_origin;
    add_header Access-Control-Allow-Credentials $cors_cred;
    add_header Access-Control-Allow-Headers     $cors_header;
    add_header Access-Control-Allow-Methods     $cors_method;
}
```

This works because nginx will not return a header if its value is an empty string.
NGINX
27,955,233
36
Nginx was working fine on Mavericks, and now after I upgraded to Yosemite it displays "nginx: command not found". I tried to install nginx with brew install nginx and it displays an error:

```
Error: You must brew link pcre before nginx can be installed
```

And brew link pcre displays:

```
Linking /usr/local/Cellar/pcre/8.35...
Error: No such file or directory - /usr/local/Cellar/pcre/8.34/share/doc/pcre
```

It's trying to link 8.34. I reinstalled, but it's still the same. How do I solve it?
I had the same problem; that is, after upgrading from Mavericks to Yosemite I got the following error:

```
nginx: [emerg] mkdir() "/usr/local/var/run/nginx/client_body_temp" failed (2: No such file or directory)
```

All I needed to do to solve this issue was to create the folder:

```bash
mkdir -p /usr/local/var/run/nginx/client_body_temp
```
NGINX
26,450,085
36
I have a config file with a virtual server setup; this is running on port 443 for SSL. I would also like this same virtual server to handle non-SSL traffic on port 80. I was hoping to do the following, but it doesn't seem to work:

```nginx
server {
    listen 443 ssl;
    listen 80;
    server_name example.com;
    ...
}
```

It looks like the SSL options below these settings are causing problems for the non-SSL traffic.
Yes, of course:

```nginx
server {
    listen 80;
    listen 443 ssl;

    # force https-redirects
    if ($scheme = http) {
        return 301 https://$server_name$request_uri;
    }
}
```

Here is my post, named "Nginx Configuration for HTTPS", which contains more info.
NGINX
25,399,814
36
I want to check if a parameter is present in a URL in nginx and then rewrite. How can I do that? E.g., if the URL is http://website.com/?mobile then redirect the user to http://m.website.com
You'd better use http://example.com/?mobile=1 (an argument with a value). In this case checking is simple:

```nginx
if ($arg_mobile) {
    return 302 http://m.example.com/;
}
```

Checking for argument existence is usually done with a regexp like:

```nginx
if ($args ~ mobile) { ... }
```

but it's error-prone, because it will match mobile anywhere, e.g. http://example.com/?tag=automobile.
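If you must keep the value-less ?mobile form, a tighter pattern (a sketch, not the answer's own suggestion) anchors the name so ?tag=automobile no longer matches:

```nginx
if ($args ~ "(^|&)mobile(=|&|$)") {
    return 302 http://m.example.com/;
}
```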
NGINX
23,988,344
36
I'm having a lot of trouble setting up this alias inside nginx to display my website correctly. The website I'm concerned with should be accessible from mywebsite.com/mr and is different from the site located at mywebsite.com/. The website is located at /fullpath (shortened for simplicity). The site needs to serve three kinds of content:

1. The index file located at /fullpath/index.html.
2. Other HTML files (without showing the .html extension in the browser).
3. Static assets (js/css/img) located in /fullpath and subdirectories.

I've tried changing the order of matches in the try_files and found situations where they all worked, just not at the same time:

```nginx
location /mr {
    default_type "text/html";
    alias /fullpath;

    # with this one 1 and 3 work
    # try_files $uri/index.html $uri.html $uri;

    # with this one 2 and 3 work
    # try_files $uri $uri.html $uri/index.html;

    # with this one 1 and 2 work
    try_files $uri.html $uri/index.html $uri;
}
```

When one doesn't work, it 404s. Does anybody know how I can serve all kinds of files correctly?
Apparently alias and try_files don't work together. However, I don't think you need to use alias:

```nginx
location /mr {
    default_type "text/html";
    try_files /fullpath/$uri /fullpath/$uri.html /fullpath/$uri/index.html /fullpath/index.html;
}
```

Which would try:

1. The exact file.
2. The file with .html added.
3. The index in the path.
4. The default index.

I think the root directive does work with try_files, but I am unable to test:

```nginx
server {
    location /mr {
        root /home/mysite/fullpath;
        default_type "text/html";
        try_files $uri $uri.html $uri/index.html /index.html;
    }
}
```
NGINX
15,451,191
36
The idea is to take incoming requests to http://abc.example.com/... and rewrite them to http://example.com/abc/...

That's easy enough to do with a 301/302 redirect:

```nginx
# rewrite via 301 Moved Permanently
server {
    listen 80;
    server_name abc.example.com;
    rewrite ^ $scheme://example.com/abc$request_uri permanent;
}
```

The trick is to do this URL change transparently to the client when abc.example.com and example.com point at the same nginx instance. Put differently, can nginx serve the contents from example.com/abc/... when abc.example.com/... is requested, without another client round trip?

Starting-point config that accomplishes the task with a 301:

```nginx
# abc.example.com
server {
    listen 80;
    server_name abc.example.com;
    rewrite ^ $scheme://example.com/abc$request_uri permanent;
}

# example.com
server {
    listen 80;
    server_name example.com;
    location / {
        # ...
    }
}
```
```nginx
# abc.example.com
server {
    listen 80;
    server_name abc.example.com;

    location / {
        proxy_pass http://127.0.0.1/abc$request_uri;
        proxy_set_header Host example.com;
    }
}
```
NGINX
14,491,944
36
I'm running nginx, Phusion Passenger and Rails. I am running up against the following error:

```
upstream sent too big header while reading response header from upstream, client: 87.194.2.18, server: xyz.com, request: "POST /user_session HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.3322/master/helper_server.sock"
```

It occurs on the callback from an authentication call to Facebook Connect. After googling, and trying to change nginx settings (including proxy_buffer_size and large_client_header_buffers), nothing is having any effect. How can I debug this?
Try adding this to the config:

```nginx
http {
    ...
    proxy_buffers 8 16k;
    proxy_buffer_size 32k;
}
```
NGINX
2,307,231
36
I am a Docker beginner, and the first thing I did was download nginx and try to mount it on port 80, but Apache is already sitting there:

```
docker container run --publish 80:80 nginx
```

and

```
docker container run --publish 3000:3000 nginx
```

I tried doing it like this (3000:3000) to use it on port 3000, but it doesn't work. And it doesn't log anything either which I could use for reference.
The accepted answer does not change the actual port that nginx is starting up on. If you want to change the port nginx starts on inside the container, you have to modify the /etc/nginx/nginx.conf file inside the container. For example, to start on port 9080:

Dockerfile:

```dockerfile
FROM nginx:1.17-alpine
COPY <your static content> /usr/share/nginx/html
COPY nginx.conf /etc/nginx/
EXPOSE 9080
CMD ["nginx", "-g", "daemon off;"]
```

nginx.conf:

```nginx
# on alpine, copy to /etc/nginx/nginx.conf
user root;
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile off;
    access_log off;
    keepalive_timeout 3000;

    server {
        listen 9080;
        root /usr/share/nginx/html;
        index index.html;
        server_name localhost;
        client_max_body_size 16m;
    }
}
```

Now to access the server from your computer:

```bash
docker build . -t my-app
docker run -p 3333:9080 my-app
```

Navigating to localhost:3333 in a browser, you'll see your content.

There is probably a way to include the default nginx.conf and override only the server.listen = PORT property, but I'm not too familiar with nginx config, so I just overwrote the entire default configuration.
NGINX
47,364,019
35
I'm using Namecheap domains and Vultr hosting. I'm trying to redirect DNS www to non-www: www.example.com to example.com.

I contacted Vultr and asked how to do this with their DNS manager; they said they would not help, as it is self-managed. So I contacted Namecheap; they said they would not help because they don't have access to Vultr's DNS manager, would not tell me if the records I showed them are correct, and said I would need to contact Vultr. So I am in an endless support loop.

Vultr DNS Manager: I followed this answer on how to set up a CNAME to redirect to non-www.

```
Type  | Name | Data         | Seconds
--------------------------------------
A     |      | ipv4 address | 300
AAAA  |      | ipv6 address | 300
CNAME | .    | example.com  | 300
CNAME | www  | example.com  | 300
```

After waiting all night for it to propagate, the www can still be visited and does not redirect. It does not allow me to make another A record, only a CNAME. It says:

```
Unable to add record: A CNAME record is not allowed to coexist with any other data.
```

NGINX: I followed this guide and tried redirecting it with the sites-available config. HTTP and HTTPS work, but www does not redirect to non-www:

```nginx
server {
    # Redirect http to https
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    # Redirect www to non-www
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 443 ssl default_server;
    ssl on;
    ssl_certificate /etc/nginx/ssl/cert_chain.crt;
    ssl_certificate_key /etc/nginx/ssl/example_com.key;
    ssl_protocols TLSv1.1 TLSv1.2;
    server_name example.com;
    ...
```
DNS cannot redirect your www site to non-www. The only purpose of DNS is to point both www and non-www at your server's IP address, using A, AAAA or CNAME records (it makes little difference which). The nginx configuration is responsible for performing the redirect from www to non-www.

Your second server block is intended to redirect from www to non-www, but currently only handles http connections (on port 80). You can move the default server and use that to redirect everything to the intended domain name. For example:

```nginx
ssl_certificate /etc/nginx/ssl/cert_chain.crt;
ssl_certificate_key /etc/nginx/ssl/example_com.key;

server {
    listen 80 default_server;
    listen 443 ssl default_server;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ...
}
```

Assuming that you have a common certificate for both the www and non-www domain names, you can move the ssl_ directives into the outer block and allow them to be inherited into both server blocks (as shown above). See this document for more.
NGINX
43,081,780
35
I set up the nginx installation and configuration (together with SSL certificates for an https site) via Ansible. The SSL certificates are protected by passphrases. I want to write an Ansible task which restarts nginx.

The problem is the following: normally, nginx with an https site inside asks for the PEM pass phrase during a restart, but Ansible doesn't ask for that passphrase during execution of the playbook. There is a solution that stores the decrypted cert and key in some private directory, but I don't really want to leave my cert and key somewhere unencrypted. How do I pass the password to nginx (or to openssl) during a restart via Ansible?

The perfect scenario is the following:

1. Ansible asks for the SSL password (via vars_prompt). Another option is to use Ansible Vault.
2. Ansible restarts nginx, and when nginx asks for the PEM pass phrase, Ansible passes the password to nginx.

Is it possible?
Nginx has an ssl_password_file parameter:

> Specifies a file with passphrases for secret keys where each passphrase is specified on a separate line. Passphrases are tried in turn when loading the key.

Example:

```nginx
http {
    ssl_password_file /etc/keys/global.pass;
    ...

    server {
        server_name www1.example.com;
        ssl_certificate_key /etc/keys/first.key;
    }

    server {
        server_name www2.example.com;
        # a named pipe can also be used instead of a file
        ssl_password_file /etc/keys/fifo;
        ssl_certificate_key /etc/keys/second.key;
    }
}
```

What you could do is keep that ssl_password_file in ansible-vault, copy it over, restart nginx, and then, if successful, delete it. I have no first-hand experience of whether it'll actually work or what other side effects this might have (for example, a manual service nginx restart will probably fail), but it seems like a logical approach to me.
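A sketch of that approach as Ansible tasks; the paths and names are hypothetical, and the passphrase file is assumed to be stored encrypted with ansible-vault (the copy module decrypts vaulted files on the fly):

```yaml
- name: Deploy the vaulted passphrase file
  copy:
    src: global.pass            # vault-encrypted in the repo
    dest: /etc/keys/global.pass
    mode: "0600"

- name: Restart nginx while the passphrase file is present
  service:
    name: nginx
    state: restarted

- name: Remove the passphrase file again
  file:
    path: /etc/keys/global.pass
    state: absent
```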
NGINX
33,084,347
35
I'm now deploying a Django app with nginx and gunicorn on Ubuntu 12, and I configured the nginx virtual host file as below:

```nginx
server {
    listen 80;
    server_name mydomain.com;
    access_log /var/log/nginx/gunicorn.log;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        root /var/www/django/ecerp/erp/static/;
    }
}
```

I can request the Django app fine, but when requesting a static file, it responds with a 404 status. I'm sure the root path of the static files is correct. Can anyone help?
You should use alias instead of root. root appends the trailing URL parts to your local path (e.g., for http://test.ndd/trailing/part it will add /trailing/part to your local path). Instead of that, alias does exactly what you want: when http://test.ndd/static/ is requested, /static is mapped to your alias exactly, without appending static again.

For example:

```nginx
location /static {
    alias /var/www/django/ecerp/erp/static/;
}
```

If the file /var/www/django/ecerp/erp/static/foo.html exists, then going to /static/foo.html will return its contents.
NGINX
26,237,936
35
I have a location block as:

```nginx
location @test {
    proxy_pass http://localhost:5000/1;
}
```

but nginx complains that "proxy_pass cannot have URI part in location given by regular expression...". Does anyone know what might be wrong?

I'm trying to query localhost:5000/1 when an upload is complete:

```nginx
location /upload_attachment {
    upload_pass @test;
    upload_store /tmp;
    ...
}
```
Technically just adding the URI should work, because it's documented here and it says that it should work, so:

```nginx
location @test {
    proxy_pass http://localhost:5000/1/;  # with a trailing slash
}
```

should have worked fine. But since you said it didn't, I suggested the other way around: the trick is that instead of passing /my/uri to localhost:5000/1, we pass /1/my/uri to localhost:5000. That's what the rewrite does:

```nginx
rewrite ^(.*)$ /1$1 break;
```

meaning: capture the whole URI, prepend /1, and stop rewriting there (break keeps processing inside this location). The whole block becomes:

```nginx
location @test {
    rewrite ^(.*)$ /1$1 break;
    proxy_pass http://localhost:5000;
}
```

Note: @Fleshgrinder provided an answer explaining why the first method didn't work.
NGINX
21,662,940
35
I've been doing web programming for a while now and am quite familiar with the LAMP stack. I've decided to try playing around with the nginx/Starman/Dancer stack, and I'm a bit confused about how to understand, from a high level, how all the pieces relate to each other. Setting up the stack doesn't seem as straightforward as setting up the LAMP stack, but that's probably because I don't really understand how the pieces relate.

I understand the role nginx is playing: a lightweight webserver/proxy. But I'm confused about how Starman relates to PSGI, Plack and Dancer. I would appreciate a high-level breakdown of how these pieces relate to each other and why each is necessary (or not necessary) to get the stack set up. Thanks!
I've spent the last day reading about the various components, and I think I have enough of an understanding to answer my own question. Most of my answer can be found in various places on the web, but hopefully there will be some value in putting all the pieces in one place:

Nginx: The first and most obvious piece of the stack to understand is nginx. Nginx is a lightweight webserver that can act as a replacement for the ubiquitous Apache webserver. Nginx can also act as a proxy server. It has been growing rapidly in its use and currently serves about 10% of all web domains. One crucial advantage of nginx is that it is asynchronous and event-driven, instead of creating a process thread to handle each connection. In theory this means that nginx is able to handle a large number of connections without using a lot of system resources.

PSGI: PSGI is a protocol (to distinguish it from a particular implementation of the protocol, such as Plack). The main motivation for creating PSGI, as far as I can gather, is that when Apache was first created there was no native support for handling requests with scripts written in e.g. Perl. The ability to do this was tacked onto Apache using mod_cgi. To test your Perl application, you would have to run the entire webserver, as the application ran within the webserver. In contrast, PSGI provides a protocol with which a webserver can communicate with a server written in e.g. Perl. One of the benefits of this is that it's much easier to test the Perl server independently of the webserver. Another benefit is that once an application server is built, it's very easy to switch in different PSGI-compatible webservers to test which provides the best performance.

Plack: This is a particular implementation of the PSGI protocol that provides the glue between a PSGI-compatible webserver and a Perl application server. Plack is Perl's equivalent of Ruby's Rack.

Starman: A Perl-based webserver that is compatible with the PSGI protocol. One confusion I had was why I would want to use both Starman and nginx at the same time, but thankfully that question was answered quite well here on Stack Overflow. The essence is that it might be better to let nginx serve static files without requiring a Perl process to do that, while also allowing the Perl application server to run on a higher port.

Dancer: A web application framework for Perl. Kind of an equivalent of Ruby on Rails. Or, to be more precise, an equivalent of Sinatra for Ruby (the difference is that Sinatra is a minimalist framework, whereas Ruby on Rails is a more comprehensive web framework). As someone who dealt with PHP and hadn't really used a web framework before, I was a bit confused about how this related to the serving stack. The point of web frameworks is that they abstract away common tasks that are very frequently performed in web applications, such as converting database queries into objects/data structures in the web application.

Installation (on Ubuntu):

```bash
sudo apt-get install nginx
sudo apt-get install build-essential curl
sudo cpan App::cpanminus
sudo cpanm Starman
sudo cpanm Task::Plack
sudo apt-get install libdancer-perl
```

Getting it running:

```bash
dancer -a mywebapp
sudo plackup -s Starman -p 5001 -E deployment --workers=10 -a mywebapp/bin/app.pl
```

Now you will have a Starman server running your Dancer application on port 5001.

To make nginx send traffic to the server, you have to modify /etc/nginx/nginx.conf and add a rule something like this to the http section:

```nginx
server {
    server_name permanentinvesting.com;
    listen 80;

    location /css/ {
        alias /home/ubuntu/mywebapp/public/css/;
        expires 30d;
        access_log off;
    }

    location / {
        proxy_pass http://localhost:5001;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The first location rule specifies that nginx should handle static content in the /css directory by getting it from /home/ubuntu/mywebapp/public/css/. The second location rule says that traffic to the webserver on port 80 should be sent to the Starman server to handle. Now we just need to start nginx:

```bash
sudo service nginx start
```
NGINX
12,127,566
35
Is it possible to serve precompiled assets with nginx directly? Serving assets dynamically with Rails is about 20 times slower (4000 req/sec vs 200 req/sec in my VirtualBox). I guess it can be done with some rewrite rule in nginx.conf. The problem, however, is that these filenames include an MD5 hash of the content, so I don't really understand what can be done about that. If it's not possible, I don't get the whole idea of the Rails 3.1 asset pipeline. Reducing client bandwidth and page load time at the cost of 20x server load? Any ideas?

UPD: So, I managed to set up my nginx and Rails in such a way that everything in my application is served at ~3500-4000 requests/sec.

First of all, I added two virtual hosts, with one serving as a caching proxy to the other, and discovered that assets are served at the speed I wanted (4k). Then I connected my Rails application to memcached (nothing special so far, just one line in application.rb):

```ruby
ActionController::Base.cache_store = :mem_cache_store, "localhost"
```

Then I added things like:

```ruby
expires_in 1.hour, :public => true if !signed_in?
```

to my controllers to change the default caching policy of Rails content, and got a speed boost to around 500 requests per second for my dynamic pages (before that it was something close to 200, and it was ~50 before I ever started all this).

Now my nginx config files look like this:

nginx.conf:

```nginx
...
proxy_cache_path /tmp/blog keys_zone=one:8m max_size=1000m inactive=600m;
proxy_temp_path /tmp;
gzip off;

include /opt/nginx/conf/sites-enabled/*;
```

sites-enabled/blog:

```nginx
server {
    listen 8080;
    server_name blindsight;

    root /home/mike/rails/blog/public;
    rails_env production;

    # serve static content directly
    location ~* \.(ico|jpg|gif|png|swf|html)$ {
        if (-f $request_filename) {
            expires max;
            break;
        }
    }

    passenger_enabled on;

    location ~ /\.ht {
        deny all;
    }
}
```

sites-enabled/main:

```nginx
server {
    listen 80;
    server_name blindsight;

    location /authorize {
        proxy_pass_header Cookie;
        proxy_pass_header Set-Cookie;
        proxy_pass http://127.0.0.1:8080;
    }

    location /admin {
        proxy_pass_header Set-Cookie;
        proxy_pass_header Cookie;
        proxy_pass http://127.0.0.1:8080;
    }

    location / {
        root /home/mike/rails/blog/public;

        # All POST requests go directly
        if ($request_method = POST) {
            proxy_pass http://127.0.0.1:8080;
            break;
        }

        proxy_redirect off;
        proxy_pass_header Cookie;
        proxy_ignore_headers Set-Cookie;
        proxy_hide_header Set-Cookie;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_cache one;
        proxy_cache_key blog$request_uri;
        proxy_cache_valid 200 302 5s;
        proxy_cache_valid 404 1m;

        proxy_pass http://127.0.0.1:8080;
    }
}
```

Everything is fast like bloody lightning :) Thank you, guys.
Following on from the above, with some extra bits I gleaned from the interweb:

For Rails 3.1:

```nginx
location ~* ^/assets/ {
    # Per RFC2616 - 1 year maximum expiry
    # http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
    expires 1y;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
}
```

For Rails 3.0, use:

```nginx
location ~* ^/(images|javascripts|stylesheets)/ {
    ... copy block from above ...
}
```
NGINX
6,402,278
35
We're working on a Ruby on Rails app that needs to take advantage of HTML5 websockets. At the moment, we have two separate "servers", so to speak: our main app running on nginx + Passenger, and a separate server using Pratik Naik's Cramp framework (which is running on Thin) to handle the websocket connections.

Ideally, when it comes time for deployment, we'd have the Rails app running on nginx + Passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic.

Admittedly, I'm not all that savvy with nginx config, so: is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now?

Thanks in advance for any advice or suggestions. :)

-John
You can't use nginx for this currently [it's not true anymore], but I would suggest looking at HAProxy. I have used it for exactly this purpose.

The trick is to set long timeouts so that the socket connections are not closed. Something like:

```
timeout client  86400000  # in the frontend
timeout server  86400000  # in the backend
```

If you want to serve, say, a Rails and a Cramp application on the same port, you can use ACL rules to detect a websocket connection and use a different backend. So your HAProxy frontend config would look something like:

```
frontend all 0.0.0.0:80
  timeout client 86400000
  default_backend rails_backend
  acl websocket hdr(Upgrade) -i WebSocket
  use_backend cramp_backend if websocket
```

For completeness, the backend would look like:

```
backend cramp_backend
  timeout server 86400000
  server cramp1 localhost:8090 maxconn 200 check
```
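Regarding the bracketed note: nginx gained WebSocket proxying in 1.3.13, so today the equivalent nginx config is roughly the following sketch (the port and path are illustrative):

```nginx
location /websocket {
    proxy_pass http://localhost:8090;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;   # long timeout, same idea as the HAProxy settings above
}
```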
NGINX
2,419,346
35
I got this error in the nginx error log:

```
SSL_do_handshake() failed (SSL: error:141CF06C:SSL routines:tls_parse_ctos_key_share:bad key share) while SSL handshaking
```

I currently use Let's Encrypt. Any ideas how to solve this problem? Thank you, guys.
This isn't your problem. The best thing you can do in this situation is just to keep your server reasonably updated and secured. At best, the client side of a request was running seriously outdated software; at worst, your server is simply being scanned for vulnerabilities by compromised devices connected to the internet.

Personally I lean in the direction of this being scanning, as I myself see these errors on a private development server to which only I should ever have a legitimate reason to connect, yet I see a ton of IP addresses mentioned by the error from around the world.

A similar question and answer has already been provided here: https://serverfault.com/questions/905011/nginx-ssl-do-handshake-failed-ssl-error1417d18cssl/905019

Stay safe.
NGINX
65,854,933
34
I want to build a single-page application with Vue.js, using nginx as my webserver and my own Dropwizard REST API. Moreover, I use Axios for my REST requests. My nginx config looks like:

```nginx
server {
    listen 80;
    server_name localhost;

    location / {
        root path/to/vue.js/Project;
        index index.html index.htm;
        include /etc/nginx/mime.types;
    }

    location /api/ {
        rewrite ^/api^/ /$1 break;
        proxy_pass http://localhost:8080/;
    }
}
```

Currently I can call my localhost/api/path/to/resource to get the information from the backend. I built the front end with HTML and JavaScript (Vue.js), which has worked so far. However, when I want to build a single-page application, most tutorials mention Node.js. How can I use nginx instead?
Add the following code to your Nginx Config, as detailed in the VueRouter docs, here: location / { try_files $uri $uri/ /index.html; } Also, you need to enable history mode on VueRouter: const router = new VueRouter({ mode: 'history', routes: [...] })
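For context, a sketch of how this fallback can sit alongside the /api proxy from the question (the root path and backend port are copied from the question, so treat them as assumptions):

server {
    listen 80;
    server_name localhost;
    root path/to/vue.js/Project;
    index index.html;

    # Serve the SPA; unknown paths fall back to index.html so VueRouter
    # (in history mode) can resolve them client-side.
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Forward API calls to the backend; the trailing slash on proxy_pass
    # strips the /api prefix before proxying.
    location /api/ {
        proxy_pass http://localhost:8080/;
    }
}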
NGINX
47,655,869
34
I have a docker container running nginx which writes its logs to /var/log/nginx. Logrotate is installed in the docker container and the logrotate config file for nginx is set up correctly. Still, the logs are not being automatically rotated by logrotate. Manually forcing a rotation via logrotate -f /path/to/conf-file works as expected. My conclusion is that something is not triggering the cron to fire, but I can't find the reason.

Here's the Dockerfile for the docker container running nginx:

FROM nginx:1.11

# Remove sym links from nginx image
RUN rm /var/log/nginx/access.log
RUN rm /var/log/nginx/error.log

# Install logrotate
RUN apt-get update && apt-get -y install logrotate

# Copy MyApp nginx config
COPY config/nginx.conf /etc/nginx/nginx.conf

# Copy logrotate nginx configuration
COPY config/logrotate.d/nginx /etc/logrotate.d/

And the docker-compose file:

version: '2'

services:
  nginx:
    build: ./build
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./auth:/etc/nginx/auth
      - ./certs:/etc/nginx/certs
      - ./conf:/etc/nginx/conf
      - ./sites-enabled:/etc/nginx/sites-enabled
      - ./web:/etc/nginx/web
      - nginx_logs:/var/log/nginx
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "1"

volumes:
  nginx_logs:

networks:
  default:
    external:
      name: my-network

Here's the content of /etc/logrotate.d/nginx:

/var/log/nginx/*.log {
    daily
    dateext
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
    endscript
    postrotate
        [ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
    endscript
}

Content of /etc/cron.daily/logrotate:

#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf

Content of /etc/logrotate.conf:

# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
#compress

# packages drop log rotation information into this directory
include /etc/logrotate.d

# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
    missingok
    monthly
    create 0664 root utmp
    rotate 1
}

/var/log/btmp {
    missingok
    monthly
    create 0660 root utmp
    rotate 1
}

# system-specific logs may be configured here

Can someone point me in the right direction as to why the nginx logs are not being automatically rotated by logrotate?

EDIT: I can trace the root cause of this problem to the cron service not being run in the container. A possible solution is to find a way to make the container run both nginx and the cron service at the same time.
As stated in the edit to my question, the problem was that the CMD from nginx:1.11 only starts the nginx process. A workaround is to place the following command in my Dockerfile:

CMD service cron start && nginx -g 'daemon off;'

This starts nginx the same way nginx:1.11 starts it, and also starts the cron service. The Dockerfile would look something like:

FROM nginx:1.11

# Remove sym links from nginx image
RUN rm /var/log/nginx/access.log
RUN rm /var/log/nginx/error.log

# Install logrotate
RUN apt-get update && apt-get -y install logrotate

# Copy MyApp nginx config
COPY config/nginx.conf /etc/nginx/nginx.conf

# Copy logrotate nginx configuration
COPY config/logrotate.d/nginx /etc/logrotate.d/

# Start nginx and cron as a service
CMD service cron start && nginx -g 'daemon off;'
NGINX
46,323,978
34
I am trying to configure NGINX as a forward proxy to replace Fiddler which we are using as a forward proxy. The feature of Fiddler that we use allows us to proxy ALL incoming request to a 8888 port. How do I do that with NGINX? In all examples of NGINX as a reverse proxy I see proxy_pass always defined to a specific upstream/proxied server. How can I configure it so it goes to the requested server, regardless of the server in the same way I am using Fiddler as a forward proxy. Example: In my code: WebProxy proxyObject = new WebProxy("http://mynginxproxyserver:8888/",true); WebRequest req = WebRequest.Create("http://www.contoso.com"); req.Proxy = proxyObject; In mynginxproxyserver/nginx.conf I do not want to delegate the proxying to another server (e.g. proxy_pass set to http://someotherproxyserver). Instead I want it to just be a proxy server, and redirect requests from my client (see above) to the request host. That's what Fiddler does when you enable it as a proxy: http://docs.telerik.com/fiddler/Configure-Fiddler/Tasks/UseFiddlerAsReverseProxy
Your code appears to be using a forward proxy (often just "proxy"), not a reverse proxy, and they operate quite differently. A reverse proxy is for the server end and something the client doesn't really see or think about: it retrieves content from the backend servers and hands it to the client. A forward proxy is something the client sets up in order to connect to the rest of the internet. In turn, the server may potentially know nothing about your forward proxy.

Nginx was originally designed to be a reverse proxy, not a forward proxy, but it can still be used as one. That's why you probably couldn't find much configuration for it. This is more a theory answer as I've never done this myself, but a configuration like the following should work:

server {
    listen 8888;

    location / {
        resolver 8.8.8.8;  # may or may not be necessary.
        proxy_pass http://$http_host$uri$is_args$args;
    }
}

This is just the important bits; you'll need to configure the rest. The idea is that proxy_pass will pass to a variable host rather than a predefined one. So if you request http://example.com/foo?bar, your HTTP header will include a host of example.com. This will make proxy_pass retrieve data from http://example.com/foo?bar.

The document that you linked is using it as a reverse proxy. It would be equivalent to proxy_pass http://localhost:80;
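To verify the setup from a client, curl's -x option points a request at the proxy (hostname taken from the question). Note that a config like the above handles plain HTTP only; HTTPS through a forward proxy requires the CONNECT method, which stock nginx does not implement:

curl -x http://mynginxproxyserver:8888 http://www.example.com/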
NGINX
46,060,028
34
The upstream server is Wowza, which does not accept custom headers unless I enable them at the application level. Nginx is working as a proxy server. From the browser I want to send a few custom headers, which should be received and logged by the Nginx proxy, but those headers should be removed from the request before it is forwarded to the upstream server, so the upstream server never knows there were any custom headers.

I tried proxy_hide_header as well as proxy_set_header "<header>" "", but it seems they apply to response headers, not request headers.

And even if I accepted enabling the headers on Wowza, I am still not able to find a way to enable headers at the server level for all applications. Currently I have to add the headers to each newly created application, which is not feasible for me.

Any help would be appreciated.
The proxy_set_header HEADER "" directive does exactly what you expect. See https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header:

If the value of a header field is an empty string then this field will not be passed to a proxied server:

proxy_set_header Accept-Encoding "";

I have just confirmed this is working as documented; I used Nginx v1.12.
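Tying this back to the question (log the header, then strip it before the upstream sees it), a sketch; the header name X-Custom-Id and the upstream address are assumptions, not part of the answer above:

# http context: expose the incoming header in the access log
log_format with_custom '$remote_addr - [$time_local] "$request" '
                       'custom_hdr="$http_x_custom_id"';

server {
    access_log /var/log/nginx/access.log with_custom;

    location / {
        # The value was captured for logging via $http_x_custom_id above;
        # clearing it here means the proxied server never receives it.
        proxy_set_header X-Custom-Id "";
        proxy_pass http://127.0.0.1:8080;
    }
}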
NGINX
44,536,548
34
I've installed Docker Toolbox on macOS and I'm following Docker's simple tutorial on deploying Nginx. I've executed docker run and confirmed that my container has been created:

docker run --name mynginx1 -P -d nginx

docker ps
40001fc50719  nginx  "nginx -g 'daemon off"  23 minutes ago  Up 23 minutes  0.0.0.0:32770->80/tcp, 0.0.0.0:32769->443/tcp  mynginx1

However, when I curl http://localhost:32770, I get a connection refused error:

curl: (7) Failed to connect to localhost port 32770: Connection refused

I'm struggling to see what I could have missed here. Is there an extra step I need to perform, in light of me being on macOS?
The issue is that your DOCKER_HOST is not set to localhost, you will need to use the IP address of your docker-machine, since you are using Docker Toolbox: docker-machine ip default # should return your IP address. See Docker Toolbox Docs for more information.
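For example, a sketch of the working call against the mapped port from the question:

curl http://$(docker-machine ip default):32770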
NGINX
33,022,250
34
I have a Rails app in production that I deployed some changes to the other day. All of a sudden I now get the error

ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)

multiple times a day and have to restart puma to fix the issue. I'm completely stumped as to what is causing this. I didn't change anything on my server, and the changes I made were pretty simple (additions to a view and to a controller method). I'm not seeing much of anything in the log files. I'm using rails 4.1.4 and ruby 2.0.0p481.

Any ideas as to why my connections are filling up? My connection pool is set to 10 and I'm using the default puma configuration.

Here's a stack trace:

ActiveRecord::ConnectionTimeoutError (could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)):
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `block in wait_poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:181:in `loop'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:181:in `wait_poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:136:in `block in poll'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:146:in `synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:134:in `poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:418:in `acquire_connection'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:351:in `block in checkout'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:350:in `checkout'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:541:in `retrieve_connection'
  activerecord (4.1.4) lib/active_record/connection_handling.rb:113:in `retrieve_connection'
  activerecord (4.1.4) lib/active_record/connection_handling.rb:87:in `connection'
  activerecord (4.1.4) lib/active_record/query_cache.rb:51:in `restore_query_cache_settings'
  activerecord (4.1.4) lib/active_record/query_cache.rb:43:in `rescue in call'
  activerecord (4.1.4) lib/active_record/query_cache.rb:32:in `call'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:621:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
  activesupport (4.1.4) lib/active_support/callbacks.rb:82:in `run_callbacks'
  actionpack (4.1.4) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/remote_ip.rb:76:in `call'
  airbrake (4.1.0) lib/airbrake/rails/middleware.rb:13:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
  railties (4.1.4) lib/rails/rack/logger.rb:38:in `call_app'
  railties (4.1.4) lib/rails/rack/logger.rb:20:in `block in call'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:68:in `block in tagged'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:26:in `tagged'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:68:in `tagged'
  railties (4.1.4) lib/rails/rack/logger.rb:20:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/request_id.rb:21:in `call'
  rack (1.5.2) lib/rack/methodoverride.rb:21:in `call'
  dragonfly (1.0.5) lib/dragonfly/cookie_monster.rb:9:in `call'
  rack (1.5.2) lib/rack/runtime.rb:17:in `call'
  activesupport (4.1.4) lib/active_support/cache/strategy/local_cache_middleware.rb:26:in `call'
  rack (1.5.2) lib/rack/sendfile.rb:112:in `call'
  airbrake (4.1.0) lib/airbrake/user_informer.rb:16:in `_call'
  airbrake (4.1.0) lib/airbrake/user_informer.rb:12:in `call'
  railties (4.1.4) lib/rails/engine.rb:514:in `call'
  railties (4.1.4) lib/rails/application.rb:144:in `call'
  railties (4.1.4) lib/rails/railtie.rb:194:in `public_send'
  railties (4.1.4) lib/rails/railtie.rb:194:in `method_missing'
  puma (2.9.0) lib/puma/configuration.rb:71:in `call'
  puma (2.9.0) lib/puma/server.rb:490:in `handle_request'
  puma (2.9.0) lib/puma/server.rb:361:in `process_client'
  puma (2.9.0) lib/puma/server.rb:254:in `block in run'
  puma (2.9.0) lib/puma/thread_pool.rb:92:in `call'
  puma (2.9.0) lib/puma/thread_pool.rb:92:in `block in spawn_thread'

Puma init.d script:

#!/bin/sh
# Starts and stops puma
#

case "$1" in
  start)
    su myuser -c "source /etc/profile && cd /var/www/myapp/current && rvm gemset use myapp && puma -d -e production -b unix:///var/www/myapp/myapp_app.sock -S /var/www/myapp/myapp_app.state"
    ;;
  stop)
    su myuser -c "source /etc/profile && cd /var/www/myapp/current && rvm gemset use myapp && RAILS_ENV=production bundle exec pumactl -S /var/www/myapp/myapp_app.state stop"
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
esac

EDIT: I think I've finally narrowed down the issue to the airbrake gem combined with using the devise methods current_user or user_signed_in? in application_controller.rb in a before_action. Here's my application controller:

class ApplicationController < ActionController::Base
  protect_from_forgery
  before_filter :authenticate_user!, :get_new_messages

  # Gets the unread messages for the logged in user
  def get_new_messages
    @num_new_messages = 0 # Initially set to 0 so login page, etc works
    # If the user is signed in, fetch the new messages
    if user_signed_in? # I also tried !current_user.nil?
      @num_new_messages = Message.where(:created_for => current_user.id).where(:viewed => false).count
    end
  end

  ...
end

If I remove the if block, I have no problems. Since I introduced that code, my app seems to run out of connections. If I leave that if block in place and remove the airbrake gem, my app seems to run just fine with only the default 5 connections set on my pool in my database.yml file.

EDIT: I finally figured out that if I comment out this line in my config/environments/production.rb file

config.exceptions_app = self.routes

I don't get the error. It seems that custom routes plus devise in the app controller before_action are the cause. I've created an issue and a reproducible project on GitHub:

https://github.com/plataformatec/devise/issues/3422
https://github.com/toymachiner62/devise-connection-failure/blob/master/config/environments/production.rb#L84
I had the same problems, which were caused by too many open connections to the database. This can happen when you have database queries outside of a controller (in a model, mailer, PDF generator, ...). I could fix it by wrapping those queries in this block, which closes the connection automatically:

ActiveRecord::Base.connection_pool.with_connection do
  # your code
end

Since Puma works multi-threaded, the pool size (as eabraham mentioned) can be a limitation, too. Try to increase it (a little)...

I hope this helps!
NGINX
27,801,185
34
Problem

I have a rails 3.2.15 with rack 1.4.5 setup on two servers. The first server is an nginx proxy serving static assets. The second server is a unicorn serving the rails app.

In the Rails production.log I always see the nginx IP address (10.0.10.150) and not my client IP address (10.0.10.62):

Started GET "/" for 10.0.10.150 at 2013-11-21 13:51:05 +0000

I want to have the real client IP in the logs.

Our Setup

The HTTP headers X-Forwarded-For and X-Real-IP are set up correctly in nginx, and I have defined 10.0.10.62 as not being a trusted proxy address by setting config.action_dispatch.trusted_proxies = /^127\.0\.0\.1$/ in config/environments/production.rb, thanks to another answer. I can check it is working because I log them in the application controller:

in app/controllers/application_controller.rb:

class ApplicationController < ActionController::Base
  before_filter :log_ips

  def log_ips
    logger.info("request.ip = #{request.ip} and request.remote_ip = #{request.remote_ip}")
  end
end

in production.log:

request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62

Investigation

When investigating, I saw that Rails::Rack::Logger is responsible for logging the IP address:

def started_request_message(request)
  'Started %s "%s" for %s at %s' % [
    request.request_method,
    request.filtered_path,
    request.ip,
    Time.now.to_default_s ]
end

request is an instance of ActionDispatch::Request. It inherits Rack::Request, which defines how the IP address is computed:

def trusted_proxy?(ip)
  ip =~ /^127\.0\.0\.1$|^(10|172\.(1[6-9]|2[0-9]|30|31)|192\.168)\.|^::1$|^fd[0-9a-f]{2}:.+|^localhost$/i
end

def ip
  remote_addrs = @env['REMOTE_ADDR'] ? @env['REMOTE_ADDR'].split(/[,\s]+/) : []
  remote_addrs.reject! { |addr| trusted_proxy?(addr) }

  return remote_addrs.first if remote_addrs.any?

  forwarded_ips = @env['HTTP_X_FORWARDED_FOR'] ? @env['HTTP_X_FORWARDED_FOR'].strip.split(/[,\s]+/) : []

  if client_ip = @env['HTTP_CLIENT_IP']
    # If forwarded_ips doesn't include the client_ip, it might be an
    # ip spoofing attempt, so we ignore HTTP_CLIENT_IP
    return client_ip if forwarded_ips.include?(client_ip)
  end

  return forwarded_ips.reject { |ip| trusted_proxy?(ip) }.last || @env["REMOTE_ADDR"]
end

The forwarded IP addresses are filtered with trusted_proxy?. Because our nginx server is using a public IP address and not a private IP address, Rack::Request#ip thinks it is not a proxy but the real client IP that is trying to do some IP spoofing. That's why I see the nginx IP address in my logs. In the log excerpts, clients and servers have IP addresses 10.0.10.x because I am using virtual machines to reproduce our production environment.

Our current solution

To circumvent this behavior, I wrote a little Rack middleware located in app/middleware/remote_ip_logger.rb:

class RemoteIpLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    remote_ip = env["action_dispatch.remote_ip"]
    Rails.logger.info "Remote IP: #{remote_ip}" if remote_ip
    @app.call(env)
  end
end

And I insert it just after the ActionDispatch::RemoteIp middleware:

config.middleware.insert_after ActionDispatch::RemoteIp, "RemoteIpLogger"

This way I can see the real client IP in the logs:

Started GET "/" for 10.0.10.150 at 2013-11-21 13:59:06 +0000
Remote IP: 10.0.10.62

I feel a little uncomfortable with this solution. nginx+unicorn is a common setup for rails applications. If I have to log the client IP myself, it means I have missed something. Is it because the nginx server is using a public IP address when communicating with the rails server? Is there a way to customize the trusted_proxy? method of Rack::Request?

EDITED: add nginx configuration and an HTTP request capture

/etc/nginx/sites-enabled/site.example.com.conf:

server {
  server_name site.example.com;
  listen 80;

  location ^~ /assets/ {
    root /home/deployer/site/shared;
    expires 30d;
  }

  location / {
    root /home/deployer/site/current/public;
    try_files $uri @proxy;
  }

  location @proxy {
    access_log /var/log/nginx/site.access.log combined_proxy;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_read_timeout 300;
    proxy_pass http://rails.example.com:8080;
  }
}

The Nginx server is 10.0.10.150. The Rails server is 10.0.10.190. My machine is 10.0.10.62.

When doing curl http://10.0.10.150/ from my machine, a tcpdump port 8080 -i eth0 -Aq -s 0 on the rails server shows these request HTTP headers:

GET / HTTP/1.0
X-Forwarded-For: 10.0.10.62
X-Forwarded-Proto: http
Host: 10.0.10.150
Connection: close
User-Agent: curl/7.29.0
Accept: */*

And the rails log /home/deployer/site/current/log/production.log (the Remote IP and request.ip lines being added by custom code):

Started GET "/" for 10.0.10.150 at 2013-11-22 08:01:17 +0000
Remote IP: 10.0.10.62
Processing by Devise::RegistrationsController#new as */*
request.ip = 10.0.10.150 and request.remote_ip = 10.0.10.62
Rendered devise/shared/_links.erb (0.1ms)
Rendered devise/registrations/new.html.erb within layouts/application (2.3ms)
Rendered layouts/_landing.html.erb (1.5ms)
Completed 200 OK in 8.9ms (Views: 7.5ms | ActiveRecord: 0.0ms)
In my opinion, your current approach is the only sane one. The only step that is missing is overwriting the IP address in env.

The typical REMOTE_ADDR seldom holds the correct IP if you have any number of layers of proxies and load balancers and whatnot; you're not unique in this respect. Each potentially adds or changes remote-IP-related headers, and you cannot assume that each of those fields necessarily corresponds to a single IP address, at that. Some will push or unshift an IP to a list instead.

There is only one way to know for sure which field holds the correct value and how, and that is to dive in there and look. You've evidently done that already. Now, just overwrite env['REMOTE_ADDR'] with its correct value using your Rack middleware. There's little point in letting any piece of code you didn't write log or process the wrong IP address, as is happening now. (This being Ruby, you could also monkey-patch Rack::Request, of course...)

For colorful reading that illustrates the varying degrees to which exotic setups can mess up attempts at finding a client's real IP address, see for instance the unending discussions that occurred about this for WordPress:

https://core.trac.wordpress.org/ticket/9235
https://core.trac.wordpress.org/ticket/4198
https://core.trac.wordpress.org/ticket/4602

It's PHP, but the gist of the points raised applies equally well to Ruby. (Note that they're unresolved as I write this, too, and that they've been around for aeons.)
NGINX
20,124,292
34
I have a problem setting up CGI scripts to run on Nginx. So far I've found http://wiki.nginx.org/SimpleCGI, but the problem is that I can't make the Perl script run as a service, so that it runs in the background and starts automatically even after a restart. Do you have any idea? I'm running CentOS 5. I've found some solutions here, but I couldn't integrate the code given there with this Perl script. I'm completely zero at Perl; please help me. Thanks.
Nginx doesn't have native CGI support (it supports fastCGI instead). The typical solution for this is to run your Perl script as a fastCGI process and edit the nginx config file to re-direct requests to the fastCGI process. This is quite a complex solution if all you want to do is run a CGI script. Do you have to use nginx for this solution? If all you want to do is execute some Perl CGI scripts, consider using Apache or Lighttpd as they come with CGI modules which will process your CGI scripts natively and don't require the script to be run as a separate process. To do this you need install the web server and edit the web server config file to load the CGI module. For Lighttpd, you will need to add a line in the config file to enable processing of CGI files. Then put the CGI files into the cgi-bin folder.
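For reference, a minimal sketch of the Lighttpd side of that suggestion (the interpreter path and URL prefix are assumptions):

# Load the CGI module
server.modules += ( "mod_cgi" )

# Hand .pl files under /cgi-bin/ to the Perl interpreter
$HTTP["url"] =~ "^/cgi-bin/" {
    cgi.assign = ( ".pl" => "/usr/bin/perl" )
}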
NGINX
11,667,489
34
Need help with Nginx proxy_pass. From outside, the Nginx URL will be hit like this:

http://some-IP:8080/v2/platform/general/activity/plan?.....

My downstream service looks like this:

http://another-IP:8080/activity/plan?...

I want to get rid of /v2/platform/general from the original public URL and call my downstream service like the above. How do I redirect the public access URL to the downstream service in Nginx? I tried this:

location /v2/platform/general/ {
    rewrite ^/(.*) /$1 break;
    proxy_redirect off;
    proxy_pass http://another-IP:8080;
    proxy_set_header Host $host;
}

But it didn't work. Any help appreciated.
proxy_pass and proxy_redirect have totally different functions.

The proxy_redirect directive is only involved with changing the Location response header in a 3xx status message. See the NGINX proxy_redirect docs for details.

Your rewrite statement does nothing other than prevent further modification of the URI. This line needs to be deleted, otherwise it will inhibit proxy_pass from mapping the URI. See below.

The proxy_pass directive can map the URI (e.g. from /v2/platform/general/foo to /foo) by appending a URI value to the proxy_pass value, which works in conjunction with the location value. See this document for details. For example:

location /v2/platform/general/ {
    ...
    proxy_pass http://another-IP:8080/;
}

You may need to set the Host header only if your upstream server does not respond correctly to the value another-IP:8080. You may need to add one or more proxy_redirect statements if your upstream server generates 3xx status responses with an incorrect value for the Location header value.
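To make the distinction concrete, a sketch of when proxy_redirect would actually be needed: only if the upstream emits 3xx responses whose Location header points at its own address (addresses copied from the question, so treat them as placeholders):

location /v2/platform/general/ {
    proxy_pass http://another-IP:8080/;
    # Rewrite e.g. "Location: http://another-IP:8080/login" in a 3xx
    # response back to the public prefix the client originally used:
    proxy_redirect http://another-IP:8080/ /v2/platform/general/;
}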
NGINX
59,852,217
33
I'm using nginx as a reverse proxy for my website. I want to be able to open my website in an iFrame from a chrome extension new tab html file. For this, I need my nginx to set X-Frame-Options to allow all domains. According to this answer, all domains is the default state if you don't set X-Frame-Options. My /etc/nginx/nginx.conf doesn't have the X-Frame-Options set anywhere. Yet when I check my website response header using Postman, it shows me X-Frame-Options = SAMEORIGIN. How can I remove this setting and load my website in an iFrame in the chrome new-tab .html file?
Solved it by changing the proxy_hide_header values in the /etc/nginx/sites-available/default file like so:

proxy_hide_header X-Frame-Options;

Needed to restart nginx as well as use pm2 to restart my nodejs server (for some reason, it didn't work till I made a small change to my server and restarted it).
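For context, a sketch of where the directive sits in a typical reverse-proxy block (the upstream port 3000 is an assumption; only proxy_hide_header comes from the answer above):

location / {
    proxy_pass http://127.0.0.1:3000;
    # Drop the upstream's X-Frame-Options so browsers may frame the response
    proxy_hide_header X-Frame-Options;
}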
NGINX
47,405,597
33
I noticed my install of nginx has three folders:

etc/nginx/sites-available
etc/nginx/sites-enabled
etc/nginx/conf.d

Do I really need these if I just want to work directly in the etc/nginx/nginx.conf file and remove the include lines that include these items in nginx.conf? Are these directories used for anything else that would mess things up if I delete them?
No, they are not needed if you define your server blocks properly in nginx.conf, but using them is highly recommended. As you noticed, they are only used because of the include /etc/nginx/sites-enabled/*; in nginx.conf.

Out of curiosity, is there a reason why you do not want to use them? They are very useful: it's easier to add new sites, disable sites, etc., rather than having one large config file. This is a kind of best practice for the nginx folder layout.
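For reference, the enable/disable workflow these directories support, a sketch assuming a site file named example.com:

# enable a site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
# disable it again
sudo rm /etc/nginx/sites-enabled/example.com
# validate and apply
sudo nginx -t && sudo service nginx reload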
NGINX
41,303,885
33
I'm trying to set up NGINX and Cloudflare. I've read about this on Google but nothing solved my problem. My Cloudflare is active at the moment. I removed all page rules in Cloudflare, but before that I had domain.com and www.domain.com set to use HTTPS. I thought this could be causing the problem, so I removed it.

Here is my default NGINX file, whose purpose is to allow access only by domain name and to forbid access by the IP address of the website:

server {
    # REDIRECT HTTP TO HTTPS
    listen 80 default;
    listen [::]:80 default ipv6only=on; ## listen for ipv6
    rewrite ^ https://$host$request_uri? permanent;
}

server {
    # REDIRECT IP HTTPS TO DOMAIN HTTPS
    listen 443;
    server_name numeric_ip;
    rewrite ^ https://www.domain.com;
}

server {
    # REDIRECT IP HTTP TO DOMAIN HTTPS
    listen 80;
    server_name numeric_ip;
    rewrite ^ https://www.domain.com;
}

server {
    listen 443 ssl http2 default_server;
    server_name www.domain.com domain.com;
    #rewrite ^ https://$host$request_uri? permanent;

    keepalive_timeout 70;
    ssl_certificate /ssl/is/working.crt;
    ssl_certificate_key /ssl/is/working.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    #ssl_dhparam /path/to/dhparam.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM$
    ssl_prefer_server_ciphers on;
    add_header Strict-Transport-Security max-age=15768000;

    (...) more ssl configs

What could be off? I'll provide more information if needed...
After experimenting I found that this is only related to Cloudflare, because I had no redirect problem before moving to Cloudflare. In my case it was a simple fix: open the [Crypto] tab and select Full (strict), as in the image. Really, you can try this out first before any other actions.
NGINX
35,143,193
33
I installed GitLab CE on a dedicated Ubuntu 14.04 server edition with the Omnibus package. Now I want to install three other virtual hosts next to GitLab. Two are node.js web applications launched by a non-root user running on two distinct ports > 1024; the third is a PHP web application that needs a web server to be launched from. They are:

a private bower registry running on 8081 (node.js)
a private npm registry running on 8082 (node.js)
a private composer registry (PHP)

But Omnibus listens on 80 and doesn't seem to use either Apache2 or Nginx, so I can't use them to serve my PHP app and reverse-proxy my two other node apps.

What serving mechanism does GitLab Omnibus use to listen on 80? How should I create the three other virtual hosts to be able to provide the following vHosts?

gitlab.mycompany.com (:80) -- already in use
bower.mycompany.com (:80)
npm.mycompany.com (:80)
packagist.mycompany.com (:80)
About these:

But Omnibus listen 80 and doesn't seem to use neither Apache2 or Nginx [, thus ...].

and @stdob's comment:

Did omnibus not use nginx as a web server ???

to which I responded:

I guess not, because the nginx package isn't installed in the system ...

In fact, from the GitLab official docs:

By default, omnibus-gitlab installs GitLab with bundled Nginx.

So yes! The Omnibus package actually uses Nginx! But it is bundled, which explains why it doesn't need to be installed as a dependency from the host OS. Thus YES! Nginx can, and should, be used to serve my PHP app and reverse-proxy my two other node apps.

Omnibus-gitlab allows webserver access through the user gitlab-www, which resides in the group with the same name. To allow an external webserver access to GitLab, the external webserver user needs to be added to the gitlab-www group. To use another web server like Apache, or an existing Nginx installation, you will have to do the following steps:

Disable bundled Nginx by specifying in /etc/gitlab/gitlab.rb:

nginx['enable'] = false

# For GitLab CI, use the following:
ci_nginx['enable'] = false

Check the username of the non-bundled web-server user. By default, omnibus-gitlab has no default setting for the external webserver user; you have to specify it in the configuration! Let's say, for example, that the webserver user is www-data. In /etc/gitlab/gitlab.rb set:

web_server['external_users'] = ['www-data']

This setting is an array, so you can specify more than one user to be added to the gitlab-www group.

Run sudo gitlab-ctl reconfigure for the change to take effect.

Setting the NGINX listen address or addresses

By default NGINX will accept incoming connections on all local IPv4 addresses. You can change the list of addresses in /etc/gitlab/gitlab.rb:

nginx['listen_addresses'] = ["0.0.0.0", "[::]"] # listen on all IPv4 and IPv6 addresses

For GitLab CI, use the ci_nginx['listen_addresses'] setting.

Setting the NGINX listen port

By default NGINX will listen on the port specified in external_url or implicitly use the right port (80 for HTTP, 443 for HTTPS). If you are running GitLab behind a reverse proxy, you may want to override the listen port to something else. For example, to use port 8080:

nginx['listen_port'] = 8080

Similarly, for GitLab CI:

ci_nginx['listen_port'] = 8081

Supporting proxied SSL

By default NGINX will auto-detect whether to use SSL if external_url contains https://. If you are running GitLab behind a reverse proxy, you may wish to keep the external_url as an HTTPS address but communicate with the GitLab NGINX internally over HTTP. To do this, you can disable HTTPS using the listen_https option:

nginx['listen_https'] = false

Similarly, for GitLab CI:

ci_nginx['listen_https'] = false

Note that you may need to configure your reverse proxy to forward certain headers (e.g. Host, X-Forwarded-Ssl, X-Forwarded-For, X-Forwarded-Port) to GitLab. You may see improper redirections or errors (e.g. "422 Unprocessable Entity", "Can't verify CSRF token authenticity") if you forget this step. For more information, see:

What's the de facto standard for a Reverse Proxy to tell the backend SSL is used?
https://wiki.apache.org/couchdb/Nginx_As_a_Reverse_Proxy

To go further you can follow the official docs at https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#using-a-non-bundled-web-server

Configuring our gitlab virtual host

Installing Phusion Passenger

We need to install Ruby (GitLab runs in Omnibus with a bundled Ruby) globally in the OS:

$ sudo apt-get update
$ sudo apt-get install ruby
$ sudo gem install passenger

Recompile nginx with the passenger module

Unlike Apache2, for example, nginx can't be extended with binary modules on the fly; it must be recompiled for each new plugin you want to add. The Phusion Passenger developer team worked hard to provide, so to speak, a "bundled nginx version of passenger": nginx binaries compiled with the passenger plugin. So, let's use it.

Requirement: we need to open TCP port 11371 (the APT key port).

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 561F9B9CAC40B2F7
$ sudo apt-get install apt-transport-https ca-certificates

Create passenger.list:

$ sudo nano /etc/apt/sources.list.d/passenger.list

with these lines:

# Ubuntu 14.04
deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main

Use the right repo for your Ubuntu version. For Ubuntu 15.04, for example:

deb https://oss-binaries.phusionpassenger.com/apt/passenger vivid main

Edit permissions:

$ sudo chown root: /etc/apt/sources.list.d/passenger.list
$ sudo chmod 600 /etc/apt/sources.list.d/passenger.list

Update the package list:

$ sudo apt-get update

Allow it in unattended-upgrades:

$ sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

Find or create this config block at the top of the file:

// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
    // you may have some instructions here
};

Add the following:

// Automatically upgrade packages from these (origin:archive) pairs
Unattended-Upgrade::Allowed-Origins {
    // you may have some instructions here
    // To check "Origin:" and "Suite:", you could use e.g.:
    // grep "Origin\|Suite" /var/lib/apt/lists/oss-binaries.phusionpassenger.com*
    "Phusion:stable";
};

Now (re)install nginx-extras and passenger:

$ sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak_"$(date +%Y-%m-%d_%H:%M)"
$ sudo apt-get install nginx-extras passenger

Configure it

Uncomment the passenger_root and passenger_ruby directives in the /etc/nginx/nginx.conf file:

$ sudo nano /etc/nginx/nginx.conf

to obtain something like:

##
# Phusion Passenger config
##
# Uncomment it if you installed passenger or passenger-enterprise
##

passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;

Create the nginx site configuration (the virtual host conf):

$ nano /etc/nginx/sites-available/gitlab.conf

server {
    listen *:80;
    server_name gitlab.mycompany.com;
    server_tokens off;
    root /opt/gitlab/embedded/service/gitlab-rails/public;
    client_max_body_size 250m;

    access_log /var/log/gitlab/nginx/gitlab_access.log;
    error_log /var/log/gitlab/nginx/gitlab_error.log;

    # Ensure Passenger uses the bundled Ruby version
    passenger_ruby /opt/gitlab/embedded/bin/ruby;

    # Correct the $PATH variable to included packaged executables
    passenger_env_var PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";

    # Make sure Passenger runs as the correct user and group to
    # prevent permission issues
    passenger_user git;
    passenger_group git;

    # Enable Passenger & keep at least one instance running at all times
    passenger_enabled on;
    passenger_min_instances 1;

    error_page 502 /502.html;
}

Now we can enable it:

$ sudo ln -s /etc/nginx/sites-available/gitlab.conf /etc/nginx/sites-enabled/

There is no a2ensite equivalent coming natively with nginx, so we use ln, but if you want, there is a project on GitHub, nginx_ensite:

nginx_ensite and nginx_dissite for quick virtual host enabling and disabling

This is a shell (Bash) script that replicates for nginx the Debian a2ensite and a2dissite for enabling and disabling sites as virtual hosts in Apache 2.2/2.4.

It's done :-). Finally, restart nginx:

$ sudo service nginx restart

With this new configuration, you are able to run other virtual hosts next to GitLab to serve what you want. Just create new configs in /etc/nginx/sites-available.

In my case, I made the following run and serve this way on the same host:

gitlab.mycompany.com - the awesome git platform written in ruby
ci.mycompany.com - the gitlab continuous integration server written in ruby
npm.mycompany.com - a private npm registry written in node.js
bower.mycompany.com - a private bower registry written in node.js
packagist.mycompany.com - a private packagist for composer registry written in php

For example, to serve npm.mycompany.com:

Create a directory for logs:

$ sudo mkdir -p /var/log/private-npm/nginx/

And fill a new vhost config file:

$ sudo nano /etc/nginx/sites-available/npm.conf

With this config:

server {
    listen *:80;
    server_name npm.mycompany.com;

    client_max_body_size 5m;

    access_log /var/log/private-npm/nginx/npm_access.log;
    error_log /var/log/private-npm/nginx/npm_error.log;

    location / {
        proxy_pass http://localhost:8082;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Then enable it and restart:

$ sudo ln -s /etc/nginx/sites-available/npm.conf /etc/nginx/sites-enabled/
$ sudo service nginx restart
31,762,841
33
I have a vagrant box that has been working fine for some time, and today for some reason I get the following when I attempt to restart nginx:

nginx: [emerg] host not found in upstream "www.myclass.com.192.168.33.10.xip.io" in /etc/nginx/conf.d/myclass.com.conf:19
nginx: configuration file /etc/nginx/nginx.conf test failed

I've not changed anything myself as far as I know (unless Windows Update has done something strange). Can anyone suggest how to get nginx working again and allow me to restart the nginx service? It would appear I cannot ping the host... any ideas why?

--Update--

Having read another similar post, I ran the following to check what is on port 80, and I can see that the varnish daemon is on port 80. Is this the cause of the problem? Any advice would be welcomed, as I'm new to this stuff.

sudo netstat -tlnp | grep 80

My myclass.com.conf file:

server {
    listen 80;
    server_name class.com.* www.class.com.*;
    root /vagrant/www.class.com/public_html;
    index index.php;

    access_log /vagrant/log/class.com.access.log;
    error_log /vagrant/log/class.com.error.log error;

    charset utf-8;

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location /socket.io {
        proxy_pass http://www.class.com.192.168.33.10.xip.io:8055;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
    }

    location / {
        try_files $uri $uri/ @handler;
        expires 30d;
    }

    location /. { return 404; }

    location @handler { rewrite / /index.php last; }

    location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

    location ~ \.php$ {
        try_files $uri =404;
        expires off;
        fastcgi_read_timeout 900;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param CLASS_ENVIRONMENT LYLE;
        include /etc/nginx/fastcgi_params;
    }

    gzip on;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain application/xml text/css text/js application/x-javascript;

    sendfile off;
}
All you need is a resolver that can resolve such domain names:

resolver 8.8.8.8 valid=300s;
resolver_timeout 10s;

Google DNS (8.8.8.8) can resolve it, but it resolves to an internal class C network address:

$ dig @8.8.8.8 www.class.com.192.168.33.10.xip.io

;; ANSWER SECTION:
www.class.com.192.168.33.10.xip.io. 299 IN A 192.168.33.10
NGINX
26,585,510
33
I'm trying to log the POST body, and added $request_body to the log_format in the http clause, but the access_log directive just prints "-" as the body after I send a POST request using:

curl -d name=xxxx myip/my_location

My log_format (in the http clause):

log_format client '$remote_addr - $remote_user $request_time $upstream_response_time '
                  '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

My location definition (in the server clause):

location = /c.gif {
    empty_gif;
    access_log logs/uaa_access.log client;
}

How can I print the actual POST data from curl?
Nginx doesn't parse the client request body unless it really needs to, so it usually does not fill the $request_body variable. The exceptions are when it sends the request to a proxy or to a FastCGI server. So you really need to either add the proxy_pass or fastcgi_pass directive to your block.

The easiest way is to send it to Nginx itself as a proxied server, for example with this configuration:

location = /c.gif {
    access_log logs/uaa_access.log client;
    # add the proper port or IP address if Nginx is not on 127.0.0.1:80
    proxy_pass http://127.0.0.1/post_gif;
}

location = /post_gif {
    # turn off logging here to avoid double logging
    access_log off;
    empty_gif;
}

If you only expect to receive some key-pair values, it might be a good idea to limit the request body size:

client_max_body_size 1k;
client_body_buffer_size 1k;
client_body_in_single_buffer on;

I also received "405 Not Allowed" errors when testing with empty_gif; and curl (it was OK from the browser); I switched it to return 200; to properly test with curl.
NGINX
17,609,472
33
I have the following nginx configuration fragment:

server {
    listen 80;
    server_name mydomain.io;
    root /srv/www/domains/mydomain.io;

    index index.html index.php;

    access_log /var/log/nginx/domains/mydomain.io/access.log;
    error_log /var/log/nginx/domains/mydomain.io/error.log;

    location ~\.php {
        try_files $uri =404;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_intercept_errors on;
        fastcgi_pass 127.0.0.1:9000;
        include /etc/nginx/fastcgi_params;
    }
}

First, how can I make the server block respond to both http://www.mydomain.io and http://mydomain.io? Second, I want to force requests coming from http://www.mydomain.io to redirect to http://mydomain.io. Thanks.
According to https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/#server-name-if, you should use:

server {
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    server_name example.com;
    # [...]
}
NGINX
11,323,735
33
I have a caching system I need to bypass if the user's name (stored in a cookie) is found in the $request_uri. I'm trying to do something like the following, but can't get the variable to interpolate into the regex. Any suggestions, pretty please? I can set the $me variable just fine from the cookie; I just can't get it to interpolate into the regex.

set $chk == "need";
set $me "kevin";
if ($uri ~ $me) {
    set $chk "";
}
if ($chk == "need") {
    rewrite ^ /testing
}

I've always tried things like this:

if ($uri ~ "by-{$me}") {
    set $chk "";
}

Thanks! -Kevin
It's not exactly what I asked, but I think it'll work for my purposes. I'm still curious how to interpolate a variable inside an nginx PCRE regex if anyone else knows!

set $chk == "need";
set $me "kevin";
if ($uri ~ /by-([^-]+)/) {
    set $by $1;
}
if ($by = $me) {
    set $chk "";
}
NGINX
5,859,848
33
I have a website running on a LEMP stack and have enabled Cloudflare with it. I am using the Cloudflare flexible SSL certificate for HTTPS. When I open the website in Chrome, it shows "redirected you too many times", and Firefox reports that the server "is redirecting the request for this address in a way that will never complete". I have looked at the answers to other questions, but none of them seem to solve the problem.

My NGINX conf file:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mydomain.com www.mydomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}

I would be highly grateful if anyone can point out what I am doing wrong.
Since you are using Cloudflare flexible SSL, your nginx config file will look like this:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name mydomain.com www.mydomain.com;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$server_name$request_uri;
    }

    root /var/www/html;
    index index.php index.html index.htm index.nginx-debian.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }

    location ~ /\.ht {
        deny all;
    }
}
NGINX
41,583,088
32
I want to parse all my nginx logs (listed below), but I don't know how to do that:

ls /var/log/nginx/
access.log access.log.21.gz error.log.1 error.log.22.gz access.log.1 access.log.22.gz error.log.10.gz error.log.23.gz access.log.10.gz access.log.23.gz error.log.11.gz error.log.24.gz access.log.11.gz access.log.24.gz error.log.12.gz error.log.2.gz access.log.12.gz access.log.2.gz error.log.13.gz error.log.3.gz access.log.13.gz access.log.3.gz error.log.14.gz error.log.4.gz access.log.14.gz access.log.4.gz error.log.15.gz error.log.5.gz access.log.15.gz access.log.5.gz error.log.16.gz error.log.6.gz access.log.16.gz access.log.6.gz error.log.17.gz error.log.7.gz access.log.17.gz access.log.7.gz error.log.18.gz error.log.8.gz access.log.18.gz access.log.8.gz error.log.19.gz error.log.9.gz access.log.19.gz access.log.9.gz error.log.20.gz access.log.20.gz error.log error.log.21.gz

First of all, it seems like goaccess can't parse .gz files. What's the best way to parse all the information contained in these logs?
Quoting the man page, and assuming you have the Combined Log Format: if we would like to process all access.log.*.gz files we can do one of the following:

# zcat -f access.log* | goaccess --log-format=COMBINED

or:

# zcat access.log.*.gz | goaccess --log-format=COMBINED

On Mac OS X, use gunzip -c instead of zcat.
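If you would rather have an HTML report than the terminal UI, goaccess also accepts an output file; a sketch with assumed paths:

zcat -f /var/log/nginx/access.log* | goaccess --log-format=COMBINED -o report.html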
NGINX
39,232,741
32
I am trying to reverse-proxy my website and modify the content. To do so, I compiled nginx with sub_filter. It now accepts the sub_filter directive, but it does not work somehow:

server {
    listen 8080;
    server_name www.xxx.com;

    access_log /var/log/nginx/www.goparts.access.log main;
    error_log /var/log/nginx/www.goparts.error.log;

    root /usr/share/nginx/html;
    index index.html index.htm;

    ## send request back to apache1 ##
    location / {
        sub_filter <title> '<title>test</title>';
        sub_filter_once on;

        proxy_pass http://www.google.fr;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Please help me.
Check if the upstream source has gzip turned on; if so, you need

proxy_set_header Accept-Encoding "";

so the whole thing would be something like:

location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://upstream.site/;
    sub_filter_types text/css;
    sub_filter_once off;
    sub_filter .upstream.site special.our.domain;
}

Check these links:

https://www.ruby-forum.com/topic/178781
https://forum.nginx.org/read.php?2,226323,226323
http://www.serverphorums.com/read.php?5,542078
NGINX
31,893,211
32
I am trying to set up Nginx as a reverse proxy for accessing a MongoDB database. By default Mongo listens on port 27017. What I want to do is redirect a hostname, for example mongodb.mysite.com, through nginx and pass it to the MongoDB server. That way, from the outside network I will have the well-known port 27017 closed, and I can access my db from a hidden URL like in the example I gave. So I am trying to set up Nginx with this configuration:

server {
    listen 80;
    server_name mongo.mysite.com;
    gzip off;

    location / {
        proxy_pass http://127.0.0.1:27017;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

With this in place, I try to connect with the mongo shell from my cmd with the command mongo mongo.mysite.com:80, and I get back the following error:

2015-08-06T13:44:32.670+0300 I NETWORK  recv(): message len 1347703880 is invalid. Min 16 Max: 48000000
2015-08-06T13:44:32.670+0300 I NETWORK  DBClientCursor::init call() failed
2015-08-06T13:44:32.674+0300 E QUERY    Error: DBClientBase::findN: transport error: mongo.therminate.com:80 ns: admin.$cmd query: { whatsmyuri: 1 }
    at connect (src/mongo/shell/mongo.js:181:14)
    at (connect):1:6 at src/mongo/shell/mongo.js:181
exception: connect failed

Also in the Nginx access log I get this:

94.66.184.128 - - [06/Aug/2015:10:44:32 +0000] "<\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xD4\x07\x00\x00\x00\x00\x00\x00admin.$cmd\x00\x00\x00\x00\x00\x01\x00\x00\x00\x15\x00\x00\x00\x10whatsmyuri\x00\x01\x00\x00\x00\x00" 400 172 "-" "-"

Has anyone got an idea what is going wrong here? Thanks!
You're right, you need to use NGINX's stream module, by adding a stream section to your .conf file:

stream {
    server {
        listen <your incoming Mongo TCP port>;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server <localhost:your local Mongo TCP port>;
    }
}
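One detail worth spelling out (an addition, not from the answer above): the stream block lives at the top level of nginx.conf, alongside the http block rather than inside it, and needs nginx built with the stream module. A filled-in sketch with assumed ports:

stream {
    server {
        listen 27018;               # public-facing port
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass stream_mongo_backend;
    }

    upstream stream_mongo_backend {
        server 127.0.0.1:27017;     # local mongod
    }
}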
NGINX
31,853,755
32
Though I have applied the following settings, and even restarted the server:

# head /etc/security/limits.conf -n2
www-data soft nofile -1
www-data hard nofile -1

# /sbin/sysctl fs.file-max
fs.file-max = 201558

the open-files limit of the specific process is still 1024/4096:

# ps aux | grep nginx
root       983  0.0  0.0  85872  1348 ?        Ss   15:42   0:00 nginx: master process /usr/sbin/nginx
www-data   984  0.0  0.2  89780  6000 ?        S    15:42   0:00 nginx: worker process
www-data   985  0.0  0.2  89780  5472 ?        S    15:42   0:00 nginx: worker process
root      1247  0.0  0.0  11744   916 pts/0    S+   15:47   0:00 grep --color=auto nginx

# cat /proc/984/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             15845                15845                processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       15845                15845                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

I've tried all possible solutions from googling, but in vain. What setting did I miss?
On CentOS (tested on 7.x):

Create the file /etc/systemd/system/nginx.service.d/override.conf with the following contents:

[Service]
LimitNOFILE=65536

Reload the systemd daemon with:

systemctl daemon-reload

Add this to the Nginx config file:

worker_rlimit_nofile 16384;

(it has to be smaller than or equal to the LimitNOFILE set above)

And finally restart Nginx:

systemctl restart nginx

You can verify that it works with cat /proc/<nginx-pid>/limits.
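A concrete form of that verification step, assuming the pid file sits at /run/nginx.pid (the usual CentOS 7 location):

grep 'Max open files' /proc/$(cat /run/nginx.pid)/limits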
NGINX
27,849,331
32
I want to insert log points (io.write) inside my Lua code, which itself lives in the nginx configuration (using the HttpLuaModule for nginx). How do I do that? The access and error logs are not showing them.
When running under nginx, you should use ngx.log, e.g.:

ngx.log(ngx.STDERR, 'your message here')

For a working example, see http://linuxfiddle.net/f/77630edc-b851-487c-b2c8-aa6c9b858ebb

For documentation, see http://wiki.nginx.org/HttpLuaModule#ngx.log
NGINX
26,189,429
32
Hello, I have installed GitLab using https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#installation. Now I want to use nginx to serve content other than the GitLab application. How can I do this? Where are the config files that I need to modify? How can I point a directory like /var/www so that nginx knows it is the root for another app?

Update (forgot to mention): I'm running this under Red Hat 6.5. Debian/Ubuntu solutions welcome.
Here I am using:

- gitlab.example.com to serve gitlab.example.com over https
- example.com over http to serve content other than the GitLab application

GitLab installed from the deb package uses chef to provision nginx, so you have to modify the chef recipes and add a new vhost template into the chef cookbooks directory.

You can find all chef cookbooks here: /opt/gitlab/embedded/cookbooks/gitlab/

Open /opt/gitlab/embedded/cookbooks/gitlab/recipes/nginx.rb and change:

nginx_vars = node['gitlab']['nginx'].to_hash.merge({
  :gitlab_http_config => File.join(nginx_etc_dir, "gitlab-http.conf"),
})

to:

nginx_vars = node['gitlab']['nginx'].to_hash.merge({
  :gitlab_http_config => File.join(nginx_etc_dir, "gitlab-http.conf"),
  :examplecom_http_config => File.join(nginx_etc_dir, "examplecom-http.conf"),
})

Add this to the same file:

template nginx_vars[:examplecom_http_config] do
  source "nginx-examplecom-http.conf.erb"
  owner "root"
  group "root"
  mode "0644"
  variables(nginx_vars.merge(
    {
      :fqdn => "example.com",
      :port => 80,
    }
  ))
  notifies :restart, 'service[nginx]' if OmnibusHelper.should_notify?("nginx")
end

Then, in the template directory (/opt/gitlab/embedded/cookbooks/gitlab/templates/default), create the nginx vhost template file (nginx-examplecom-http.conf.erb) and add this there:

server {
  listen <%= @listen_address %>:<%= @port %>;
  server_name <%= @fqdn %>;

  root /var/www/example.com;

  access_log <%= @log_directory %>/examplecom_access.log;
  error_log <%= @log_directory %>/examplecom_error.log;

  location /var/www/example.com {
    # serve static files from defined root folder;.
    # @gitlab is a named location for the upstream fallback, see below
    try_files $uri $uri/index.html $uri.html;
  }

  error_page 502 /502.html;
}

You have to set nginx['redirect_http_to_https'] = false in /etc/gitlab/gitlab.rb:

external_url "https://gitlab.example.com"
gitlab_rails['gitlab_email_from'] = "[email protected]"
gitlab_rails['gitlab_support_email'] = "[email protected]"

nginx['redirect_http_to_https'] = false
nginx['ssl_certificate'] = "/etc/gitlab/ssl/ssl-unified.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/ssl.key"

gitlab_rails['gitlab_default_projects_limit'] = 10

Add include <%= @examplecom_http_config %>; into /opt/gitlab/embedded/cookbooks/gitlab/templates/default/nginx.conf.erb:

http {
  sendfile <%= @sendfile %>;
  tcp_nopush <%= @tcp_nopush %>;
  tcp_nodelay <%= @tcp_nodelay %>;

  keepalive_timeout <%= @keepalive_timeout %>;

  gzip <%= @gzip %>;
  gzip_http_version <%= @gzip_http_version %>;
  gzip_comp_level <%= @gzip_comp_level %>;
  gzip_proxied <%= @gzip_proxied %>;
  gzip_types <%= @gzip_types.join(' ') %>;

  include /opt/gitlab/embedded/conf/mime.types;
  include <%= @gitlab_http_config %>;
  include <%= @examplecom_http_config %>;
}

After all those changes, run:

gitlab-ctl reconfigure
gitlab-ctl restart
NGINX
24,090,624
32
I have a large URI and I am trying to configure Nginx to accept it. The URI parameters are 52,000 characters in length, with a size of 52 KB. I have tried accessing the URI without Nginx and it works fine, but when I use Nginx it gives me the error 414 (Request-URI Too Large). I have configured large_client_header_buffers and client_header_buffer_size in the http block, but it doesn't seem to be working:

client_header_buffer_size 5120k;
large_client_header_buffers 16 5120k;

Any help will be appreciated. Thank you.
I have found the solution. The problem was that there were multiple instances of nginx running. This was causing a conflict, and that's why large_client_header_buffers wasn't working. After killing all nginx instances I restarted nginx with the configuration:

client_header_buffer_size 64k;
large_client_header_buffers 4 64k;

Everything started working after that. Hope this helps anyone facing this problem.
NGINX
23,732,147
32
I'm trying to implement nginx rewrite rules for the following situation Request: http://192.168.64.76/Shep.ElicenseWeb/Public/OutputDocuments.ashx?uinz=12009718&iinbin=860610350635 Should be redirected to: http://localhost:82/Public/OutputDocuments.ashx?uinz=12009718&iinbin=860610350635 I tried this with no luck: location /Shep.ElicenseWeb/ { rewrite ^/Shep.ElicenseWeb/ /$1 last; proxy_pass http://localhost:82; } What is the correct way to perform such a rewrite for nginx ?
Your rewrite statement is wrong. The $1 on the right refers to a capture group (indicated by parentheses) in the matching pattern. Try: rewrite ^/Shep.ElicenseWeb/(.*) /$1 break;
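Putting the corrected rule back into the location block from the question would give roughly this sketch:

location /Shep.ElicenseWeb/ {
    # capture everything after the prefix and proxy it without the prefix
    rewrite ^/Shep.ElicenseWeb/(.*) /$1 break;
    proxy_pass http://localhost:82;
}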
NGINX
13,539,246
32
To respect the privacy of my users I'm trying to anonymize their IP addresses in nginx log files. One way to do this would be defining a custom log format, like so: log_format noip '127.0.0.1 - [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" $request_time'; This method has two downsides: I can't distinguish between two users and can't use geo location tools. The best thing would be to 'shorten' the IP address (87.12.23.55 would become 87.12.23.1). Is there a possibility to achieve this using nginx config scripting?
Even though there is already an accepted answer, the solution seems not to be valid. nginx has the log_format directive, which has a context of http. This means the log_format can only be validly set within the http {} section of the config file, NOT within the server sections! On the other hand, we have an if directive, which has a context of server and location. So we cannot use “if” and “log_format” together within a server section (which is what the accepted solution does). So the if is not helpful here; besides, if is evil (http://wiki.nginx.org/IfIsEvil)! We need something that works at http context, because only there can the log_format be defined in a valid way, and this is the only place outside of the server context, where our virtual hosts are defined… Luckily there is a map feature in nginx! map remaps some values into new values (accessible within variables, which can then be used in a log_format directive). And the good news: this also works with regular expressions. So let’s map our IPv4 and IPv6 addresses into anonymized addresses. This has to be done in 3 steps, since map cannot accumulate returned values; it can only return strings or variables, not a combination of both. So, first we grab the part of the IP we want to keep in the logfiles, the second map returns the part which symbolizes the anonymized part, and the 3rd map rule maps them together again. Here are the rules, which go into the http {} context: map $remote_addr $ip_anonym1 { default 0.0.0; "~(?P<ip>(\d+)\.(\d+)\.(\d+))\.\d+" $ip; "~(?P<ip>[^:]+:[^:]+):" $ip; } map $remote_addr $ip_anonym2 { default .0; "~(?P<ip>(\d+)\.(\d+)\.(\d+))\.\d+" .0; "~(?P<ip>[^:]+:[^:]+):" ::; } map $ip_anonym1$ip_anonym2 $ip_anonymized { default 0.0.0.0; "~(?P<ip>.*)" $ip; } log_format anonymized '$ip_anonymized - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; access_log /var/log/nginx/access.log anonymized; After adding this to your nginx.conf config file, remember to reload your nginx. Your log files should now contain anonymized IP addresses, if you are using the “anonymized” log format (this is the format parameter of the access_log directive).
NGINX
6,477,239
32
I am currently running into a problem trying to set up nginx:alpine in OpenShift. My build runs just fine but I am not able to deploy; permission is denied with the following error: 2019/01/25 06:30:54 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied) Now I know OpenShift is a bit tricky when it comes to permissions, as the container is running without root privileges and the UID is generated at runtime, which means it's not available in /etc/passwd. But the user is part of the group root. How this is supposed to be handled is described here: https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines I even went further and made the whole /var completely accessible (777) for testing purposes, but I still get the error. This is what my Dockerfile looks like: Dockerfile FROM nginx:alpine #Configure proxy settings ENV HTTP_PROXY=http://my.proxy:port ENV HTTPS_PROXY=http://my.proxy:port ENV HTTP_PROXY_AUTH=basic:*:username:password WORKDIR /app COPY . . # Install node.js RUN apk update && \ apk add nodejs npm python make curl g++ # Build Application RUN npm install RUN ./node_modules/@angular/cli/bin/ng build COPY ./dist/my-app /usr/share/nginx/html # Configure NGINX COPY ./openshift/nginx/nginx.conf /etc/nginx/nginx.conf COPY ./openshift/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf RUN chgrp -R root /var/cache/nginx /var/run /var/log/nginx && \ chmod -R 777 /var RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf EXPOSE 8080 CMD ["nginx", "-g", "daemon off;"] It's funny that this approach only seems to affect the alpine version of nginx. nginx:latest (based on debian, I think) has no issues, and the way to set it up described here https://torstenwalter.de/openshift/nginx/2017/08/04/nginx-on-openshift.html works. (But I am having some other issues with that build, so I switched to alpine.) Any ideas why this is still not working?
I was using OpenShift, with limited permissions, so I fixed this problem by using the following nginx image (rather than nginx:latest): FROM nginxinc/nginx-unprivileged
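One related detail worth double-checking against the image documentation (stated here as an assumption): the unprivileged image listens on port 8080 rather than 80 by default, so any custom server block should match, roughly:

server {
    listen 8080;
    server_name _;
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}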
NGINX
54,360,223
31
I have the following that config that works when I try <NodeIP>:30080 apiVersion: extensions/v1beta1 kind: Deployment metadata: name: app-deployment spec: replicas: 3 template: metadata: labels: name: app-node spec: containers: - name: app image: myregistry.net/repo/app:latest imagePullPolicy: Always ports: - containerPort: 8080 env: - name: NODE_ENV value: production --- apiVersion: v1 kind: Service metadata: name: app-service spec: selector: name: app-node ports: - protocol: TCP port: 80 targetPort: 8080 nodePort: 30080 type: NodePort I am trying to use an Ingress: apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress spec: rules: - host: myhost.com http: paths: - path: /app backend: serviceName: app-service servicePort: 80 myhost.com works with the nginx intro screen, but myhost.com/app gives 404 Not Found. Where is the issue in my setup? UPDATE: - path: / backend: serviceName: app-service servicePort: 80 If I do root path it works, but how come /app doesn't?
Your ingress definition creates rules that proxy traffic from the {path} to the {backend.serviceName}{path}. In your case, I believe the reason it's not working is that /app is proxied to app-service:80/app, but you intend to serve traffic at the / root. Try adding this annotation to your ingress resource: nginx.ingress.kubernetes.io/rewrite-target: / Source: https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite
NGINX
52,021,925
31
I have a simple Kubernetes ingress. I need to deny access to some critical paths like /admin. My ingress file is shown below. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-test spec: rules: - host: host.host.com http: paths: - path: /service-mapping backend: serviceName: /service-mapping servicePort: 9042 How can I deny access to a custom path with a Kubernetes ingress, using nginx annotations or other methods? I handled this issue with the annotations shown below. apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-configuration-snippet annotations: nginx.ingress.kubernetes.io/configuration-snippet: | server_tokens off; location DANGER-PATH { deny all; return 403; } spec: rules: - host: api.myhost.com http: paths: - backend: serviceName: bookapi-2 servicePort: 8080 path: PATH
You can use the server-snippet annotation. This seems like exactly what you want to achieve.
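For illustration, the snippet the annotation carries is plain nginx configuration injected into the generated server block, so a deny rule for the paths in the question could look roughly like this (the exact path is an assumption):

location /admin {
    # reject every request for this path
    deny all;
}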
NGINX
51,874,503
31
Google Cloud Network load balancer is a pass-through load balancer and not a proxy load balancer (https://cloud.google.com/compute/docs/load-balancing/network/). I cannot find any resources in general on pass-through LBs. Both HAProxy and Nginx seem to be proxy LBs. I'm guessing that a pass-through LB would redirect the clients directly to the servers. In what scenarios would it be beneficial? Are there any other types of load balancers besides pass-through and proxy?
It's hard to find resources for pass-through load balancing because everyone came up with a different name for it: pass-through, direct server return (DSR), direct routing, ... We'll call it pass-through here. Let me try to explain the thing: The IP packets are forwarded unmodified to the VM; there is no address or port translation. The VM thinks that the load balancer IP is one of its own IPs. In the specific case of Compute Engine Network Load Balancing https://cloud.google.com/compute/docs/load-balancing/: on Linux this is done by adding a route to this IP in the "local" routing table; Windows does it by adding a secondary IP on the network interface. The routing logic has to make sure that packets for a TCP connection or UDP "connection" are always sent to the same VM. For GCE network LB see here https://cloud.google.com/compute/docs/load-balancing/network/target-pools#sessionaffinity Regarding other load balancer types, there can't be a definitive list; here are a few examples: NAT. An example with iptables is here https://tipstricks.itmatrix.eu/use-iptables-to-load-balance-web-trafic/. TCP Proxy. In Google Cloud Platform you can use TCP Proxy Load Balancing https://cloud.google.com/compute/docs/load-balancing/tcp-ssl/tcp-proxy HTTP Proxy. In Google Cloud Platform you can use HTTP(s) Load Balancing https://cloud.google.com/compute/docs/load-balancing/http/ DNS, called "DNS forwarder". For example: dnsmasq http://www.thekelleys.org.uk/dnsmasq/doc.html, or bind in "forwarding" mode https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04 Database communication protocols. For example the MySQL Protocol with https://github.com/mysql/mysql-proxy SIP protocol. Big list of implementations here https://www.voip-info.org/wiki/view/Open+Source+VOIP+Software#SIPProxies As for the advantages of pass-through over other methods: Some applications won't work or need to be adapted if the addresses in the IP packets change, for example the SIP protocol. See the Wikipedia article for more on applications that don't play along well with NAT https://en.wikipedia.org/wiki/Network_address_translation#NAT_and_TCP/UDP. Here the advantage of pass-through is that it does not change the source and destination IPs. Note that there is a trick for a load balancer working at a higher layer to keep the IPs: the load balancer spoofs the IP of the client when connecting to the backends. As of this writing, no load balancing product uses this method in Compute Engine. If you need more control over the TCP connection from the client, for example to tune the TCP parameters, this is an advantage of pass-through or NAT over a TCP (or higher layer) proxy.
NGINX
43,205,917
31
How can I pass the nginx.conf configuration file to an nginx instance running inside a Kubernetes cluster?
You can create a ConfigMap object and then mount the values as files where you need them: apiVersion: v1 kind: ConfigMap metadata: name: nginx-config data: nginx.conf: | your config comes here like this other.conf: | second file contents And in you pod spec: spec: containers: - name: nginx image: nginx volumeMounts: - name: nginx-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf - name: other.conf mountPath: /etc/nginx/other.conf subPath: other.conf volumes: - name: nginx-config configMap: name: nginx-config (Take note of the duplication of the filename in mountPath and using the exact same subPath; same as bind mounting files.) For more information about ConfigMap see: https://kubernetes.io/docs/user-guide/configmap/ Note: A container using a ConfigMap as a subPath volume will not receive ConfigMap updates.
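For completeness, the "your config comes here like this" placeholder could be a minimal but valid nginx.conf body, for example this sketch:

user nginx;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    server {
        listen 80;
        location / {
            root /usr/share/nginx/html;
        }
    }
}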
NGINX
42,078,080
31
I have a web-service that runs long-running jobs (in the order of several hours). I am developing this using Flask, Gunicorn, and nginx. What I am thinking of doing is to have the route which takes a long time to complete call a function that creates a thread. The function will then return a guid back to the route, and the route will return a url (using the guid) that the user can use to check progress. I am making the thread a daemon (thread.daemon = True) so that the thread exits if my calling code exits (unexpectedly). Is this the correct approach to use? It works, but that doesn't mean that it is correct. my_thread = threading.Thread(target=self._run_audit, args=()) my_thread.daemon = True my_thread.start()
Celery and RQ are overengineering for a simple task. Take a look at these docs - https://docs.python.org/3/library/concurrent.futures.html Also check this example of how to run long-running jobs in the background for a Flask app - https://stackoverflow.com/a/39008301/5569578
NGINX
34,321,986
31
I have precisely the same problem described in this SO question and answer. The answer to that question is a nice work around but I don't understand the fundamental problem. Terminating SSL at the load balancer and using HTTP between the load balancer and web/app servers is very common. What piece of the stack is not respecting the X-Forwarded-Proto? Is it werkzeug? Flask? uwsgi? In my case I'm using an AWS ELB (which sets X-Forwarded-Proto) => Nginx (which forwards along X-Forwarded-Proto to uwsgi). But in the python app I have to subclass Flask Request as described in the question I referenced above. Since this is such a common deployment scenario, it seems that there should be a better solution. What am I missing?
You are missing the ProxyFix() middleware component. See the Flask Proxy Setups documentation. There is no need to subclass anything; simply add this middleware component to your WSGI stack: # Werkzeug 0.15 and newer from werkzeug.middleware.proxy_fix import ProxyFix from flask import Flask app = Flask(__name__) app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1) If you have Flask installed, you have Werkzeug too, but do pin the version to >=0.15 to get the updated version of ProxyFix (Flask 1.1.0 and newer already use that version). This component sets the WSGI scheme from the X-Forwarded-Proto header. Do read the Flask documentation I linked you to above about trusting headers and about customising the middleware to your specific situation. Above, I’ve configured it to only look at X-Forwarded-Proto, but the component can handle other X-Forwarded-* configurations too. The default is to trust one level of X-Forwarded-For, add x_for=0 to the keyword arguments if you want to disable this. Also note that the functionality of the ProxyFix middleware has been expanded quite significantly in Werkzeug 0.15; in addition to X-Forwarded-Proto, -For, and -Host, the X-Forwarded-Port and -Prefix headers are also consulted, all headers support multiple values.
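For the middleware to see the header at all, each proxy hop in front has to forward it. With nginx proxying over plain HTTP that is a one-line addition, sketched below; since the load balancer in the question already sets X-Forwarded-Proto, nginx should pass it through rather than overwrite it (if nginx speaks the uwsgi protocol to the app server instead, the equivalent would be a uwsgi_param):

location / {
    # pass through the scheme reported by the load balancer in front
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_pass http://127.0.0.1:8000;
}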
NGINX
23,347,387
31
I want any request like http://example.com/whatever/index.php to do a 301 redirect to http://example.com/whatever/. I tried adding: rewrite ^(.*/)index.php$ $1 permanent; location / { index index.php; } The problem here is that this rewrite gets run on the root url, which causes an infinite redirect loop. Edit: I need a general solution. http://example.com/ should serve the file webroot/index.php; http://example.com/index.php should 301 redirect to http://example.com/; http://example.com/a/index.php should 301 redirect to http://example.com/a/; http://example.com/a/ should serve the index.php script at webroot/a/index.php. Basically, I never want to show "index.php" in the address bar. I have old backlinks that I need to redirect to the canonical url.
Great question, with the solution similar to another one I've answered on ServerFault recently, although it's much simpler here, and you know exactly what you need. What you want here is to only perform the redirect when the user explicitly requests /index.php, but never redirect any of the internal requests that end up being served by the actual index.php script, as defined through the index directive. This should do just that, avoiding the loops: server { index index.php; if ($request_uri ~* "^(.*/)index\.php$") { return 301 $1; } location / { # ... } }
NGINX
21,687,288
31
I use nginx as a load balancer in front of several Tomcats. In my incoming requests, I have encoded query parameters. But when the request arrives at Tomcat, the parameters are decoded: incoming request to nginx: curl -i "http://server/1.1/json/T;cID=1234;pID=1200;rF=http%3A%2F%2Fwww.google.com%2F" incoming request to tomcat: curl -i "http://server/1.1/json/T;cID=1234;pID=1200;rF=http:/www.google.com/" I don't want my request parameters to be transformed, because in that case my Tomcat throws a 405 error. My nginx configuration is the following: upstream tracking { server front-01.server.com:8080; server front-02.server.com:8080; server front-03.server.com:8080; server front-04.server.com:8080; } server { listen 80; server_name tracking.server.com; access_log /var/log/nginx/tracking-access.log; error_log /var/log/nginx/tracking-error.log; location / { proxy_pass http://tracking/webapp; } } In my current Apache load balancer configuration, I have the AllowEncodedSlashes directive that preserves my encoded parameters: AllowEncodedSlashes NoDecode I need to move from Apache to nginx. My question is quite the opposite of this question: Avoid nginx escaping query parameters on proxy_pass
I finally found the solution: I need to pass the $request_uri variable: location / { proxy_pass http://tracking/webapp$request_uri; } That way, characters that were encoded in the original request will not be decoded, i.e. they will be passed as-is to the proxied server.
NGINX
20,496,963
31
I have a django app, python 2.7 with gunicorn and nginx. Nginx is throwing a 403 Forbidden Error, if I try to view anything in my static folder @: /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static nginx config(/etc/nginx/sites-enabled/myapp) contains: server { listen 80; server_name *.myapp.com; access_log /home/ubuntu/virtualenv/myapp/error/access.log; error_log /home/ubuntu/virtualenv/myapp/error/error.log warn; connection_pool_size 2048; fastcgi_buffer_size 4K; fastcgi_buffers 64 4k; root /home/ubuntu/virtualenv/myapp/myapp/homelaunch/; location /static/ { alias /home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/; } location / { proxy_pass http://127.0.0.1:8001; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"'; } } error.log contains: 2013/11/24 23:00:16 [error] 18243#0: *277 open() "/home/ubuntu/virtualenv/myapp/myapp/homelaunch/static/img/templated/home/img.png" failed (13: Permission denied), client: xx.xx.xxx.xxx, server: *.myapp.com, request: "GET /static/img/templated/home/img2.png HTTP/1.1", host: "myapp.com", referrer: "http://myapp.com/" access.log contains xx.xx.xx.xxx - - [24/Nov/2013:23:02:02 +0000] "GET /static/img/templated/base/animg.png HTTP/1.1" 403 141 "http://myapp.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:25.0) Gecko/20100101 Firefox/25.0" xx.xx.xx.xxx - - [24/Nov/2013:23:02:07 +0000] "-" 400 0 "-" "-" I tried just viewing say a .css file in /static/ and it throws an error like this in source: <html> <head><title>403 Forbidden</title></head> <body bgcolor="white"> <center><h1>403 Forbidden</h1></center> <hr><center>nginx/1.1.19</center> </body> </html>
macOS El Capitan: at the top of nginx.conf write user username group_name. My user name is Kamil, so I write: user Kamil staff; (the word 'staff' is very important on macOS). This does the trick. After that you don't need to change any permissions in your project folders and files.
NGINX
20,182,329
31
I want to use rewrite function in my nginx server. When I try "http://www.example.com/1234", I want to rewrite "http://www.example.com/v.php?id=1234" and want to get "http://www.example.com/1234" in browser. Here is nginx.conf file ... location ~ /[0-9]+ { rewrite "/([0-9]+)" http://www.example.com/v.php?id=$1 break; } ... When I try "http://www.example.com/1234" I want to ... url bar in browser : http://www.example.com/1234 real url : http://www.example.com/v.php?id=1234 but I'm in trouble ... url bar in browser : http://www.example.com/v.php?id=1234 real url : http://www.example.com/v.php?id=1234
Reference: http://wiki.nginx.org/HttpRewriteModule#rewrite If the replacement string begins with http:// then the client will be redirected, and any further >rewrite directives are terminated. So remove the http:// part and it should work: location ~ /[0-9]+ { rewrite "/([0-9]+)" /v.php?id=$1 break; }
NGINX
15,322,826
31
I'm rewriting URLs in nginx after a relaunch. In the old site I had query parameters in the URL to filter stuff e.g. http://www.example.com/mypage.php?type=4 The new page doesn't have these kind of parameters. I want to remove them and rewrite the URLs to the main page, so that I get: http://www.example.com/mypage/ My rewrite rule in nginx is: location ^~ /mypage.php { rewrite ^/mypage.php$ http://www.example.com/mypage permanent; } But with this rule the parameter is still appended. I thought the $ would stop nginx from processing further values... any ideas? All other questions deal with how to add parameters - I just want to remove mine :)
Had a similar problem, after a lot of searching the answer presented itself in the rewrite docs. If you specify a ? at the end of a rewrite then Nginx will drop the original $args (arguments) So for your example, this would do the trick: location ^~ /mypage.php { rewrite ^/mypage.php$ http://www.example.com/mypage? permanent; }
NGINX
9,641,603
31
I have a django application running on http://localhost:12345. I'd like users to access it via the url http://my.server.com/myapp. I use nginx to reverse proxy to it like the following: ... ... server_name my.server.com; location /myapp { rewrite /myapp(.*) $1 break; ... ... # proxy param proxy_pass http://localhost:12345; } ... ... The question is, when configured like the above, how to make the urls in my response pages have a prefix of "/myapp" so that nginx can direct them correctly to myapp. E.g., the urls in a page like "/foo/bar" ought to be changed to "/myapp/foo/bar" to allow nginx to proxy to myapp. What is the right nginx configuration to use to achieve this? I can use settings variables of django to specify the root url prefix, but that's not flexible to my mind, since the variable has to be modified according to different nginx configurations (say one day nginx may change the suburl from "/myapp" to "/anotherapp").
As the prefix is set in nginx, the web server that hosts the Django app has no way of knowing the URL prefix. As orzel said, if you used apache+mod_wsgi or even nginx+gunicorn/uwsgi (with some additional configuration), you could use the WSGIScriptAlias value, which is automatically read by Django. When I need to use a URL prefix, I generally put it myself in my root urls.py, where I have only one line, prefixed by the prefix and including another urls.py: (r'^myapp/', include('myapp.urls')), But I guess this has the same bottleneck as setting a prefix in settings.py: you have redundant configuration in nginx and Django. You need to do something in the server that hosts your Django app at :12345. You could set the prefix there, and pass it to Django using the WSGIScriptAlias or its equivalent outside mod_wsgi. I cannot give more information as I don't know how your Django app is run. Also, maybe you should consider running your Django app directly from Django, using uWSGI or gunicorn. To pass the prefix to Django from the webserver, you can use this: proxy_set_header SCRIPT_NAME /myapp; More information here
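Putting that together on the nginx side, a sketch of the proxied location (assuming the app server at :12345 honors the SCRIPT_NAME header, as gunicorn and uWSGI do):

location /myapp {
    # tell Django which prefix it is mounted under
    proxy_set_header SCRIPT_NAME /myapp;
    proxy_set_header Host $host;
    # strip the prefix before handing the request to the app server
    rewrite /myapp(.*) $1 break;
    proxy_pass http://localhost:12345;
}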
NGINX
8,133,063
31
Rails 3.1 has a convenient system which can compress files into .gz files. However, what I've done instead is move all the asset files that are created with assets:precompile to a static webserver. This all works, but how can I get nginx to serve the .gz files normally?
1) Ensure you have nginx > 1.2.x (for proper header modifications) and compile with the --with-http_gzip_static_module option 2) Enable the gzip on option (to serve back-end responses with a gzip header) 3) Set up the assets location with gzip_static on (to serve all.css.gz, all.js.gz files directly) 4) Prevent ETag generation and Last-Modified calculation for assets 5) Turn on the right Cache-Control to cache SSL-served static assets, otherwise they will expire once the browser is closed location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ { gzip_static on; expires max; add_header Cache-Control public; add_header Last-Modified ""; add_header ETag ""; } If you would like the full nginx configuration, you can see this gist on GitHub. open_file_cache helps you cache: open file descriptors, their sizes, modification times and directory lookups, which is helpful under high load on the file system. UPDATE: If you are living on the edge, turn on SPDY to boost the SSL connection.
NGINX
6,952,639
31
Is it possible to echo each time the loop is executed? For example: foreach(range(1,9) as $n){ echo $n."\n"; sleep(1); } Instead of printing everything when the loop is finished, I'd like to see each result printed as it is produced.
The easiest way to eliminate nginx's buffering is by emitting a header: header('X-Accel-Buffering: no'); This eliminates both proxy_buffering and (if you have nginx >= 1.5.6), fastcgi_buffering. The fastcgi bit is crucial if you're using php-fpm. The header is also far more convenient to do on an as-needed basis. Docs on X-Accel-Buffering Docs on fastcgi_buffering
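The same effect can also be configured from the nginx side instead of from PHP, per location; a sketch for a php-fpm setup (fastcgi_buffering requires nginx >= 1.5.6):

location ~ \.php$ {
    # stream the FastCGI response instead of buffering it
    fastcgi_buffering off;
    # gzip also buffers, so turn it off for streamed responses
    gzip off;
    # ... the usual fastcgi_pass / include fastcgi_params lines
}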
NGINX
4,870,697
31
From what I understand, Node.js doesn't need NginX to work as an http server (or a websockets server or any server for that matter), but I keep reading about how to use NginX instead of Node.js's internal server and can't find a good reason to go that way.
Here http://developer.yahoo.com/yui/theater/video.php?v=dahl-node the Node.js author says that Node.js is still in development and so there may be security issues that NginX simply hides. On the other hand, in case of heavy traffic, NginX will be able to split the job between many running Node.js servers.
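The load-splitting arrangement mentioned here looks roughly like this in nginx (the ports are assumptions for locally running Node.js instances):

upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://node_backend;
    }
}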
NGINX
3,186,333
31
I am using file_put_contents to create a file. My php process is running in a group with permissions to write to the directory. When file_put_contents is called, however, the resulting file does not have group write permissions (it creates just fine the first time). This means that if I try to overwrite the file it fails because of a lack of permissions. Is there a way to create the file with group write permissions?
Example 1 (set file-permissions to read-write for owner and group, and read for others): file_put_contents($filename, $data); chmod($filename, 0664); Example 2 (make file writable by group without changing other permissions): file_put_contents($filename, $data); chmod($filename, fileperms($filename) | 16); Example 3 (make file writable by everyone without changing other permissions): file_put_contents($filename, $data); chmod($filename, fileperms($filename) | 128 + 16 + 2); 128, 16, 2 are for writable for owner, group and other respectively.
NGINX
1,240,034
31
I am trying to configure an nginx server for my website. I am using the following code to configure my server. It works if I add default_server to my www.fastenglishacademy.fr (443) server block. But in that case, all my subdomains also bring up the content of www.fastenglishacademy.fr. And if I remove the default_server, I get the following error: nginx: [emerg] no "ssl_certificate" is defined for the "listen ... ssl" directive in /etc/nginx/sites-enabled/fastenglishacademy.fr.conf:14 nginx: configuration file /etc/nginx/nginx.conf test failed My nginx configuration: server { listen 80; listen [::]:80; server_name fastenglishacademy.fr; return 301 https://www.fastenglishacademy.fr$request_uri; } server { listen 80; listen [::]:80; server_name www.fastenglishacademy.fr; return 301 https://www.fastenglishacademy.fr$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name fastenglishacademy.fr; return 301 https://www.fastenglishacademy.fr$request_uri; } server { listen 443 ssl http2; listen [::]:443 ssl http2; root /media/fea/www/fastenglishacademy.com; index index.html index.htm index.nginx-debian.html; server_name www.fastenglishacademy.fr; location / { etag on; try_files $uri$args $uri$args/ /index.html; } location ~* \.(jpg|jpeg|png|gif|ico|ttf|woff2|woff|svg)$ { expires 365d; } location ~* \.(css|js)$ { expires 30d; } location ~* \.(pdf)$ { expires 15d; } #WARNING: Please read before adding the lines below! add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; # SSL Certificates ssl_certificate /path/to/fullchain.pem; ssl_certificate_key /path/to/privkey.pem; ssl_trusted_certificate /path/to/chain.pem; } My links: https://www.fastenglishacademy.fr/ https://api.fastenglishacademy.fr/
Your server section is missing ssl_certificate and ssl_certificate_key declarations. You need to have a .crt and a .key file to run with ssl. It should look like: server { listen 80; listen 443 default_server ssl; ssl_certificate /etc/nginx/certs/default.crt; ssl_certificate_key /etc/nginx/certs/default.key; ... other declarations }
NGINX
56,668,320
30
I am trying to start an NGINX server within a docker container configured through docker-compose. The catch is, however, that I would like to substitute an environment variable inside of the http section, specifically within the "upstream" block. It would be awesome to have this working, because I have several other containers that are all configured through environment variables, and I have about 5 environments that need to be running at any given time. I have tried using "envsubst" (as suggested by the official NGINX docs), perl_set, and set_by_lua, however none of them appear to be working. Below is the NGINX config, as it is after my most recent trial user nginx; worker_processes 1; env NGINXPROXY; load_module modules/ngx_http_perl_module.so; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { perl_set $nginxproxy 'sub { return $ENV{"NGINXPROXY"}; }'; upstream api-upstream { server ${nginxproxy}; } include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile off; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } Below is the NGINX dockerfile # build stage FROM node:latest WORKDIR /app COPY ./ /app RUN npm install RUN npm run build # production stage FROM nginx:1.17.0-perl COPY --from=0 /app/dist /usr/share/nginx/html RUN apt-get update && apt-get install -y gettext-base RUN rm /etc/nginx/conf.d/default.conf RUN rm /etc/nginx/nginx.conf COPY default.conf /etc/nginx/conf.d COPY nginx.conf /etc/nginx RUN mkdir /certs EXPOSE 80 443 CMD ["nginx", "-g", "daemon off;"] Below is the section of the docker-compose.yml for the NGINX server (with names and IPs changed). The envsubst command is intentionally commented out at this point in my troubleshooting. front-end: environment: - NGINXPROXY=172.31.67.100:9300 build: http://gitaccount:[email protected]/group/front-end.git#develop container_name: qa_front_end image: qa-front-end restart: always networks: qa_network: ipv4_address: 172.28.0.215 ports: - "9080:80" # command: /bin/bash -c "envsubst '$$NGINXPROXY' < /etc/nginx/nginx.conf > /etc/nginx/nginx.conf && nginx -g 'daemon off;'" What appears to be happening is when I reference the $nginxproxy variable in the upstream block (right after "server"), I get output that makes it look like it's referencing the string literal "$nginxproxy" rather than substituting the value of the variable. qa3_front_end | 2019/06/18 12:35:36 [emerg] 1#1: host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19 qa3_front_end | nginx: [emerg] host not found in upstream "${nginx_upstream}" in /etc/nginx/nginx.conf:19 qa3_front_end exited with code 1 When I attempt to use envsubst, I get an error that makes it sound like the command messed with the format of the nginx.conf file qa3_front_end | 2019/06/18 12:49:02 [emerg] 1#1: no "events" section in configuration qa3_front_end | nginx: [emerg] no "events" section in configuration qa3_front_end exited with code 1 I'm pretty stuck, so thanks in advance for your help.
Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup: # file: docker/nginx/templates/default.conf.conf upstream api-upstream { server ${API_HOST}; } # file: docker-compose.yml services: nginx: image: nginx:1.19-alpine volumes: - "./docker/nginx/templates:/etc/nginx/templates/" environment: NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf" API_HOST: api.example.com I'm going off script a little from the example in the documentation. Note the extra .conf extension on the template file - this is not a typo. In the docs for the nginx image it is suggested to name the file, for example, default.conf.template. Upon startup, a script will take that file, substitute the environment variables, and then output the file to /etc/nginx/conf.d/ with the original file name, dropping the .template suffix. By default that suffix is .template, but this breaks syntax highlighting unless you configure your editor. Instead, I specified .conf as the template suffix. If you only name your file default.conf the result will be a file named /etc/nginx/conf.d/default and your site won't be served as expected.
NGINX
56,649,582
30
My configuration file has a server directive block that begins with... server { server_name www.example1.com www.example2.com www.example3.com; ...in order to allow the site to be accessed with different domain names. However, PHP's $_SERVER['SERVER_NAME'] always returns the first entry of server_name, in this case http://www.example1.com. So I have no way, from the PHP code, of knowing which domain the user used to access the site. Is there any way to tell nginx/fastcgi to pass the real domain name used to access the site? The only solution I've found so far is to repeat the entire server block for each domain with a distinct server_name entry, but obviously I'm looking for a better one.
Set SERVER_NAME to use $host in your fastcgi_params configuration. fastcgi_param SERVER_NAME $host; Source: http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_param
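In context, that line lives in the stock fastcgi_params file (often /etc/nginx/fastcgi_params), which each PHP location then includes; a sketch (the socket path is an assumption):

# in fastcgi_params, change the SERVER_NAME line to:
fastcgi_param  SERVER_NAME  $host;

# any PHP location that includes the file then picks it up:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;
}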
NGINX
31,479,341
30
I found strange behaviour concerning PHP and the /tmp folder. PHP uses another folder when it works with /tmp. PHP 5.6.7, nginx, php-fpm. I execute the same script in two ways: via browser and via shell. But when it is launched via browser, the file is not in the real /tmp folder: <?php $name = date("His"); echo "File /tmp/$name.txt\n"; shell_exec('echo "123" > /tmp/'.$name.'.txt'); var_dump(file_exists('/tmp/'.$name.'.txt')); var_dump(shell_exec('cat /etc/*release | tail -n 1')); php -f script.php File /tmp/185617.txt bool(true) string(38) "CentOS Linux release 7.0.1406 (Core) Where is the file? In /tmp $ find / -name 185617.txt /tmp/185617.txt If I access it via http://myserver.ru/script.php I get File /tmp/185212.txt bool(true) string(38) "CentOS Linux release 7.0.1406 (Core) But where is the file? $ find / -name 185212.txt /tmp/systemd-private-nABCDE/tmp/185212.txt Why does PHP think that /tmp should be in /tmp/systemd-private-nABCDE/tmp?
Because systemd is configured to give nginx a private /tmp. If you must use the system /tmp instead for some reason then you will need to modify the .service file to read "PrivateTmp=no".
NGINX
30,444,914
30
I know that there are a lot of questions like this on SO, but none of them appear to answer my particular issue. I understand that Django's ALLOWED_HOSTS value is blocking any requests to port 80 at my IP that do not come with the appropriate Host: value, and that when a request comes in that doesn't have the right value, Django is dropping me an email. I also know about the slick Nginx hack to make this problem go away, but I'm trying to understand the nature of one such request and determine whether this is a security issue I need to worry about. Requests like these make sense: [Django] ERROR: Invalid HTTP_HOST header: '203.0.113.1'. You may need to add u'203.0.113.1' to ALLOWED_HOSTS. But this one kind of freaks me out: [Django] ERROR: Invalid HTTP_HOST header: u'/run/my_project_name/gunicorn.sock:'. Doesn't this mean that the requestor sent Host: /run/my_project_name/gunicorn.sock to the server? If so, how do they have the path name for my .sock file? Is my server somehow leaking this information? Additionally, as I'm running Django 1.6.5, I don't understand why I'm receiving these emails at all, as this ticket has been marked fixed for some time now. Can someone shed some light on what I'm missing? This is my settings.LOGGING variable: { 'disable_existing_loggers': False, 'filters': { 'require_debug_false': {'()': 'django.utils.log.RequireDebugFalse'} }, 'formatters': { 'simple': {'format': '%(levelname)s %(message)s'}, 'verbose': {'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'} }, 'handlers': { 'console': { 'class': 'logging.StreamHandler', 'formatter': 'verbose', 'level': 'DEBUG' }, 'mail_admins': { 'class': 'django.utils.log.AdminEmailHandler', 'filters': ['require_debug_false'], 'level': 'ERROR' } }, 'loggers': { 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True }, 'my_project_name': { 'handlers': ['console'], 'level': 'DEBUG' } }, 'version': 1 } And here's my nginx config: worker_processes 1; pid /run/nginx.pid; error_log /var/log/myprojectname/nginx.error.log debug; events { } http { include mime.types; default_type application/octet-stream; access_log /var/log/myprojectname/nginx.access.log combined; sendfile on; gzip on; gzip_http_version 1.0; gzip_proxied any; gzip_min_length 500; gzip_disable "MSIE [1-6]\."; gzip_types text/plain text/html text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml; upstream app_server { server unix:/run/myprojectname/gunicorn.sock fail_timeout=0; } server { listen 80 default; listen [::]:80 default; client_max_body_size 4G; server_name myprojectname.mydomain.tld; keepalive_timeout 5; root /var/www/myprojectname; location / { try_files $uri @proxy_to_app; } location @proxy_to_app { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; proxy_pass http://app_server; } error_page 500 502 503 504 /500.html; location = /500.html { root /tmp; } } } Lastly, I found this in my nginx access log. It corresponds to the emails coming through that complain about /run/myprojectname/gunicorn.sock being an invalid HTTP_HOST header. This was all on one line of course: 2014/09/05 20:38:56 [info] 12501#0: *513 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: 54.84.192.68, server: myproject.mydomain.tld, request: "HEAD / HTTP/1.0", upstream: "http://unix:/run/myprojectname/gunicorn.sock:/" Obviously I still don't know what this means though :-( Update #1: Added my settings.LOGGING Update #2: Added my nginx config Update #3: Added an interesting line from my nginx log Update #4: Updated my nginx config
Seems like proxy_set_header Host $http_host should be changed to proxy_set_header Host $host and server_name should be set appropriately to the address used to access the server. If you want it to catch all, you should use server_name www.domainname.com "" (doc here). I'm not certain, but I think what you're seeing happens if the client doesn't send a Host: header. Since nginx receives no Host: header, no Host: header gets passed up to gunicorn. At this point, I think gunicorn fills in the Host: as the socket path and tells Django this, since that's the connection used. Using $host and setting the server_name in nginx should ensure the Host: is correctly passed to gunicorn and resolve this problem. As for the email, according to the commit in the ticket you linked, it looks like emails are still being sent for disallowed hosts. Added to the doc was also a suggested a way to disable the emails being sent: 'loggers': { 'django.security.DisallowedHost': { 'handlers': ['null'], 'propagate': False, } },
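Applied to the config from the question, the relevant pieces would look roughly like this sketch:

server {
    listen 80 default;
    # the empty name also catches requests that arrive without a Host header
    server_name myprojectname.mydomain.tld "";
    location @proxy_to_app {
        proxy_set_header Host $host;
        proxy_pass http://app_server;
    }
}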
NGINX
25,370,868
30
Recently I have started using NGINX. I found that we can use it as a reverse proxy, serving static content itself, which can reduce load time. I have a Tomcat/JBoss server on my local machine and I want to put NGINX in front of it so that static content will be served by NGINX and everything else by Tomcat/JBoss. My Tomcat/JBoss application is running on http://localhost:8081/Test; my NGINX configuration worked properly, but it is not able to load css/js/jpg files. Here is my war structure where the static content lives: Test.war TEST | |--->Resources | |------->CSS | | |----> style.css | | | |-------->Images | |----> a.jpg | |----> b.jpg | |--->WEB-INF | |----->Web.xml | |----->spring-servlet.xml | |--->JSP |---->login.jsp I think the problem is because of absolute paths, so should I copy the resources folder into some folder in NGINX and configure NGINX to pick the files from its own directory rather than going to Tomcat/JBoss? I am new, so I don't have any idea how to do this; can anyone please help me with this? This is my conf file for NGINX (Windows): server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { proxy_pass http://127.0.0.1:8081/Test/; }
You can add a location with a regexp: server { listen 80; server_name localhost; location ~* \.(js|jpg|png|css)$ { root path/to/tomcat/document/root/Test/; expires 30d; } location / { proxy_pass http://127.0.0.1:8081/Test/; } }
NGINX
23,776,660
30
I am reading nginx beginner's tutorial, on the section Serving Static Content they have http { server { } } but when I add an http block I get the error [emerg] "http" directive is not allowed here … When I remove the http block and change the conf file to this, it works fine: server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; root /var/example.com/html; index index.html index.htm; # make site accessible from http://localhost/ server_name localhost location / { try_files $uri $uri/ /index.html; } I suspect that I am missing something simple, but why do they use http to serve static files?
You're doing fine. I guess you are editing /etc/nginx/sites-enabled/default (or the linked file at /etc/nginx/sites-available/default). This is the standard nginx setup. It is configured by /etc/nginx/nginx.conf, which contains the http {} statement. This in turn contains an "include /etc/nginx/sites-enabled/*" line to include your file above with the server {} clause in it. Note that if you are using an editor that creates a backup file, you must modify the include statement to exclude the backup files, or you will get some "interesting" errors! My line is include /etc/nginx/sites-enabled/*[a-zA-Z] which will not pick up backup files ending in a tilde. YMMV.
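In other words, the standard layout nests the server block inside an http block that is defined once in /etc/nginx/nginx.conf, roughly:

http {
    # ... global http settings ...
    include /etc/nginx/sites-enabled/*;
}
# the included files then contain bare server { ... } blocks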
NGINX
20,639,568
30
tl;dr version How do you setup nginx as a reverse proxy for example.com to a locally running tomcat webapp at http://127.0.0.1:8080/blah/ without breaking the pageContext? Tomcat Setup There exists a tomcat 7 webapp, blah, deployed with a .war file and sitting in /var/lib/tomcat7/webapps/blah/. tomcat is running locally and accessible at http://127.0.0.1:8080. Multiple webapps are running and can be accessed at: http://127.0.0.1:8080/blah/ http://127.0.0.1:8080/foo/ http://127.0.0.1:8080/bar/ Port 8080 is blocked externally by the firewall. Nginx Setup nginx is running on the server as the gatekeeper. One site is enabled to access all of the local tomcat webapps mentioned above. This works fine for example.com: server { listen 80; server_name example.com; root /var/lib/tomcat/webapps/ROOT/; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080/; } } Question: how to configure an additional site to access blah directly? Under /etc/nginx/sites-enabled/ an additional site file is setup to route http://blah.com to http://127.0.0.1:8080/blah/ but there are issues. server { listen 80; server_name blah.com *.blah.com; root /var/lib/tomcat/webapps/blah/; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080/blah/; } } This setup adds an extra blah to the context path, creating a 404 page because path /blah/blah/ doesn't exist, which makes sense. Is there a simple way within nginx to pass blah.com to the webapp root? Within the webapp, I'm using ${pageContext.request.contextPath}/path for relative paths to webapp resource. I thought this was the correct way to handle internal tomcat paths but could this be part of the problem? I believe this is why I'm getting the extra blah in the route, creating the 404 page. <%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%> <!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <meta http-equiv="refresh" content="0; url=${pageContext.request.contextPath}/form"> <script type="text/javascript"> window.location.href = "${pageContext.request.contextPath}/form" </script> <title>Load BLAH</title> </head> <body> <p>If you are not redirected automatically, follow this <a href="${pageContext.request.contextPath}/form">link</a>.</p> </body> </html> This page is hit alright but the redirect goes to /blah/blah/form instead of /blah/form where the servlet actually exists. I've also tried other approaches including pointing blah.com to the tomcat root itself. This works in the sense that you can get to blah via blah.com/blah/ but that's not really what we want. Additionally, it is completely acceptable (and desired) to still be able to access blah via example.com/blah/. Obviously, this is for an nginx novice but help me (and future novices) clear this up because the clear solution is eluding me and the nginx docs use the help too.
One possible solution is to create a virtual host within Tomcat and set blah as the ROOT app on the new host. nginx will still pass requests to Tomcat on localhost, including the requested host header, and Tomcat will handle the rest with the correct context. Set up the virtual host Add a Host entry to the Engine portion of $CATALINA_HOME/conf/server.xml <Engine name="Catalina" defaultHost="localhost"> <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"> </Host> <Host name="blah.com" appBase="blahApps" unpackWARs="true" autoDeploy="true"> <Alias>www.blah.com</Alias> </Host> </Engine> Create the appBase directory $CATALINA_HOME/blahApps/ Configure the context with $CATALINA_HOME/blahApps/ROOT/META-INF/context.xml <Context path="/" antiResourceLocking="false" /> Deploy blah to $CATALINA_HOME/blahApps/ROOT. This may be as simple as changing blah.war to ROOT.war. Make sure nginx is still copacetic: just proxy requests for blah.com to localhost and Tomcat will take care of the rest: server { listen 80; server_name blah.com www.blah.com; location / { proxy_pass http://127.0.0.1:8080/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } }
NGINX
19,866,203
30
I've been using nginx for a few months without issue, but after upgrading to Mac OS X 10.9 Mavericks, when trying to start nginx I get this: nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use) nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use) nginx: [emerg] still could not bind() I tried to follow these directions, but I'm not having much luck as my outputs seem a little different. The output of: ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' is: PID PPID %CPU VSZ WCHAN COMMAND 15015 12765 0.0 2432784 - egrep (nginx|PID) I've tried killing the process using that PID, but it never seems to die... Any ideas on how to get nginx running again? Any help is greatly appreciated!!
Your ps ... | egrep command is finding itself, not an instance of nginx (look at the "COMMAND" column). Since port 80 is in use, it's likely some other program (maybe the Apache that comes with the OS?) is running and grabbing it. To find out, run: sudo lsof -i:80 If it's the system Apache ("httpd") program, you can probably shut it down with: sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist If that doesn't do it, more info will be needed to figure out what's grabbing port 80 and how it's getting started.
NGINX
19,720,237
30
I am really new to sysadmin stuff, and have only provisioned a VPS with nginx (serving the static files) and gunicorn as the web server. I have lately been reading about other related tools. I came to know about: nginx: high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server; haproxy: high performance load balancer; varnish: caching HTTP reverse proxy; gunicorn: python WSGI HTTP server; uwsgi: another python WSGI server. I have been reading about all the above 5 tools and I have confused myself as to which one is used for what purpose. Could someone please explain to me in layman's terms what each tool is used for, how they are used together, and which specific concern each one addresses?
Let's say you plan to host a few websites on your new VPS. Let's look at the tools you might need for each site. HTTP Servers Website 'Alpha' just consists of some pure HTML, CSS and Javascript. The content is static. When someone visits website Alpha, their browser will issue an HTTP request. You have configured (via DNS and name server configuration) that request to be directed to the IP address of your VPS. Now you need your VPS to be able to accept that HTTP request, decide what to do with it, and issue a response that the visitor's browser can understand. You need an HTTP server, such as Apache httpd or NGINX, and let's say you do some research and eventually decide on NGINX. Application Servers Website 'Beta' is dynamic, written using the Django Web Framework. WSGI is a protocol that describes the interface between a Python application (the django app) and an application server. So what you need now is a WSGI app server, which will be able to understand web requests, make appropriate 'calls' to the application's various objects, and return the results. You have many options here, including gunicorn and uWSGI. Let's say you do some research and eventually decide on uWSGI. uWSGI can accept and handle HTTPS requests for static content as well, so if you wanted to you could have website Alpha served entirely by NGINX and website Beta served entirely by uWSGI. And that would be that. Reverse Proxy Servers But uWSGI has poor performance in dealing with static content, so you would rather use NGINX for static content like images, even on website Beta. But then something would have to distinguish between requests and send them to the right place. Is that possible? It turns out NGINX is not just an HTTP server but also a reverse proxy server: it is capable of redirecting incoming requests to another place, like your uWSGI application server, or many other places, collecting the response(s) and sending them back to the original requester. Awesome! So you configure all incoming requests to go to NGINX, which will serve up static content or, when required, redirect it to the app server. Load Balancing with multiple web servers You are also hosting Website Gamma, which is a blog that is popular internationally and receives a ton of traffic. For Gamma you decide to set up multiple web servers. All incoming requests are going to your original VPS with NGINX, and you configure NGINX to redirect the request to one of several other web servers in round-robin fashion, and return the response to the original requester. HAProxy is a server that specializes in balancing loads for high traffic sites. In this case, you were able to use NGINX to handle traffic for site Gamma. In other scenarios, one may choose to set up a high-availability cluster: e.g., send all requests to a server like HAProxy, which intelligently redirects traffic to a cluster of nginx servers similar to your original VPS. Cache Server Website Gamma exceeded the capacity of your VPS due to the sheer volume of traffic. Let's say you instead hosted website Delta, and the reason your web server is unable to handle Delta is due to a popular feature that is very content-heavy. A cache server is able to understand what media content is being frequently requested and store this content differently, such that it can be more quickly served. This is achieved by reducing disk IO operations; the popular content can be stored in memory or virtual memory instead.
You might decide to combine your existing NGINX stack with a technology like Varnish or Memcached to achieve this type of optimization and serve website Delta more effectively.
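To make the reverse-proxy arrangement for website Beta concrete, here is a sketch of the nginx side (the paths, socket, and server name are assumptions):

server {
    listen 80;
    server_name beta.example.com;

    # static files served directly by nginx
    location /static/ {
        root /var/www/beta;
    }

    # everything else handed to the uWSGI application server
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/beta.sock;
    }
}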
NGINX
13,210,636
30
I'm trying to set up Nginx on my Windows development environment. I can't find how to create something similar to "sites-enabled" on Linux where Nginx would look for (links to) active virtual host configurations. Is there a way to do something similar with a directory with shortcuts to the actual configuration files and Nginx scanning that directory? Or is there another way to hook up a virtual host configuration other than copying the host configuration to nginx.conf?
In Windows you have to give the full path of the directory where the config files are located. There are two files to update: nginx.conf, which tells nginx where to find web sites, and localhost.conf, which is the configuration for a web site. It is assumed that nginx is installed in C:\nginx. If the installation directory is at another path, you will have to update that path accordingly, wherever it appears in the following two configuration files. nginx.conf Location: C:\nginx\conf worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #to read external configuration. include "C:/nginx/conf/sites-enabled/*.conf"; } localhost.conf Location: C:\nginx\conf\sites-enabled server { listen 80; server_name localhost; location / { root html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } }
NGINX
13,070,986
30
Sometimes I get an issue with error 502 when the httpd service is down, but the website comes back after only 1 minute. I need to customize the 502 message to ask the user to wait for 1 minute and then refresh the page, or to embed JavaScript or a meta refresh tag to auto-refresh the page after 1 minute. The page's URL must stay the same for the refresh to take effect. Note that I know about the custom error page redirect, e.g. location = /502.html, but that type of custom error page redirects the user to another page; if they refresh the page they will get the error page again. Any idea will be very helpful. EDIT UPDATE for more detail 10/06/2012. My nginx config: user nobody; # no need for more workers in the proxy mode worker_processes 24; error_log /var/log/nginx/error.log crit; #worker_rlimit_nofile 20480; events { worker_connections 109024; # increase for busier servers use epoll; # you should use epoll here for Linux kernels 2.6.x } http { server_name_in_redirect off; server_names_hash_max_size 2048; server_names_hash_bucket_size 256; include mime.types; default_type application/octet-stream; server_tokens off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 20; ignore_invalid_headers on; client_header_timeout 50m; client_body_timeout 50m; send_timeout 20m; reset_timedout_connection on; connection_pool_size 2048; client_header_buffer_size 256k; large_client_header_buffers 4 256k; client_max_body_size 20M; client_body_buffer_size 300k; request_pool_size 32k; output_buffers 14 32k; postpone_output 1460; proxy_temp_path /tmp/nginx_proxy/; proxy_cache_path /dev/shm/nginx levels=1:2 keys_zone=wwwcache:45m inactive=5m max_size=1000m; client_body_in_file_only off; access_log off; open_log_file_cache off; #log_format bytes_log "$msec $bytes_sent ."; include "/etc/nginx/vhosts/*"; } and vhost config: server { # error_log /var/log/nginx/vhost-error_log warn; listen 123.30.137.66:80; server_name xaluan.net mtvvui.com www.daiduong.com.au www.xaluan.net xaluan.com www.xaluan.com www.daiduongrestaurant.net veryzoo.com www.mtvvui.com www.xaluan.org www.veryzoo.com daiduongrestaurant.net xaluan.org daiduong.com.au; # access_log /usr/local/apache/domlogs/xaluan.net combined; root /home/xaluano/public_html; location / { if ($http_cache_control ~ "max-age=0") { set $bypass 1; } location ~.*\.(3gp|gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|htm|txt|js|css|exe|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso)$ { #root /home/xaluano/public_html; #proxy_cache wwwcache; #proxy_cache_valid 200 15m; #proxy_cache_bypass $bypass; expires 1d; #try_files $uri @backend; proxy_pass http://123.30.137.66:8081; } error_page 405 = @backend; add_header X-Cache "HIT from Backend"; #proxy_set_header Server "Caching-Proxy"; #add_header X-Cache-Vinahost "HIT from Backend"; proxy_pass http://123.30.137.66:8081; include proxy.inc; } location @backend { internal; proxy_pass http://123.30.137.66:8081; include proxy.inc; } location ~ .*\.(php|jsp|cgi|pl|py)?$ { #proxy_cache wwwcache; #proxy_cache_valid 200 15m; proxy_pass http://123.30.137.66:8081; include proxy.inc; } location ~ /\.ht { deny all; } } == The test case: if the Apache httpd service stops: #service httpd stop then open this link in a browser: http://www.xaluan.com/modules.php?name=News&file=article&sid=123456 You will see the 502 error with the same URL in the browser address bar.
== Custom error page. I need a config that, when Apache fails, shows a custom message telling the user to wait one minute for the service to come back, then refreshes the current page with the same URL (the refresh itself I can do easily with JavaScript). Nginx must not change the URL, so the JavaScript can work.
I found an answer that works for me. In the vhost config file, I put the following right at the end of the server block, before the closing brace: error_page 502 /502.html; location = /502.html { root /home/xaluano/public_html; } Of course I also need to create a 502.html file at my domain root, with the meta refresh tag and a JavaScript auto-refresh. Note that the script needs an element with the id CDTimer to write the countdown into. The content of the HTML page is: <head> <meta http-equiv="refresh" content="40" /> </head> <body> <p>Service is restarting, please wait <span id="CDTimer">40</span> seconds...</p> <script language="JavaScript" type="text/javascript"> /*<![CDATA[*/ var TimerVal = 40; var TimerSPan = document.getElementById("CDTimer"); function CountDown(){ setTimeout( "CountDown()", 1000 ); TimerSPan.innerHTML=TimerVal; TimerVal=TimerVal-1; if (TimerVal<0) { TimerVal=0; location.reload(true); // window.location.href = "http://www.xaluan.com"; } //improvement by vivalibre, tq } CountDown(); /*]]>*/ </script> </body>
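One caveat worth noting: error_page only applies to responses nginx generates itself (such as the 502 produced when the upstream connection is refused). If the backend were up but returning its own error codes, you would additionally need proxy_intercept_errors, for example:

location / {
    proxy_pass http://123.30.137.66:8081;
    proxy_intercept_errors on; # let error_page also handle error codes returned by the upstream
    include proxy.inc;
}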
NGINX
10,895,071
30
I have nginx running on my server, listening on ports 80 and 443. I know nginx has a number of ways of port forwarding that allow me to forward requests like http://myserver:80/subdir1 to some address like http://myserver:8888. My question: is it possible to configure nginx so that it forwards non-HTTP requests (plain TCP connections) to some other port? It's very easy to test whether it's an HTTP request, because the first bytes will be either "GET" or "POST". Here's the example. The client connects to nginx. The client sends: a. an HTTP GET request, "GET / HTTP 1.1": apply some rule for HTTP; b. any bytes that can't be recognized as an HTTP header: forward them to some other port, say 888, 999, etc. Is it technically possible? Or would you suggest a way to do this?
It is possible since nginx 1.9.0: http://nginx.org/en/docs/stream/ngx_stream_core_module.html Something along these lines (this goes at the top level of nginx.conf, outside the http block): stream { upstream backend { server backend1.example.com:12345; } server { listen 12345; proxy_pass backend; } }
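Note that the stream module is not compiled into every build; a quick check (output varies by package):

nginx -V 2>&1 | grep -o with-stream
# no output means you need a package or build configured --with-stream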
NGINX
5,337,122
30
I'm vaguely aware that on a computer joined to a domain, IE can be asked to send some extra headers that I could use to automatically sign users in to an application. I've got Apache running on a Windows server with mod_php. I'd like to be able to avoid the user having to log in if possible. I've found some links talking about Kerberos and Apache modules. http://www.onlamp.com/pub/a/onlamp/2003/09/11/kerberos.html?page=last https://metacpan.org/pod/Apache2::AuthenNTLM Since I'm running on Windows, it's proven to be non-trivial to get Perl or Apache modules installed. But doesn't PHP already have access to HTTP headers? I found this, but it doesn't do any authentication; it just shows that PHP can read the NTLM headers. http://siphon9.net/loune/2007/10/simple-lightweight-ntlm-in-php/ I'd like my users to be able to just point to the application and have them automatically authenticated. Has anyone had any experience with this or gotten it to work at all? UPDATE Since originally posting this question, we've changed setups to nginx and php-fcgi, still running on Windows. Apache2 and php-cgi on Windows is probably one of the slowest setups you could configure. It's looking like Apache might still be needed (it works with php-fcgi), but I would prefer an nginx solution. I also still don't understand (and would love to be educated about) why HTTP server plugins are necessary and we can't have a PHP, web-server-agnostic solution.
All you need is the mod_auth_sspi Apache module. Sample configuration: AuthType SSPI SSPIAuth On SSPIAuthoritative On SSPIDomain mydomain # Set this if you want to allow access with clients that do not support NTLM, or via proxy from outside. Don't forget to require SSL in this case! SSPIOfferBasic On # Set this if you have only one domain and don't want the MYDOMAIN\ prefix on each user name SSPIOmitDomain On # AD user names are case-insensitive, so use this for normalization if your application's user names are case-sensitive SSPIUsernameCase Lower AuthName "Some text to prompt for domain credentials" Require valid-user And don't forget that you can also use Firefox for transparent SSO in a Windows domain: Simply go to about:config, search for network.automatic-ntlm-auth.trusted-uris, and enter the host name or FQDN of your internal application (like myserver or myserver.corp.domain.com). You can have more than one entry, it's a comma-separated list.
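On the PHP side, once Apache has authenticated the request, the user name is exposed through the standard REMOTE_USER variable; a minimal sketch (with SSPIOmitDomain On it is the bare account name):

<?php
// REMOTE_USER is set by the web server after a successful SSPI/NTLM handshake
$user = isset($_SERVER['REMOTE_USER']) ? $_SERVER['REMOTE_USER'] : null;
if ($user === null) {
    header('HTTP/1.1 401 Unauthorized');
    exit('Not authenticated');
}
echo 'Logged in as ' . htmlspecialchars($user);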
NGINX
1,003,751
30
I need your help to understand my problem. I updated my Macintosh to Catalina last week, then I updated Docker for Mac. Since those updates, I have ownership issues on shared volumes. I can reproduce it with a small example. I just create a small docker-compose file which builds an nginx container. I have a folder src with a PHP file like this: "src/index.php". I build the container and start it. Then I go to /app/www/mysrc (the shared volume) and type "ls -la" to check whether index.php is OK, and I get: ls: cannot open directory '.': Operation not permitted Here is a simple docker-compose file: docker-compose.yml : version: "3" services: test-nginx: restart: always image: 'nginx:1.17.3' ports: - "8082:80" volumes: - ./src:/app/www/mysrc When I build and start the container, I get: $ docker-compose exec test-nginx sh # cd /app/www # ls -la total 8 drwxr-xr-x 3 root root 4096 Oct 21 07:58 . drwxr-xr-x 3 root root 4096 Oct 21 07:58 .. drwxr-xr-x 3 root root 96 Oct 21 07:51 mysrc # cd mysrc # ls -la ls: cannot open directory '.': Operation not permitted # whoami root So, my nginx server is down because nginx can't access the source files. Thanks for your help.
If it was working prior to the update to Catalina, the issue is due to the new permissions requested by Catalina. macOS now requests permissions for everything, even for accessing a directory. So you probably got a notification asking you to grant Docker for Mac permission to access the shared folder, didn't grant it, and are now facing the outcome of that action. To grant the privileges now, go to System Preferences > Security & Privacy > Files and Folders, and add Docker for Mac and your shared directory.
NGINX
58,482,352
29
I'm trying to add SSL certs (generated with Let's Encrypt) to my nginx. The nginx container is built from a docker-compose file where I create a volume from my host to the container, so the containers can access the certs and private key. volumes: - /etc/nginx/certs/:/etc/nginx/certs/ When the nginx container starts, it fails with the following error: [emerg] 1#1: BIO_new_file("/etc/nginx/certs/fullchain.pem") failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/etc/nginx/certs/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) My nginx config file looks like this: server { listen 80; server_name server_blah www.server_blah; return 301 https://$server_name$request_uri; } server { listen 443 ssl; server_name server_blah; ssl_certificate /etc/nginx/certs/fullchain.pem; ssl_certificate_key /etc/nginx/certs/privkey.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; } What am I missing/doing incorrectly?
Finally cracked this and was able to successfully repeat the process on my dev and production sites to get SSL certs working! Sorry for the length of the post! In my setup I have Docker and docker-compose set up on an Ubuntu 16 machine. For anyone encountering this problem, I'll detail the steps I took. Go to the directory where your code lives cd /opt/example_dir/ Make a directory for letsencrypt and its site. sudo mkdir -p /opt/example_dir/letsencrypt/letsencrypt-site Create a barebones docker-compose.yml file in the letsencrypt directory. sudo nano /opt/example_dir/letsencrypt/docker-compose.yml Add the following to it: version: '2' services: nginx: image: nginx:latest ports: - "80:80" volumes: - ./nginx.conf:/etc/nginx/conf.d/default.conf - ./letsencrypt-site:/usr/share/nginx/html networks: - docker-network networks: docker-network: driver: bridge This will pull down the latest nginx version, expose port 80, mount a config file (that I'll create later), and map the site directory so that we can have a simple test index.html for when we start the simple nginx container. Create an nginx.conf file in /opt/example_dir/letsencrypt sudo nano /opt/example_dir/letsencrypt/nginx.conf Put the following into it server { listen 80; listen [::]:80; server_name example_server.com; location ~ /.well-known/acme-challenge { allow all; root /usr/share/nginx/html; } root /usr/share/nginx/html; index index.html; } This listens for requests on port 80 for the server with name example_server.com, gives the Certbot agent access to /.well-known/acme-challenge, and sets the default root and index file. Next create an index.html file within /opt/example_dir/letsencrypt/letsencrypt-site sudo nano /opt/example_dir/letsencrypt/letsencrypt-site/index.html Add the following to it <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>LetsEncrypt Setup</title> </head> <body> <p>Test file for our http nginx server</p> </body> </html> All parts are in place for the basic nginx container! Now we start it up. cd /opt/example_dir/letsencrypt sudo docker-compose up -d The nginx container is up and running now; visit the URL you've defined and you should get the test index.html page back. At this point we're ready to run the certbot command to generate some certs. Run the following, replacing --email with your email: sudo docker run -it --rm \ -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \ -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \ -v /opt/example_dir/letsencrypt/letsencrypt-site:/data/letsencrypt \ -v "/docker-volumes/var/log/letsencrypt:/var/log/letsencrypt" \ certbot/certbot \ certonly --webroot \ --email [email protected] --agree-tos --no-eff-email \ --webroot-path=/data/letsencrypt \ -d example.com This runs Docker in interactive mode so you can see the output; when it has finished generating certs, the container removes itself. It mounts four volumes: the letsencrypt folder where the certs are stored, a lib folder, our site folder, and a logging path. It agrees to the ToS, specifies the webroot path, and specifies the server name you want to generate certs for. If that command ran okay, then we have generated certs for this web server. We can now use them in our production site and configure nginx to use SSL with these certs! Shut down the nginx container cd /opt/example_dir/letsencrypt/ sudo docker-compose down Set up the production nginx container. The directory structure should look like this now, where you have your code / web app project and then the letsencrypt folder that we created above.
/opt/example_dir / -> project_folder / -> letsencrypt Create a folder called dh-param sudo mkdir -p /opt/example_dir/project_folder/dh-param Generate a DH key sudo openssl dhparam -out /opt/example_dir/project_folder/dh-param/dhparam-2048.pem 2048 Update the docker-compose.yml and nginx.conf files within /opt/example_dir/project_folder The project_folder is where my source code lives, so I create a production config file here for nginx and update the docker-compose.yml to mount my nginx config and Diffie-Hellman parameters, as well as the certs themselves that we created earlier. nginx service in the docker-compose nginx: image: nginx:1.11.3 restart: always ports: - "80:80" - "443:443" - "8000:8000" volumes: - ./nginx.conf:/etc/nginx/conf.d/default.conf - ./dh-param/dhparam-2048.pem:/etc/ssl/certs/dhparam-2048.pem - /docker-volumes/etc/letsencrypt/live/exampleserver.com/fullchain.pem:/etc/letsencrypt/live/exampleserver.com/fullchain.pem - /docker-volumes/etc/letsencrypt/live/exampleserver.com/privkey.pem:/etc/letsencrypt/live/exampleserver.com/privkey.pem networks: - docker-network volumes_from: - flask depends_on: - flask - falcon links: - datastore nginx.conf within project_folder error_log /var/log/nginx/error.log warn; server { listen 80; listen [::]:80; server_name exampleserver.com; location / { rewrite ^ https://$host$request_uri? permanent; } #for certbot challenges (renewal process) location ~ /.well-known/acme-challenge { allow all; root /data/letsencrypt; } } #https://exampleserver.com server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name exampleserver.com; server_tokens off; ssl_certificate /etc/letsencrypt/live/exampleserver.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/exampleserver.com/privkey.pem; ssl_buffer_size 8k; ssl_dhparam /etc/ssl/certs/dhparam-2048.pem; ssl_protocols TLSv1.2 TLSv1.1 TLSv1; ssl_prefer_server_ciphers on; ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5; ssl_ecdh_curve secp384r1; ssl_session_tickets off; # OCSP stapling ssl_stapling on; ssl_stapling_verify on; resolver 8.8.8.8; # Define the specified charset to the "Content-Type" response header field charset utf-8; } At this point everything is set up! (finally) Spin up the docker container. cd /opt/example_dir/project_folder sudo docker-compose up -d # Check the docker log with: sudo docker logs -f -t I know it's a lot of steps, but this is what I have done; it's worked for me and I hope it helps someone else.
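For renewal later, a hedged sketch reusing the same volumes (the challenge path must still be reachable over port 80, and nginx needs a reload afterwards to pick up the new certs):

sudo docker run -it --rm \
  -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
  -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
  -v /opt/example_dir/letsencrypt/letsencrypt-site:/data/letsencrypt \
  certbot/certbot \
  renew --webroot --webroot-path=/data/letsencrypt

sudo docker-compose kill -s SIGHUP nginx  # or: sudo docker-compose restart nginx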
NGINX
51,399,883
29
I've set up two web applications on my DigitalOcean droplet, and I'm trying to run both applications on different domains, with SSL encryption. I can confirm that everything works if I only use one of the domains; the error occurs when I try to run both at the same time. nginx -t duplicate listen options for [::]:443 in /etc/nginx/sites-enabled/hello.com:26 /etc/nginx/sites-available/hello.com server { server_name hello.com www.hello.com; location / { proxy_pass http://localhost:4000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } listen [::]:443 ssl ipv6only=on default_server; # managed by Certbot listen 443 ssl default_server; # managed by Certbot ssl_certificate /etc/letsencrypt/live/hello.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/hello.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = www.hello.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = hello.com) { return 301 https://$host$request_uri; } # managed by Certbot listen 80 default_server; listen [::]:80 default_server; server_name hello.com www.hello.com; return 404; # managed by Certbot } /etc/nginx/sites-available/example.com server { server_name example.com www.example.com; location / { proxy_pass http://localhost:3000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } listen [::]:443 ssl ipv6only=on; # managed by Certbot listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot } server { if ($host = www.example.com) { return 301 https://$host$request_uri; } # managed by Certbot if ($host = example.com) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; listen [::]:80; server_name example.com www.example.com; return 404; # managed by Certbot } What can I do to avoid this error? Further improvements to the nginx files are very much appreciated. I've used the following guides: How To Set Up Nginx Server Blocks: https://www.digitalocean.com/community/tutorials/how-to-set-up-nginx-server-blocks-virtual-hosts-on-ubuntu-14-04-lts How To Set Up a Node.js Application for Production on Ubuntu 16.04: https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04 How To Secure Nginx with Let's Encrypt on Ubuntu 16.04: https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-16-04
The problem is ipv6only=on, which can only be specified once according to the documentation. The default value is on anyway, so the option can be safely removed.
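Concretely, the listen lines in the second vhost (example.com) would become, everything else unchanged:

listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot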
NGINX
49,938,342
29
Recently, I upgraded one of my Django sites from HTTP to HTTPS. However, since then I continuously receive Invalid HTTP_HOST header error emails, whereas before I never received such emails. Here are some log messages: [Django] ERROR (EXTERNAL IP): Invalid HTTP_HOST header: '123.56.221.107'. You may need to add '123.56.221.107' to ALLOWED_HOSTS. [Django] ERROR (EXTERNAL IP): Invalid HTTP_HOST header: 'www.sgsrec.com'. You may need to add 'www.sgsrec.com' to ALLOWED_HOSTS. [Django] ERROR (EXTERNAL IP): Invalid HTTP_HOST header: 'sgsrec.com'. You may need to add 'sgsrec.com' to ALLOWED_HOSTS. Report at /apple-app-site-association Invalid HTTP_HOST header: 'sgsrec.com'. You may need to add 'sgsrec.com' to ALLOWED_HOSTS. Invalid HTTP_HOST header: 'www.pythonzh.cn'. You may need to add 'www.pythonzh.cn' to ALLOWED_HOSTS. Report at / Invalid HTTP_HOST header: 'www.pythonzh.cn'. You may need to add 'www.pythonzh.cn' to ALLOWED_HOSTS. Request Method: GET Request URL: http://www.pythonzh.cn/ Django Version: 1.10.6 [Django] ERROR (EXTERNAL IP): Invalid HTTP_HOST header: 'pythonzh.cn'. You may need to add 'pythonzh.cn' to ALLOWED_HOSTS. The strange thing is that I only changed the nginx configuration of my blog site www.zmrenwu.com, but it seems all of my sites hosted on 123.56.221.107 are affected. Of course, I set ALLOWED_HOSTS correctly: ALLOWED_HOSTS = ['.zmrenwu.com'] ALLOWED_HOSTS = ['.sgsrec.com'] ALLOWED_HOSTS = ['.pythonzh.cn'] Nginx configuration of my blog site www.zmrenwu.com: server { charset utf-8; server_name zmrenwu.com www.zmrenwu.com; listen 80; return 301 https://www.zmrenwu.com$request_uri; } server { charset utf-8; server_name zmrenwu.com; listen 443; ssl on; ssl_certificate /etc/ssl/1_www.zmrenwu.com_bundle.crt; ssl_certificate_key /etc/ssl/2_www.zmrenwu.com.key; return 301 https://www.zmrenwu.com$request_uri; } server { charset utf-8; listen 443; server_name www.zmrenwu.com; ssl on; ssl_certificate /etc/ssl/1_www.zmrenwu.com_bundle.crt; ssl_certificate_key /etc/ssl/2_www.zmrenwu.com.key; location /static { alias /home/yangxg/sites/zmrenwu.com/blogproject/static; } location /media { alias /home/yangxg/sites/zmrenwu.com/blogproject/media; } location / { proxy_set_header Host $host; proxy_pass http://unix:/tmp/zmrenwu.com.socket; Why did that happen? And how can I solve this issue?
Disabling DisallowedHost host warnings as suggested in the other answer is not the correct solution in my opinion. There is a reason why Django gives you those warnings - and it is better for you to block those requests before they reach Django. You created a new server block in your nginx configuration. Because it is the only HTTPS server you have defined, it becomes the default server for that port. From the documentation: The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair. If none of the directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair. This explains why you are suddenly seeing all these invalid host errors. Any bot that now tries to connect to your server over HTTPS will end up using this default server. Because many bots will be using fake host names or just your server IP (neither of which are in ALLOWED_HOSTS) this causes the warnings in Django. So what is the solution? You can create a separate server block that handles all such invalid requests: server { listen 443 ssl default_server; server_name _; return 444; } 444 is a special response status used by nginx to disconnect invalid requests. Once you add this block, it will be used for all requests that don't match the host headers that you want to respond to, and anything trying to connect with an invalid host will not be able to connect. Django meanwhile will stop seeing requests for invalid hosts.
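One practical detail: a server block listening with ssl still needs a certificate to complete the handshake before it can return 444, so point the catch-all at any existing pair (or, on nginx 1.19.4 and later, use ssl_reject_handshake on; instead of the certificate lines). A sketch reusing the certificate paths from the question:

server {
    listen 443 ssl default_server;
    server_name _;
    # any valid cert/key pair will do; unknown hosts then get disconnected
    ssl_certificate /etc/ssl/1_www.zmrenwu.com_bundle.crt;
    ssl_certificate_key /etc/ssl/2_www.zmrenwu.com.key;
    return 444;
}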
NGINX
47,846,521
29
Let's say I have a path like /var/www/mysite/ and that path contains two folders, say /static and /manage. I'd like to configure nginx to have access to: the /static folder at / (e.g. http://example.org/) — this folder has some .html files; the /manage folder at /manage (e.g. http://example.org/manage) — in this case the folder contains the Slim PHP framework code, which means the index.php file is in the public subfolder (e.g. /var/www/mysite/manage/public/index.php). I've tried a lot of combinations, such as server { listen 80; server_name example.org; error_log /usr/local/etc/nginx/logs/mysite/error.log; access_log /usr/local/etc/nginx/logs/mysite/access.log; root /var/www/mysite; location /manage { root $uri/manage/public; try_files $uri /index.php$is_args$args; } location / { root $uri/static/; index index.html; } location ~ \.php { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_index index.php; fastcgi_pass 127.0.0.1:9000; } } The / location works correctly, but /manage doesn't. Am I doing something wrong? Does anybody know what I should change? Matthew.
To access a path like /var/www/mysite/manage/public with a URI like /manage, you will need to use alias rather than root. See this document for details. I am assuming that you need to run PHP from both roots, in which case you will need two location ~ \.php blocks, see example below. If you have no PHP within /var/www/mysite/static, you can delete the unused location block. For example: server { listen 80; server_name example.org; error_log /usr/local/etc/nginx/logs/mysite/error.log; access_log /usr/local/etc/nginx/logs/mysite/access.log; root /var/www/mysite/static; index index.html; location / { } location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } location ^~ /manage { alias /var/www/mysite/manage/public; index index.php; if (!-e $request_filename) { rewrite ^ /manage/index.php last; } location ~ \.php$ { if (!-f $request_filename) { return 404; } fastcgi_pass 127.0.0.1:9000; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } } } The ^~ modifier causes the prefix location to take precedence over regular expression locations at the same level. See this document for details. The alias and try_files directives are not together due to this long standing bug. Be aware of this caution in the use of the if directive.
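After reloading, a quick way to verify both roots resolve as intended (hypothetical host, assuming DNS or a hosts entry for example.org):

nginx -t && nginx -s reload
curl -I http://example.org/        # should be served from /var/www/mysite/static
curl -I http://example.org/manage  # should be handled by /var/www/mysite/manage/public/index.php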
NGINX
42,443,468
29
Should I leave the /.well-known/acme-challenge location always exposed on the server? Here is my config for HTTP: server { listen 80; location '/.well-known/acme-challenge' { root /var/www/demo; } location / { if ($scheme = http) { return 301 https://$server_name$request_uri; } } } This basically redirects all requests to HTTPS, except for the acme-challenge (for auto-renewal). My question: is it all right to keep location '/.well-known/acme-challenge' always exposed on port 80? Or is it better to comment/uncomment it manually when I need to reissue the certificate? Are there any security issues with that? Any advice or links to read about this location are appreciated. Thanks!
The acme-challenge location is only needed while verifying that the domain points to this IP address.
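In other words, nothing sensitive lives behind that path — it only ever serves the short-lived challenge files. A common pattern is therefore to leave it permanently in the HTTP vhost so that certbot renew works unattended; a sketch along the lines of the question's config:

location ^~ /.well-known/acme-challenge/ {
    root /var/www/demo;
    default_type text/plain; # challenge files have no extension
}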
NGINX
41,803,140
29