question     string, lengths 11 – 28.2k
answer       string, lengths 26 – 27.7k
tag          string, 130 classes
question_id  int64, 935 – 78.4M
score        int64, 10 – 5.49k
I am switching to PostgreSQL from SQLite for a typical Rails application. The problem is that running specs became slow with PG. On SQLite it took ~34 seconds, on PG it's ~76 seconds, which is more than 2x slower. So now I want to apply some techniques to bring the performance of the specs on par with SQLite with no code modifications (ideally just by setting the connection options, which is probably not possible). A couple of obvious things off the top of my head are:

RAM disk (a good setup with RSpec on OS X would be good to see)
Unlogged tables (can it be applied to the whole database so I don't have to change all the scripts?)

As you may have understood, I don't care about reliability and the rest (the DB is just a throwaway thingy here). I need to get the most out of PG and make it as fast as it can possibly be. The best answer would ideally describe the tricks for doing just that, the setup and the drawbacks of those tricks.

UPDATE: fsync = off + full_page_writes = off only decreased time to ~65 seconds (~-16 secs). Good start, but far from the target of 34.

UPDATE 2: I tried to use a RAM disk but the performance gain was within an error margin. So it doesn't seem to be worth it.

UPDATE 3: I found the biggest bottleneck and now my specs run as fast as the SQLite ones. The issue was the database cleanup that did the truncation. Apparently SQLite is way too fast there. To "fix" it I open a transaction before each test and roll it back at the end. Some numbers for ~700 tests.

Truncation: SQLite - 34s, PG - 76s.
Transaction: SQLite - 17s, PG - 18s.

2x speed increase for SQLite. 4x speed increase for PG.
First, always use the latest version of PostgreSQL. Performance improvements are always coming, so you're probably wasting your time if you're tuning an old version. For example, PostgreSQL 9.2 significantly improves the speed of TRUNCATE and of course adds index-only scans. Even minor releases should always be followed; see the version policy.

Don'ts

Do NOT put a tablespace on a RAMdisk or other non-durable storage. If you lose a tablespace the whole database may be damaged and hard to use without significant work. There's very little advantage to this compared to just using UNLOGGED tables and having lots of RAM for cache anyway. If you truly want a ramdisk-based system, initdb a whole new cluster on the ramdisk, so you have a completely disposable PostgreSQL instance.

PostgreSQL server configuration

When testing, you can configure your server for non-durable but faster operation. This is one of the only acceptable uses for the fsync=off setting in PostgreSQL. This setting pretty much tells PostgreSQL not to bother with ordered writes or any of that other nasty data-integrity-protection and crash-safety stuff, giving it permission to totally trash your data if you lose power or have an OS crash. Needless to say, you should never enable fsync=off in production unless you're using Pg as a temporary database for data you can re-generate from elsewhere. If and only if you're going to turn fsync off, you can also turn full_page_writes off, as it no longer does any good then. Beware that fsync=off and full_page_writes apply at the cluster level, so they affect all databases in your PostgreSQL instance.

For production use you can possibly use synchronous_commit=off and set a commit_delay, as you'll get many of the same benefits as fsync=off without the giant data corruption risk. You do have a small window of loss of recent data if you enable async commit - but that's it.

If you have the option of slightly altering the DDL, you can also use UNLOGGED tables in Pg 9.1+ to completely avoid WAL logging and gain a real speed boost at the cost of the tables getting erased if the server crashes. There is no configuration option to make all tables unlogged; it must be set during CREATE TABLE. In addition to being good for testing, this is handy if you have tables full of generated or unimportant data in a database that otherwise contains stuff you need to be safe.

Check your logs and see if you're getting warnings about too many checkpoints. If you are, you should increase your checkpoint_segments. You may also want to tune your checkpoint_completion_target to smooth writes out.

Tune shared_buffers to fit your workload. This is OS-dependent, depends on what else is going on with your machine, and requires some trial and error. The defaults are extremely conservative. You may need to increase the OS's maximum shared memory limit if you increase shared_buffers on PostgreSQL 9.2 and below; 9.3 and above changed how they use shared memory to avoid that.

If you're using just a couple of connections that do lots of work, increase work_mem to give them more RAM to play with for sorts etc. Beware that too high a work_mem setting can cause out-of-memory problems because it's per-sort not per-connection, so one query can have many nested sorts. You only really have to increase work_mem if you can see sorts spilling to disk in EXPLAIN or logged with the log_temp_files setting (recommended), but a higher value may also let Pg pick smarter plans.
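To make the non-durable settings above concrete, here is a minimal sketch, assuming PostgreSQL 9.4+ where ALTER SYSTEM is available (on older versions the same parameters go into postgresql.conf); the table name is purely illustrative:

ALTER SYSTEM SET fsync = off;              -- no crash safety: throwaway/test clusters only
ALTER SYSTEM SET full_page_writes = off;   -- only makes sense together with fsync = off
ALTER SYSTEM SET synchronous_commit = off; -- the milder option that is acceptable in production
SELECT pg_reload_conf();                   -- these particular settings can be picked up with a reload

-- Opting individual tables out of WAL (9.1+):
CREATE UNLOGGED TABLE scratch_results (id bigserial PRIMARY KEY, payload text);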
As said by another poster here it's wise to put the xlog and the main tables/indexes on separate HDDs if possible. Separate partitions is pretty pointless, you really want separate drives. This separation has much less benefit if you're running with fsync=off and almost none if you're using UNLOGGED tables. Finally, tune your queries. Make sure that your random_page_cost and seq_page_cost reflect your system's performance, ensure your effective_cache_size is correct, etc. Use EXPLAIN (BUFFERS, ANALYZE) to examine individual query plans, and turn the auto_explain module on to report all slow queries. You can often improve query performance dramatically just by creating an appropriate index or tweaking the cost parameters. AFAIK there's no way to set an entire database or cluster as UNLOGGED. It'd be interesting to be able to do so. Consider asking on the PostgreSQL mailing list. Host OS tuning There's some tuning you can do at the operating system level, too. The main thing you might want to do is convince the operating system not to flush writes to disk aggressively, since you really don't care when/if they make it to disk. In Linux you can control this with the virtual memory subsystem's dirty_* settings, like dirty_writeback_centisecs. The only issue with tuning writeback settings to be too slack is that a flush by some other program may cause all PostgreSQL's accumulated buffers to be flushed too, causing big stalls while everything blocks on writes. You may be able to alleviate this by running PostgreSQL on a different file system, but some flushes may be device-level or whole-host-level not filesystem-level, so you can't rely on that. This tuning really requires playing around with the settings to see what works best for your workload. On newer kernels, you may wish to ensure that vm.zone_reclaim_mode is set to zero, as it can cause severe performance issues with NUMA systems (most systems these days) due to interactions with how PostgreSQL manages shared_buffers. Query and workload tuning These are things that DO require code changes; they may not suit you. Some are things you might be able to apply. If you're not batching work into larger transactions, start. Lots of small transactions are expensive, so you should batch stuff whenever it's possible and practical to do so. If you're using async commit this is less important, but still highly recommended. Whenever possible use temporary tables. They don't generate WAL traffic, so they're lots faster for inserts and updates. Sometimes it's worth slurping a bunch of data into a temp table, manipulating it however you need to, then doing an INSERT INTO ... SELECT ... to copy it to the final table. Note that temporary tables are per-session; if your session ends or you lose your connection then the temp table goes away, and no other connection can see the contents of a session's temp table(s). If you're using PostgreSQL 9.1 or newer you can use UNLOGGED tables for data you can afford to lose, like session state. These are visible across different sessions and preserved between connections. They get truncated if the server shuts down uncleanly so they can't be used for anything you can't re-create, but they're great for caches, materialized views, state tables, etc. In general, don't DELETE FROM blah;. Use TRUNCATE TABLE blah; instead; it's a lot quicker when you're dumping all rows in a table. Truncate many tables in one TRUNCATE call if you can. 
There's a caveat if you're doing lots of TRUNCATES of small tables over and over again, though; see: Postgresql Truncation speed If you don't have indexes on foreign keys, DELETEs involving the primary keys referenced by those foreign keys will be horribly slow. Make sure to create such indexes if you ever expect to DELETE from the referenced table(s). Indexes are not required for TRUNCATE. Don't create indexes you don't need. Each index has a maintenance cost. Try to use a minimal set of indexes and let bitmap index scans combine them rather than maintaining too many huge, expensive multi-column indexes. Where indexes are required, try to populate the table first, then create indexes at the end. Hardware Having enough RAM to hold the entire database is a huge win if you can manage it. If you don't have enough RAM, the faster storage you can get the better. Even a cheap SSD makes a massive difference over spinning rust. Don't trust cheap SSDs for production though, they're often not crashsafe and might eat your data. Learning Greg Smith's book, PostgreSQL 9.0 High Performance remains relevant despite referring to a somewhat older version. It should be a useful reference. Join the PostgreSQL general mailing list and follow it. Reading: Tuning your PostgreSQL server - PostgreSQL wiki Number of database connections - PostgreSQL wiki
PostgreSQL
9,407,442
254
Somehow I've managed to completely bugger the install of postgresql on Ubuntu karmic. I want to start over from scratch, but when I "purge" the package with apt-get it still leaves traces behind such that the reinstall configuration doesn't run properly. After I've done: apt-get purge postgresql apt-get install postgresql It said Setting up postgresql-8.4 (8.4.3-0ubuntu9.10.1) ... Configuring already existing cluster (configuration: /etc/postgresql/8.4/main, data: /var/lib/postgresql/8.4/main, owner: 108:112) Error: move_conffile: required configuration file /var/lib/postgresql/8.4/main/postgresql.conf does not exist Error: could not create default cluster. Please create it manually with pg_createcluster 8.4 main --start or a similar command (see 'man pg_createcluster'). update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode. Setting up postgresql (8.4.3-0ubuntu9.10.1) ... I have a "/etc/postgresql" with nothing in it and "/etc/postgresql-common/" has a 'pg_upgradecluser.d' directory and root.crt and user_clusters files. The /etc/passwd has a postgres user; the purge script doesn't appear to touch it. There's been a bunch of symptoms which I work through only to expose the next. Right this second, when I run that command "pg_createcluster..." it complains that '/var/lib/postgresql/8.4/main/postgresql.conf does not exist', so I'll go find one of those but I'm sure that won't be the end of it. Is there not some easy one-liner (or two) which will burn it completely and let me start over?
Option A If your install isn't already damaged, you can drop unwanted PostgreSQL servers ("clusters") using pg_dropcluster. Use that in preference to a full purge and reinstall if you just want to restart with a fresh PostgreSQL instance. $ pg_lsclusters Ver Cluster Port Status Owner Data directory Log file 11 main 5432 online postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log $ sudo systemctl stop postgresql@11-main $ sudo pg_dropcluster --stop 11 main $ sudo pg_createcluster --start 11 main Option B If you really need to do a full purge and reinstall, first make sure PostgreSQL isn't running. ps -C postgres should show no results. Now run: apt-get --purge remove postgresql\* to remove everything PostgreSQL from your system. Just purging the postgres package isn't enough since it's just an empty meta-package. Once all PostgreSQL packages have been removed, run: rm -r /etc/postgresql/ rm -r /etc/postgresql-common/ rm -r /var/lib/postgresql/ rm -r /var/log/postgresql/ userdel -r postgres groupdel postgres You should now be able to: apt-get install postgresql or for a complete install: apt-get install postgresql-8.4 postgresql-contrib-8.4 postgresql-doc-8.4
PostgreSQL
2,748,607
254
Looking through the documentation for the Postgres 9.4 datatype JSONB, it is not immediately obvious to me how to do updates on JSONB columns. Documentation for JSONB types and functions: http://www.postgresql.org/docs/9.4/static/functions-json.html http://www.postgresql.org/docs/9.4/static/datatype-json.html As an examples, I have this basic table structure: CREATE TABLE test(id serial, data jsonb); Inserting is easy, as in: INSERT INTO test(data) values ('{"name": "my-name", "tags": ["tag1", "tag2"]}'); Now, how would I update the 'data' column? This is invalid syntax: UPDATE test SET data->'name' = 'my-other-name' WHERE id = 1; Is this documented somewhere obvious that I missed?
If you're able to upgrade to Postgresql 9.5, the jsonb_set command is available, as others have mentioned. In each of the following SQL statements, I've omitted the where clause for brevity; obviously, you'd want to add that back.

Update name:
UPDATE test SET data = jsonb_set(data, '{name}', '"my-other-name"');

Replace the tags (as opposed to adding or removing tags):
UPDATE test SET data = jsonb_set(data, '{tags}', '["tag3", "tag4"]');

Replace the second tag (0-indexed):
UPDATE test SET data = jsonb_set(data, '{tags,1}', '"tag5"');

Append a tag (this will work as long as there are fewer than 999 tags; changing argument 999 to 1000 or above generates an error. This no longer appears to be the case in Postgres 9.5.3; a much larger index can be used):
UPDATE test SET data = jsonb_set(data, '{tags,999999999}', '"tag6"', true);

Remove the last tag:
UPDATE test SET data = data #- '{tags,-1}';

Complex update (delete the last tag, insert a new tag, and change the name):
UPDATE test SET data = jsonb_set(
    jsonb_set(data #- '{tags,-1}', '{tags,999999999}', '"tag3"', true),
    '{name}', '"my-other-name"');

It's important to note that in each of these examples, you're not actually updating a single field of the JSON data. Instead, you're creating a temporary, modified version of the data, and assigning that modified version back to the column. In practice, the result should be the same, but keeping this in mind should make complex updates, like the last example, more understandable.

In the complex example, there are three transformations and three temporary versions: First, the last tag is removed. Then, that version is transformed by adding a new tag. Next, the second version is transformed by changing the name field. The value in the data column is replaced with the final version.
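As a hedged aside (not from the original answer): on 9.5+ the jsonb concatenation operator || can merge a whole object of new top-level values in one statement, which is handy when several keys change at once; the "updated" key here is just an illustrative addition:

UPDATE test
SET    data = data || '{"name": "my-other-name", "updated": true}'::jsonb
WHERE  id = 1;

Note that || only replaces top-level keys (and whole arrays); for nested paths you still need jsonb_set.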
PostgreSQL
26,703,476
253
So I have a table in my Postgresql:

TAG_TABLE
==========================
id     tag_name
--------------------------
1      aaa
2      bbb
3      ccc

To simplify my problem, what I want to do is SELECT 'id' from TAG_TABLE when a string "aaaaaaaa" contains the 'tag_name'. So ideally, it should only return "1", which is the ID for tag name 'aaa'.

This is what I am doing so far:

SELECT id FROM TAG_TABLE WHERE 'aaaaaaaaaaa' LIKE '%tag_name%'

But obviously, this does not work, since Postgres thinks that '%tag_name%' means a pattern containing the substring 'tag_name' instead of the actual data value under that column. How do I pass the tag_name to the pattern?
You should use tag_name outside of quotes; then it's interpreted as a field of the record. Concatenate using '||' with the literal percent signs:

SELECT id FROM TAG_TABLE WHERE 'aaaaaaaa' LIKE '%' || tag_name || '%';

And remember that LIKE is case-sensitive. If you need a case-insensitive comparison, lower both sides (or use ILIKE instead of LIKE):

SELECT id FROM TAG_TABLE WHERE LOWER('aaaaaaaa') LIKE '%' || LOWER(tag_name) || '%';
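A hedged alternative sketch (not part of the original answer): the same containment test can be written with strpos(), which avoids any LIKE wildcard characters hiding in the data; the lower-case table name is just for illustration:

SELECT id
FROM   tag_table
WHERE  strpos('aaaaaaaa', tag_name) > 0;

Like LIKE, strpos() is case-sensitive, so wrap both arguments in LOWER() if needed.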
PostgreSQL
23,320,945
253
I just installed PostgreSQL with Homebrew and when I go on to type the command psql I get the following error: dyld: Library not loaded: /usr/local/opt/readline/lib/libreadline.6.2.dylib Referenced from: /usr/local/bin/psql Reason: image not found [1] 69711 trace trap psql What's wrong?
I was getting the exact same error, but the previous answers didn't work for me. I had to reinstall PostgreSQL. brew reinstall postgresql
PostgreSQL
21,488,778
253
Is an email address a bad candidate for a primary key when compared to auto-incrementing numbers? Our web application needs the email address to be unique in the system. So, I thought of using the email address as the primary key. However, my colleague suggests that string comparison will be slower than integer comparison. Is that a valid reason not to use email as a primary key? We are using PostgreSQL.
String comparison is slower than int comparison. However, this does not matter if you simply retrieve a user from the database using the e-mail address. It does matter if you have complex queries with multiple joins. If you store information about users in multiple tables, the foreign keys to the users table will be the e-mail address. That means that you store the e-mail address multiple times.
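To illustrate the point about foreign keys, here is a minimal sketch (the table names are hypothetical, not from the question): a surrogate integer key with a UNIQUE constraint on the e-mail keeps the address stored exactly once, while referencing tables carry only a small integer:

CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text NOT NULL UNIQUE
);

CREATE TABLE orders (
    id      serial PRIMARY KEY,
    user_id integer NOT NULL REFERENCES users (id)
);

The UNIQUE constraint still enforces the application requirement that e-mail addresses are unique.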
PostgreSQL
3,804,108
251
When you are upserting a row (PostgreSQL >= 9.5), and you want the possible INSERT to be exactly the same as the possible UPDATE, you can write it like this: INSERT INTO tablename (id, username, password, level, email) VALUES (1, 'John', 'qwerty', 5, '[email protected]') ON CONFLICT (id) DO UPDATE SET id=EXCLUDED.id, username=EXCLUDED.username, password=EXCLUDED.password, level=EXCLUDED.level,email=EXCLUDED.email Is there a shorter way? To just say: use all the EXCLUDE values. In SQLite I used to do : INSERT OR REPLACE INTO tablename (id, user, password, level, email) VALUES (1, 'John', 'qwerty', 5, '[email protected]')
Postgres hasn't implemented an equivalent to INSERT OR REPLACE. From the ON CONFLICT docs (emphasis mine): It can be either DO NOTHING, or a DO UPDATE clause specifying the exact details of the UPDATE action to be performed in case of a conflict. Though it doesn't give you shorthand for replacement, ON CONFLICT DO UPDATE applies more generally, since it lets you set new values based on preexisting data. For example: INSERT INTO users (id, level) VALUES (1, 0) ON CONFLICT (id) DO UPDATE SET level = users.level + 1;
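For completeness, the other form mentioned in the quoted documentation is DO NOTHING, which simply skips conflicting rows instead of replacing them - a minimal sketch reusing the users table from the example above:

INSERT INTO users (id, level)
VALUES (1, 0)
ON CONFLICT (id) DO NOTHING;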
PostgreSQL
36,359,440
247
I have a simple list of ~25 words. I have a varchar field in PostgreSQL, let's say that list is ['foo', 'bar', 'baz']. I want to find any row in my table that has any of those words. This will work, but I'd like something more elegant. select * from table where (lower(value) like '%foo%' or lower(value) like '%bar%' or lower(value) like '%baz%')
PostgreSQL also supports full POSIX regular expressions: select * from table where value ~* 'foo|bar|baz'; The ~* is for a case insensitive match, ~ is case sensitive. Another option is to use ANY: select * from table where value like any (array['%foo%', '%bar%', '%baz%']); select * from table where value ilike any (array['%foo%', '%bar%', '%baz%']); You can use ANY with any operator that yields a boolean. I suspect that the regex options would be quicker but ANY is a useful tool to have in your toolbox.
PostgreSQL
4,928,054
247
Say I have an interval like 4 days 10:00:00 in postgres. How do I convert that to a number of hours (106 in this case?) Is there a function or should I bite the bullet and do something like extract(days, my_interval) * 24 + extract(hours, my_interval)
Probably the easiest way is: SELECT EXTRACT(epoch FROM my_interval)/3600
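Applied to the interval from the question (a quick check, not from the original answer):

SELECT EXTRACT(epoch FROM interval '4 days 10:00:00') / 3600 AS hours;  -- 106

EXTRACT(epoch FROM ...) returns the total number of seconds in the interval, so dividing by 3600 gives hours.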
PostgreSQL
952,493
247
I am trying to create a database from command line. My OS is centos and postgres version is 10.9. sudo -u postgres psql createdb test Password for user test: Why is it prompting me for the password?
Change the user to postgres:

su - postgres

Create a user for Postgres (in the shell and NOT with psql):

$ createuser testuser

Create the database (same):

$ createdb testdb

Access the Postgres shell (enter the password for PostgreSQL if prompted):

psql

Provide the privileges to the new user (these are SQL statements run inside psql, not shell commands):

alter user testuser with encrypted password 'qwerty';
grant all privileges on database testdb to testuser;
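An equivalent all-SQL sketch, assuming you can open a superuser session with sudo -u postgres psql (user, database and password names copied from the answer above, so adjust as needed):

CREATE USER testuser WITH ENCRYPTED PASSWORD 'qwerty';
CREATE DATABASE testdb OWNER testuser;
GRANT ALL PRIVILEGES ON DATABASE testdb TO testuser;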
PostgreSQL
30,641,512
246
The question is simple. How do I add column x to table y, but only when column x doesn't exist? All I found here is a solution for how to check whether a column exists:

SELECT column_name FROM information_schema.columns WHERE table_name='x' and column_name='y';
With Postgres 9.6 this can be done using the option if not exists ALTER TABLE table_name ADD COLUMN IF NOT EXISTS column_name INTEGER;
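For versions before 9.6 there is no IF NOT EXISTS on ADD COLUMN, but a conditional DO block is a common workaround - a hedged sketch using the table/column names from the question (the information_schema check is per-database; add a table_schema filter if you use multiple schemas):

DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM information_schema.columns
        WHERE  table_name = 'y' AND column_name = 'x'
    ) THEN
        ALTER TABLE y ADD COLUMN x integer;
    END IF;
END
$$;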
PostgreSQL
12,597,465
246
I'm working on the design for a RoR project for my company, and our development team has already run into a bit of a debate about the design, specifically the database. We have a model called Message that needs to be persisted. It's a very, very small model with only three db columns other than the id, however there will likely be A LOT of these models when we go to production. We're looking at as much as 1,000,000 insertions per day. The models will only ever be searched by two foreign keys on them which can be indexed. As well, the models never have to be deleted, but we also don't have to keep them once they're about three months old. So, what we're wondering is if implementing this table in Postgres will present a significant performance issue? Does anyone have experience with very large SQL databases to tell us whether or not this will be a problem? And if so, what alternative should we go with?
Rows per table won't be an issue on its own. So roughly speaking, 1 million rows a day for 90 days is 90 million rows. I see no reason Postgres can't deal with that, without knowing all the details of what you are doing.

Depending on your data distribution you can use a mixture of indexes, filtered indexes, and table partitioning of some kind to speed things up once you see what performance issues you may or may not have. Your problem will be the same on any other RDBMS that I know of. If you only need 3 months' worth of data, design a process to prune off the data you don't need any more. That way you will have a consistent volume of data on the table. You're lucky you know how much data will exist; test it for your volume and see what you get. Testing one table with 90 million rows may be as easy as:

select x, 1 as c2, 2 as c3 from generate_series(1,90000000) x;

https://wiki.postgresql.org/wiki/FAQ

Limit                       Value
Maximum Database Size       Unlimited
Maximum Table Size          32 TB
Maximum Row Size            1.6 TB
Maximum Field Size          1 GB
Maximum Rows per Table      Unlimited
Maximum Columns per Table   250 - 1600 depending on column types
Maximum Indexes per Table   Unlimited
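As a hedged illustration of the pruning/partitioning idea (not from the original answer; it assumes PostgreSQL 10+ declarative partitioning, and the table/column names are invented for the example): partitioning the messages by month makes dropping data older than three months a cheap metadata operation instead of a huge DELETE:

CREATE TABLE messages (
    id          bigserial,
    sender_id   integer NOT NULL,
    receiver_id integer NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
) PARTITION BY RANGE (created_at);

CREATE TABLE messages_2024_01 PARTITION OF messages
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

CREATE INDEX ON messages_2024_01 (sender_id);
CREATE INDEX ON messages_2024_01 (receiver_id);

-- Retiring a month of old data:
DROP TABLE messages_2024_01;

On the 9.x versions current when this was written, the same effect is achieved with inheritance-based partitioning or simply a periodic DELETE/TRUNCATE job.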
PostgreSQL
21,866,113
242
I have the following database table on a Postgres server:

id      date        Product  Sales
1245    01/04/2013  Toys     1000
1245    01/04/2013  Toys     2000
1231    01/02/2013  Bicycle  50000
456461  01/01/2014  Bananas  4546

I would like to create a query that gives the SUM of the Sales column and groups the results by month and year as follows:

Apr 2013  3000   Toys
Feb 2013  50000  Bicycle
Jan 2014  4546   Bananas

Is there a simple way to do that?
I can't believe the accepted answer has so many upvotes -- it's a horrible method. Here's the correct way to do it, with date_trunc: SELECT date_trunc('month', txn_date) AS txn_month, sum(amount) as monthly_sum FROM yourtable GROUP BY txn_month It's bad practice but you might be forgiven if you use GROUP BY 1 in a very simple query. You can also use GROUP BY date_trunc('month', txn_date) if you don't want to select the date.
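If you also want the 'Apr 2013' presentation asked for in the question, to_char() can format the truncated date - a small sketch building on the answer above (table/column names as in the answer):

SELECT to_char(date_trunc('month', txn_date), 'Mon YYYY') AS txn_month,
       sum(amount) AS monthly_sum
FROM   yourtable
GROUP  BY date_trunc('month', txn_date)
ORDER  BY date_trunc('month', txn_date);

Grouping and ordering on the date_trunc() expression (rather than the formatted text) keeps the months in chronological order.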
PostgreSQL
17,492,167
242
I'm trying to create a Postgres database for the first time. I assigned basic read-only permissions to the DB role that must access the database from my PHP scripts, and I have a curiosity: If I execute GRANT some_or_all_privileges ON ALL TABLES IN SCHEMA schema TO role; is there any need to also execute this? GRANT USAGE ON SCHEMA schema TO role; From the documentation: USAGE: For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to "look up" objects within the schema. I think that if I can select or manipulate any data contained in the schema, I can access to any objects of the schema itself. Am I wrong? If not, what is GRANT USAGE ON SCHEMA used for? And what does the documentation mean exactly with "assuming that the objects' own privilege requirements are also met"?
GRANTs on different objects are separate. GRANTing on a database doesn't GRANT rights to the schema within. Similarly, GRANTing on a schema doesn't grant rights on the tables within. If you have rights to SELECT from a table, but not the right to see it in the schema that contains it, then you can't access the table.

The rights tests are done in order:

Do you have `USAGE` on the schema?
    No: Reject access.
    Yes: Do you also have the appropriate rights on the table?
        No: Reject access.
        Yes: Check column privileges.

Your confusion may arise from the fact that the public schema has a default GRANT of all rights to the role public, which every user/group is a member of. So everyone already has usage on that schema.

The phrase:

(assuming that the objects' own privilege requirements are also met)

is saying that you must have USAGE on a schema to use objects within it, but having USAGE on a schema is not by itself sufficient to use the objects within the schema; you must also have rights on the objects themselves.

It's like a directory tree. If you create a directory somedir with file somefile within it, then set it so that only your own user can access the directory or the file (mode rwx------ on the dir, mode rw------- on the file), then nobody else can list the directory to see that the file exists. If you were to grant world-read rights on the file (mode rw-r--r--) but not change the directory permissions it'd make no difference. Nobody could see the file in order to read it, because they don't have the rights to list the directory.

If you instead set rwx-r-xr-x on the directory, setting it so people can list and traverse the directory but not changing the file permissions, people could list the file but could not read it because they'd have no access to the file. You need to set both permissions for people to actually be able to view the file.

Same thing in Pg. You need both schema USAGE rights and object rights to perform an action on an object, like SELECT from a table.

(The analogy falls down a bit in that PostgreSQL doesn't have row-level security yet, so the user can still "see" that the table exists in the schema by SELECTing from pg_class directly. They can't interact with it in any way, though, so it's just the "list" part that isn't quite the same.)
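Putting that together, a typical read-only setup needs both grants - a minimal sketch with hypothetical schema/role names:

GRANT USAGE ON SCHEMA myschema TO readonly_role;
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO readonly_role;

-- Optionally, also cover tables created later by the schema owner:
ALTER DEFAULT PRIVILEGES IN SCHEMA myschema GRANT SELECT ON TABLES TO readonly_role;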
PostgreSQL
17,338,621
241
I want the code to be able to automatically fill the timestamp value when a new row is inserted as I can do in MySQL using CURRENT_TIMESTAMP. How will I be able to achieve this in PostgreSQL? CREATE TABLE users ( id serial not null, firstname varchar(100), middlename varchar(100), lastname varchar(100), email varchar(200), timestamp timestamp )
To populate the column during insert, use a DEFAULT value: CREATE TABLE users ( id serial not null, firstname varchar(100), middlename varchar(100), lastname varchar(100), email varchar(200), timestamp timestamp default current_timestamp ) Note that the value for that column can explicitly be overwritten by supplying a value in the INSERT statement. If you want to prevent that you do need a trigger. You also need a trigger if you need to update that column whenever the row is updated (as mentioned by E.J. Brennan) Note that using reserved words for column names is usually not a good idea. You should find a different name than timestamp
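If you do want the trigger-maintained variant mentioned above (e.g. an updated_at column that changes on every UPDATE), a hedged sketch could look like this - the column and trigger names are illustrative, not from the question, and the users table is assumed to have an updated_at timestamp column:

CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := current_timestamp;  -- overwrite whatever the client supplied
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_set_updated_at
BEFORE UPDATE ON users
FOR EACH ROW EXECUTE PROCEDURE set_updated_at();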
PostgreSQL
9,556,474
241
How do I delete an enum type value that I created in postgresql? create type admin_level1 as enum('classifier', 'moderator', 'god'); E.g. I want to remove moderator from the list. I can't seem to find anything on the docs. I'm using Postgresql 9.3.4.
You delete (drop) enum types like any other type, with DROP TYPE: DROP TYPE admin_level1; Is it possible you're actually asking about how to remove an individual value from an enum type? If so, you can't. It's not supported: Although enum types are primarily intended for static sets of values, there is support for adding new values to an existing enum type, and for renaming values (see ALTER TYPE). Existing values cannot be removed from an enum type, nor can the sort ordering of such values be changed, short of dropping and re-creating the enum type. You must create a new type without the value, convert all existing uses of the old type to use the new type, then drop the old type. E.g. CREATE TYPE admin_level1 AS ENUM ('classifier', 'moderator'); CREATE TABLE blah ( user_id integer primary key, power admin_level1 not null ); INSERT INTO blah(user_id, power) VALUES (1, 'moderator'), (10, 'classifier'); ALTER TYPE admin_level1 ADD VALUE 'god'; INSERT INTO blah(user_id, power) VALUES (42, 'god'); -- .... oops, maybe that was a bad idea CREATE TYPE admin_level1_new AS ENUM ('classifier', 'moderator'); -- Remove values that won't be compatible with new definition -- You don't have to delete, you might update instead DELETE FROM blah WHERE power = 'god'; -- Convert to new type, casting via text representation ALTER TABLE blah ALTER COLUMN power TYPE admin_level1_new USING (power::text::admin_level1_new); -- and swap the types DROP TYPE admin_level1; ALTER TYPE admin_level1_new RENAME TO admin_level1;
PostgreSQL
25,811,017
239
I want to create a database which does not exist yet through JDBC. Unlike MySQL, PostgreSQL does not support CREATE DATABASE IF NOT EXISTS syntax. What is the best way to accomplish this?

The application does not know if the database exists or not. It should check, and if the database exists it should be used. So it makes sense to connect to the desired database and, if the connection fails due to the non-existence of the database, create the new database (by connecting to the default postgres database). I checked the error code returned by Postgres but I could not find any relevant code that specifies this. Another method would be to connect to the postgres database, check if the desired database exists, and take action accordingly. The second one is a bit tedious to work out.

Is there any way to achieve this functionality in Postgres?
Restrictions You can ask the system catalog pg_database - accessible from any database in the same database cluster. The tricky part is that CREATE DATABASE can only be executed as a single statement. The manual: CREATE DATABASE cannot be executed inside a transaction block. So it cannot be run directly inside a function or DO statement, where it would be inside a transaction block implicitly. SQL procedures, introduced with Postgres 11, cannot help with this either. Workaround from within psql You can work around it from within psql by executing the DDL statement conditionally: SELECT 'CREATE DATABASE mydb' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydb')\gexec The manual: \gexec Sends the current query buffer to the server, then treats each column of each row of the query's output (if any) as a SQL statement to be executed. Workaround from the shell With \gexec you only need to call psql once: echo "SELECT 'CREATE DATABASE mydb' WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mydb')\gexec" | psql You may need more psql options for your connection; role, port, password, ... See: Run batch file with psql command without password The same cannot be called with psql -c "SELECT ...\gexec" since \gexec is a psql meta‑command and the -c option expects a single command for which the manual states: command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands within a -c option. Workaround from within Postgres transaction You could use a dblink connection back to the current database, which runs outside of the transaction block. Effects can therefore also not be rolled back. Install the additional module dblink for this (once per database): How to use (install) dblink in PostgreSQL? Then: DO $do$ BEGIN IF EXISTS (SELECT FROM pg_database WHERE datname = 'mydb') THEN RAISE NOTICE 'Database already exists'; -- optional ELSE PERFORM dblink_exec('dbname=' || current_database() -- current db , 'CREATE DATABASE mydb'); END IF; END $do$; Again, you may need more psql options for the connection. See Ortwin's added answer: Simulate CREATE DATABASE IF NOT EXISTS for PostgreSQL? Detailed explanation for dblink: How do I do large non-blocking updates in PostgreSQL? You can make this a function for repeated use.
PostgreSQL
18,389,124
237
Seems like Money type is discouraged as described here. My application needs to store currency, which datatype shall I be using? Numeric, Money or FLOAT?
Your source is in no way official. It dates to 2011 and I don't even recognize the authors. If the money type was officially "discouraged" PostgreSQL would say so in the manual - which it doesn't. For a more official source, read this thread in pgsql-general (from just this week!), with statements from core developers including D'Arcy J.M. Cain (original author of the money type) and Tom Lane: Related answer (and comments!) about improvements in recent releases: Jasper Report: unable to get value for field 'x' of class 'org.postgresql.util.PGmoney' Basically, money has its (very limited) uses. The Postgres Wiki suggests to largely avoid it, except for those narrowly defined cases. The advantage over numeric is performance. decimal is just an alias for numeric in Postgres, and widely used for monetary data, being an "arbitrary precision" type. The manual: The type numeric can store numbers with a very large number of digits. It is especially recommended for storing monetary amounts and other quantities where exactness is required. Personally, I like to store currency as integer representing Cents if fractional Cents never occur (basically where money makes sense). That's more efficient than any other of the mentioned options.
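A minimal sketch of the two recommended options (the table and column names are made up for illustration):

-- Arbitrary-precision, exact - the general recommendation for monetary amounts:
CREATE TABLE prices (
    id     bigserial PRIMARY KEY,
    amount numeric(12,2) NOT NULL
);

-- Or whole Cents as an integer, where fractional Cents never occur:
-- amount_cents integer NOT NULL

numeric(12,2) limits values to two fractional digits; plain numeric with no modifier stores exact values of effectively unbounded precision.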
PostgreSQL
15,726,535
236
I'm trying to restore my dump file, but it caused an error: psql:psit.sql:27485: invalid command \N Is there a solution? I searched, but I didn't get a clear answer.
Postgres uses \N as a substitute symbol for the NULL value, but all psql commands start with a backslash \ symbol. You can get these messages when a COPY statement fails but the loading of the dump continues. This message is a false alarm. You have to search all lines prior to this error if you want to see the real reason why the COPY statement failed.

It is possible to switch psql to "stop on first error" mode to find the error:

psql -v ON_ERROR_STOP=1
PostgreSQL
20,427,689
235
I have a table like this:

Movie  Actor
A      1
A      2
A      3
B      4

I want to get the name of a movie and all actors in that movie, and I want the result to be in a format like this:

Movie  ActorList
A      1, 2, 3

How can I do it?
Simpler with the aggregate function string_agg() (Postgres 9.0 or later): SELECT movie, string_agg(actor, ', ') AS actor_list FROM tbl GROUP BY 1; The 1 in GROUP BY 1 is a positional reference and a shortcut for GROUP BY movie in this case. string_agg() expects data type text as input. Other types need to be cast explicitly (actor::text) - unless an implicit cast to text is defined - which is the case for all other string types (varchar, character, name, ...) and some other types. As isapir commented, you can add an ORDER BY clause in the aggregate call to get a sorted list - should you need that. Like: SELECT movie, string_agg(actor, ', ' ORDER BY actor) AS actor_list FROM tbl GROUP BY 1; But it's typically faster to sort rows in a subquery. See: Create array in SELECT
PostgreSQL
15,847,173
235
I have a question about the ALTER TABLE command on a really large table (almost 30 millions rows). One of its columns is a varchar(255) and I would like to resize it to a varchar(40). Basically, I would like to change my column by running the following command: ALTER TABLE mytable ALTER COLUMN mycolumn TYPE varchar(40); I have no problem if the process is very long but it seems my table is no more readable during the ALTER TABLE command. Is there a smarter way? Maybe add a new column, copy values from the old column, drop the old column and finally rename the new one? Note: I use PostgreSQL 9.0.
In PostgreSQL 9.1 there is an easier way http://www.postgresql.org/message-id/[email protected] CREATE TABLE foog(a varchar(10)); ALTER TABLE foog ALTER COLUMN a TYPE varchar(30); postgres=# \d foog Table "public.foog" Column | Type | Modifiers --------+-----------------------+----------- a | character varying(30) |
PostgreSQL
7,729,287
235
Is it possible? Can I specify it on the connection URL? How do I do that?
I know this was answered already, but I just ran into the same issue trying to specify the schema to use for the liquibase command line. Update As of JDBC v9.4 you can specify the url with the new currentSchema parameter like so: jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema Appears based on an earlier patch: http://web.archive.org/web/20141025044151/http://postgresql.1045698.n5.nabble.com/Patch-to-allow-setting-schema-search-path-in-the-connectionURL-td2174512.html Which proposed url's like so: jdbc:postgresql://localhost:5432/mydatabase?searchpath=myschema
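If you're stuck on a driver version without currentSchema, a session-level fallback is to issue a plain SQL statement right after connecting (a hedged sketch; the schema name is illustrative):

SET search_path TO myschema, public;

This only affects the current session, so it has to be run on every new connection (or be baked into the role with ALTER ROLE ... SET search_path).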
PostgreSQL
4,168,689
235
I'm using the PostgreSQL database for my Ruby on Rails application (on Mac OS X 10.9). Are there any detailed instructions on how to upgrade PostgreSQL database? I'm afraid I will destroy the data in the database or mess it up.
Assuming you've used home-brew to install and upgrade Postgres, you can perform the following steps. Stop current Postgres server: launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist Initialize a new 10.1 database: initdb /usr/local/var/postgres10.1 -E utf8 run pg_upgrade (note: change bin version if you're upgrading from something other than below): pg_upgrade -v \ -d /usr/local/var/postgres \ -D /usr/local/var/postgres10.1 \ -b /usr/local/Cellar/postgresql/9.6.5/bin/ \ -B /usr/local/Cellar/postgresql/10.1/bin/ -v to enable verbose internal logging -d the old database cluster configuration directory -D the new database cluster configuration directory -b the old PostgreSQL executable directory -B the new PostgreSQL executable directory Move new data into place: cd /usr/local/var mv postgres postgres9.6 mv postgres10.1 postgres Restart Postgres: launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist Check /usr/local/var/postgres/server.log for details and to make sure the new server started properly. Finally, re-install the rails pg gem gem uninstall pg gem install pg I suggest you take some time to read the PostgreSQL documentation to understand exactly what you're doing in the above steps to minimize frustrations.
PostgreSQL
24,379,373
233
Postgres 8.4 and greater databases contain common tables in public schema and company specific tables in company schema. company schema names always start with 'company' and end with the company number. So there may be schemas like: public company1 company2 company3 ... companynn An application always works with a single company. The search_path is specified accordingly in odbc or npgsql connection string, like: search_path='company3,public' How would you check if a given table exists in a specified companyn schema? eg: select isSpecific('company3','tablenotincompany3schema') should return false, and select isSpecific('company3','tableincompany3schema') should return true. In any case, the function should check only companyn schema passed, not other schemas. If a given table exists in both public and the passed schema, the function should return true. It should work for Postgres 8.4 or later.
It depends on what you want to test exactly. Information schema? To find "whether the table exists" (no matter who's asking), querying the information schema (information_schema.tables) is incorrect, strictly speaking, because (per documentation): Only those tables and views are shown that the current user has access to (by way of being the owner or having some privilege). The query provided by @kong can return FALSE, but the table can still exist. It answers the question: How to check whether a table (or view) exists, and the current user has access to it? SELECT EXISTS ( SELECT FROM information_schema.tables WHERE table_schema = 'schema_name' AND table_name = 'table_name' ); The information schema is mainly useful to stay portable across major versions and across different RDBMS. But the implementation is slow, because Postgres has to use sophisticated views to comply to the standard (information_schema.tables is a rather simple example). And some information (like OIDs) gets lost in translation from the system catalogs - which actually carry all information. System catalogs Your question was: How to check whether a table exists? SELECT EXISTS ( SELECT FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE n.nspname = 'schema_name' AND c.relname = 'table_name' AND c.relkind = 'r' -- only tables ); Use the system catalogs pg_class and pg_namespace directly, which is also considerably faster. However, per documentation on pg_class: The catalog pg_class catalogs tables and most everything else that has columns or is otherwise similar to a table. This includes indexes (but see also pg_index), sequences, views, materialized views, composite types, and TOAST tables; For this particular question you can also use the system view pg_tables. A bit simpler and more portable across major Postgres versions (which is hardly of concern for this basic query): SELECT EXISTS ( SELECT FROM pg_tables WHERE schemaname = 'schema_name' AND tablename = 'table_name' ); Identifiers have to be unique among all objects mentioned above. If you want to ask: How to check whether a name for a table or similar object in a given schema is taken? SELECT EXISTS ( SELECT FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace WHERE n.nspname = 'schema_name' AND c.relname = 'table_name' ); Related answer on dba.SE discussing "Information schema vs. system catalogs" Alternative: cast to regclass SELECT 'schema_name.table_name'::regclass; This raises an exception if the (optionally schema-qualified) table (or other object occupying that name) does not exist. If you do not schema-qualify the table name, a cast to regclass defaults to the search_path and returns the OID for the first table found - or an exception if the table is in none of the listed schemas. Note that the system schemas pg_catalog and pg_temp (the schema for temporary objects of the current session) are automatically part of the search_path. You can use that and catch a possible exception in a function. Example: Check if sequence exists in Postgres (plpgsql) A query like above avoids possible exceptions and is therefore slightly faster. Note that the each component of the name is treated as identifier here - as opposed to above queries where names are given as literal strings. Identifiers are cast to lower case unless double-quoted. If you have forced otherwise illegal identifiers with double-quotes, those need to be included. 
Like: SELECT '"Dumb_SchName"."FoolishTbl"'::regclass; See: Are PostgreSQL column names case-sensitive? to_regclass(rel_name) in Postgres 9.4+ Much simpler now: SELECT to_regclass('schema_name.table_name'); Same as the cast, but it returns ... ... null rather than throwing an error if the name is not found
PostgreSQL
20,582,500
233
After this comment to one of my questions, I'm thinking if it is better using one database with X schemas or vice versa. I'm developing a web application where, when people register, I create (actually) a database (no, it's not a social network: everyone must have access to his own data and never see the data of the other user). That's the way I used for the previous version of my application (that is still running on MySQL): through the Plesk API, for every registration, I do: Create a database user with limited privileges; Create a database that can be accessed just by the previous created user and the superuser (for maintenance) Populate the database Now, I'll need to do the same with PostgreSQL (the project is getting mature and MySQL don't fulfil all the needs). I need to have all the databases/schemas backups independent: pg_dump works perfectly in both ways, and the same for the users that can be configured to access just one schema or one database. So, assuming you are more experienced PostgreSQL users than me, what do you think is the best solution for my situation, and why? Will there be performance differences using $x database instead of $x schemas? And what solution will be better to maintain in the future (reliability)? All of my databases/schemas will always have the same structure! For the backups issue (using pg_dump), is maybe better using one database and many schemas, dumping all the schemas at once: recovering will be quite simple loading the main dump in a development machine and then dump and restore just the schema needed: there is one additional step, but dumping all the schema seem faster than dumping them one by one. UPDATE 2012 Well, the application structure and design changed so much during those last two years. I'm still using the "one db with many schemas" -approach, but still, I have one database for each version of my application: Db myapp_01 \_ my_customer_foo_schema \_ my_customer_bar_schema Db myapp_02 \_ my_customer_foo_schema \_ my_customer_bar_schema For backups, I'm dumping each database regularly, and then moving the backups on the development server. I'm also using the PITR/WAL backup but, as I said before, it's not likely I'll have to restore all database at once. So it will probably be dismissed this year (in my situation is not the best approach). The one-db-many-schema approach worked very well for me since now, even if the application structure is totally changed. I almost forgot: all of my databases/schemas will always have the same structure! Now, every schema has its own structure that change dynamically reacting to users data flow.
A PostgreSQL "schema" is roughly the same as a MySQL "database". Having many databases on a PostgreSQL installation can get problematic; having many schemas will work with no trouble. So you definitely want to go with one database and multiple schemas within that database.
PostgreSQL
1,152,405
232
How can I find out which version of PostGIS I have?
Since some of the functions depend on other libraries like GEOS and proj4 you might want to get their versions too. Then use: SELECT PostGIS_full_version();
PostgreSQL
4,833,282
231
I can't find a definite answer to this question in the documentation. If a column is an array type, will all the entered values be individually indexed? I created a simple table with one int[] column, and put a unique index on it. I noticed that I couldn't add the same array of ints, which leads me to believe the index is a composite of the array items, not an index of each item. INSERT INTO "Test"."Test" VALUES ('{10, 15, 20}'); INSERT INTO "Test"."Test" VALUES ('{10, 20, 30}'); SELECT * FROM "Test"."Test" WHERE 20 = ANY ("Column1"); Is the index helping this query?
Yes you can index an array, but you have to use the array operators and the GIN-index type. Example: CREATE TABLE "Test"("Column1" int[]); INSERT INTO "Test" VALUES ('{10, 15, 20}'); INSERT INTO "Test" VALUES ('{10, 20, 30}'); CREATE INDEX idx_test on "Test" USING GIN ("Column1" gin__int_ops); EXPLAIN ANALYZE SELECT * FROM "Test" WHERE "Column1" @> ARRAY[20]; Result: Bitmap Heap Scan on "Test" (cost=4.26..8.27 rows=1 width=32) (actual time=0.014..0.015 rows=2 loops=1) Recheck Cond: ("Column1" @> '{20}'::integer[]) -> Bitmap Index Scan on idx_test (cost=0.00..4.26 rows=1 width=0) (actual time=0.009..0.009 rows=2 loops=1) Index Cond: ("Column1" @> '{20}'::integer[]) Total runtime: 0.062 ms Note it appears that in many cases the gin__int_ops option is required create index <index_name> on <table_name> using GIN (<column> gin__int_ops) I have not yet seen a case where it would work with the && and @> operator without the gin__int_ops options
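A hedged note on the example above: the gin__int_ops operator class comes from the additional intarray module, so it needs that extension installed; on reasonably recent versions the built-in GIN operator class for arrays (array_ops) also supports @> and && without any extension:

CREATE EXTENSION IF NOT EXISTS intarray;  -- provides gin__int_ops

-- Built-in alternative, no extension required:
CREATE INDEX idx_test_default ON "Test" USING GIN ("Column1");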
PostgreSQL
4,058,731
231
I'm trying to test out the json type in PostgreSQL 9.3. I have a json column called data in a table called reports. The JSON looks something like this: { "objects": [ {"src":"foo.png"}, {"src":"bar.png"} ], "background":"background.png" } I would like to query the table for all reports that match the 'src' value in the 'objects' array. For example, is it possible to query the DB for all reports that match 'src' = 'foo.png'? I successfully wrote a query that can match the "background": SELECT data AS data FROM reports where data->>'background' = 'background.png' But since "objects" has an array of values, I can't seem to write something that works. Is it possible to query the DB for all reports that match 'src' = 'foo.png'? I've looked through these sources but still can't get it: http://www.postgresql.org/docs/9.3/static/functions-json.html How do I query using fields inside the new PostgreSQL JSON datatype? http://michael.otacoo.com/postgresql-2/postgres-9-3-feature-highlight-json-operators/ I've also tried things like this but to no avail: SELECT json_array_elements(data->'objects') AS data from reports WHERE data->>'src' = 'foo.png'; I'm not an SQL expert, so I don't know what I am doing wrong.
jsonb in Postgres 9.4+ You can use the same query as for 9.3+ below, just with jsonb_array_elements(). But you should rather use the jsonb "contains" operator @> in combination with a matching GIN index on the expression data->'objects': CREATE INDEX reports_data_gin_idx ON reports USING gin ((data->'objects') jsonb_path_ops); SELECT * FROM reports WHERE data->'objects' @> '[{"src":"foo.png"}]'; Since the key objects holds a JSON array, we need to match the structure in the search term and wrap the array element into square brackets, too. Drop the array brackets when searching a plain record. More explanation and options: Index for finding an element in a JSON array json in Postgres 9.3+ Unnest the JSON array with the function json_array_elements() in a lateral join in the FROM clause and test for its elements: SELECT data::text, obj FROM reports r, json_array_elements(r.data#>'{objects}') obj WHERE obj->>'src' = 'foo.png'; db<>fiddle here Old sqlfiddle Or, equivalent for just a single level of nesting: SELECT * FROM reports r, json_array_elements(r.data->'objects') obj WHERE obj->>'src' = 'foo.png'; ->>, -> and #> operators are explained in the manual. Both queries use an implicit JOIN LATERAL. Closely related: Query for element of array in JSON column
PostgreSQL
22,736,742
230
When I create a new user, it cannot log in to the database. I do that like this:

postgres@Aspire:/home/XXX$ createuser dev
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) y
Shall the new role be allowed to create more new roles? (y/n) y

Then I create a database:

postgres@Aspire:/home/XXX$ createdb -O dev test_development

After that, I try psql -U dev -W test_development to log in, but get the error:

psql: FATAL: Peer authentication failed for user "dev"

I tried to solve the problem but failed.
Try: psql -U user_name -h 127.0.0.1 -d db_name where -U is the database user name -h is the hostname/IP of the local server, thus avoiding Unix domain sockets -d is the database name to connect to This is then evaluated as a "network" connection by Postgresql rather than a Unix domain socket connection, thus not evaluated as a "local" connect as you might see in pg_hba.conf: local all all peer
PostgreSQL
17,443,379
230
I'm using the official Postgres Docker image, trying to customize its configuration. For this purpose, I use the command sed to change max_connections for example: sed -i -e"s/^max_connections = 100.*$/max_connections = 1000/" /var/lib/postgresql/data/postgresql.conf I tried two methods to apply this configuration: The first is by adding the commands to a script and copying it within the init folder: /docker-entrypoint-initdb.d. The second method is by running the commands directly within my Dockerfile with the "RUN" command (this method worked fine with a non-official PostgreSQL image with a different path to the configuration file /etc/postgres/...). In both cases the changes fail because the configuration file is missing (I think it's not created yet). How should I change the configuration? Here is the Dockerfile used to create the image: # Database (http://www.cs3c.ma/) FROM postgres:9.4 MAINTAINER Sabbane <[email protected]> ENV TERM=xterm RUN apt-get update RUN apt-get install -y nano ADD scripts /scripts # ADD scripts/setup-my-schema.sh /docker-entrypoint-initdb.d/ # Allow connections from anywhere. RUN sed -i -e"s/^#listen_addresses =.*$/listen_addresses = '*'/" /var/lib/postgresql/data/postgresql.conf RUN echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf # Configure logs RUN sed -i -e"s/^#logging_collector = off.*$/logging_collector = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_directory = 'pg_log'.*$/log_directory = '\/var\/log\/postgresql'/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_filename = 'postgresql-\%Y-\%m-\%d_\%H\%M\%S.log'.*$/log_filename = 'postgresql_\%a.log'/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_file_mode = 0600.*$/log_file_mode = 0644/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_truncate_on_rotation = off.*$/log_truncate_on_rotation = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_rotation_age = 1d.*$/log_rotation_age = 1d/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_min_duration_statement = -1.*$/log_min_duration_statement = 0/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_checkpoints = off.*$/log_checkpoints = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_connections = off.*$/log_connections = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_disconnections = off.*$/log_disconnections = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^log_line_prefix = '\%t \[\%p-\%l\] \%q\%u@\%d '.*$/log_line_prefix = '\%t \[\%p\]: \[\%l-1\] user=\%u,db=\%d'/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_lock_waits = off.*$/log_lock_waits = on/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#log_temp_files = -1.*$/log_temp_files = 0/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#statement_timeout = 0.*$/statement_timeout = 1800000 # in milliseconds, 0 is disabled (current 30min)/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^lc_messages = 'en_US.UTF-8'.*$/lc_messages = 'C'/" /var/lib/postgresql/data/postgresql.conf # Performance Tuning RUN sed -i -e"s/^max_connections = 100.*$/max_connections = 1000/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^shared_buffers =.*$/shared_buffers = 16GB/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#effective_cache_size = 128MB.*$/effective_cache_size = 48GB/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#work_mem = 
1MB.*$/work_mem = 16MB/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#maintenance_work_mem = 16MB.*$/maintenance_work_mem = 2GB/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#checkpoint_segments = .*$/checkpoint_segments = 32/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#checkpoint_completion_target = 0.5.*$/checkpoint_completion_target = 0.7/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#wal_buffers =.*$/wal_buffers = 16MB/" /var/lib/postgresql/data/postgresql.conf RUN sed -i -e"s/^#default_statistics_target = 100.*$/default_statistics_target = 100/" /var/lib/postgresql/data/postgresql.conf VOLUME ["/var/lib/postgresql/data", "/var/log/postgresql"] CMD ["postgres"] With this Dockerfile, the build process produces an error: sed: can't read /var/lib/postgresql/data/postgresql.conf: No such file or directory
With Docker Compose When working with Docker Compose, you can use command: postgres -c option=value in your docker-compose.yml to configure Postgres. Adapting Vojtech Vitek's answer, you can use command: postgres -c config_file=/etc/postgresql.conf to change the config file Postgres will use. As per the comment by johnthagen, the command can be shortened to command: -c config_file=/etc/postgresql.conf You'd mount your custom config file with a volume: volumes: - ./customPostgresql.conf:/etc/postgresql.conf Here's the docker-compose.yml of a demo application, showing how to configure Postgres: services: db: image: postgres:16.2 command: -c config_file=/etc/postgresql.conf environment: POSTGRES_USER: postgres # Provide the password via an environment variable. If the variable is unset or empty, use a default password # Explanation of this shell feature: https://unix.stackexchange.com/questions/122845/using-a-b-for-variable-assignment-in-scripts/122848#122848 POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-4WXUms893U6j4GE&Hvk3S*hqcqebFgo!vZi} POSTGRES_DB: test_db # Optionally, expose the database port to the host. Not necessary for communication between the app and the database ports: - "5432:5432" volumes: - ./customPostgresql.conf:/etc/postgresql.conf # Add the database files to the host - ./postgres_data:/var/lib/postgresql/data # The directory "./logs" is created by run.sh on the host. Postgres is configured via customPostgresql.conf to write log messages to "/logs" - ./logs:/logs # The container should use the user and group IDs from the host. When we set the owner of /logs to the user "postgres" in the host (via run.sh), the ID of the container's user "postgres" will match. # From https://stackoverflow.com/questions/23544282/what-is-the-best-way-to-manage-permissions-for-docker-shared-volumes#45640469 - /etc/passwd:/etc/passwd:ro - /etc/group:/etc/group:ro networks: myApp-network: # Our application can communicate with the database using this hostname aliases: - myPostgres my_app: networks: - myApp-network build: . depends_on: - db networks: myApp-network: The example custom Postgres config makes Postgres log to a directory on the host. Mind that the directory on the host needs to be owned by user "postgres", the script run.sh takes care of that.
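If you are not using Compose, the same idea works with plain docker run, because everything after the image name is passed as arguments to the postgres entrypoint. A minimal sketch (container name, password and parameter values below are placeholders, not taken from the question):

docker run -d --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v "$PWD/customPostgresql.conf":/etc/postgresql.conf \
  postgres:16 -c config_file=/etc/postgresql.conf

Or, for a handful of settings, skip the config file entirely and pass individual -c flags:

docker run -d --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword postgres:16 -c max_connections=1000 -c shared_buffers=256MB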
PostgreSQL
30,848,670
229
How do I write an SQL script to create a ROLE in PostgreSQL 9.1, but without raising an error if it already exists? The current script simply has: CREATE ROLE my_user LOGIN PASSWORD 'my_password'; This fails if the user already exists. I'd like something like: IF NOT EXISTS (SELECT * FROM pg_user WHERE username = 'my_user') BEGIN CREATE ROLE my_user LOGIN PASSWORD 'my_password'; END; ... but that doesn't work - IF doesn't seem to be supported in plain SQL. I have a batch file that creates a PostgreSQL 9.1 database, role and a few other things. It calls psql.exe, passing in the name of an SQL script to run. So far all these scripts are plain SQL and I'd like to avoid PL/pgSQL and such, if possible.
Simple script (question asked) Building on @a_horse_with_no_name's answer and improved with @Gregory's comment: DO $do$ BEGIN IF EXISTS ( SELECT FROM pg_catalog.pg_roles WHERE rolname = 'my_user') THEN RAISE NOTICE 'Role "my_user" already exists. Skipping.'; ELSE CREATE ROLE my_user LOGIN PASSWORD 'my_password'; END IF; END $do$; Unlike, for instance, with CREATE TABLE there is no IF NOT EXISTS clause for CREATE ROLE (up to at least Postgres 14). And you cannot execute dynamic DDL statements in plain SQL. Your request to "avoid PL/pgSQL" is impossible except by using another PL. The DO statement uses PL/pgSQL as default procedural language: DO [ LANGUAGE lang_name ] code ... lang_name The name of the procedural language the code is written in. If omitted, the default is plpgsql. No race condition The above simple solution allows for a race condition in the tiny time frame between looking up the role and creating it. If a concurrent transaction creates the role in between we get an exception after all. In most workloads, that will never happen as creating roles is a rare operation carried out by an admin. But there are highly contentious workloads like @blubb mentioned. @Pali added a solution trapping the exception. But a code block with an EXCEPTION clause is expensive. The manual: A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a block without one. Therefore, don't use EXCEPTION without need. Actually raising an exception (and then trapping it) is comparatively expensive on top of it. All of this only matters for workloads that execute it a lot - which happens to be the primary target audience. To optimize: DO $do$ BEGIN IF EXISTS ( SELECT FROM pg_catalog.pg_roles WHERE rolname = 'my_user') THEN RAISE NOTICE 'Role "my_user" already exists. Skipping.'; ELSE BEGIN -- nested block CREATE ROLE my_user LOGIN PASSWORD 'my_password'; EXCEPTION WHEN duplicate_object THEN RAISE NOTICE 'Role "my_user" was just created by a concurrent transaction. Skipping.'; END; END IF; END $do$; Much cheaper: If the role already exists, we never enter the expensive code block. If we enter the expensive code block, the role only ever exists if the unlikely race condition hits. So we hardly ever actually raise an exception (and catch it).
PostgreSQL
8,092,086
229
Which of the following two is more accurate? select numbackends from pg_stat_database; select count(*) from pg_stat_activity;
Those two queries aren't equivalent. The equivalent version of the first one would be: SELECT sum(numbackends) FROM pg_stat_database; In that case, I would expect that version to be slightly faster than the second one, simply because it has fewer rows to count. But you are not likely going to be able to measure a difference. Both queries are based on exactly the same data, so they will be equally accurate.
PostgreSQL
5,267,715
227
I regularly need to delete all the data from my PostgreSQL database before a rebuild. How would I do this directly in SQL? At the moment I've managed to come up with a SQL statement that returns all the commands I need to execute: SELECT 'TRUNCATE TABLE ' || tablename || ';' FROM pg_tables WHERE tableowner='MYUSER'; But I can't see a way to execute them programmatically once I have them.
FrustratedWithFormsDesigner is correct, PL/pgSQL can do this. Here's the script: CREATE OR REPLACE FUNCTION truncate_tables(username IN VARCHAR) RETURNS void AS $$ DECLARE statements CURSOR FOR SELECT tablename FROM pg_tables WHERE tableowner = username AND schemaname = 'public'; BEGIN FOR stmt IN statements LOOP EXECUTE 'TRUNCATE TABLE ' || quote_ident(stmt.tablename) || ' CASCADE;'; END LOOP; END; $$ LANGUAGE plpgsql; This creates a stored function (you need to do this just once) which you can afterwards use like this: SELECT truncate_tables('MYUSER');
PostgreSQL
2,829,158
227
I have just installed postgresql and I specified password x during installation. When I try to do createdb and specify any password I get the message: createdb: could not connect to database postgres: FATAL: password authentication failed for user Same for createuser. How should I start? Can I add myself as a user to the database?
The other answers were not completely satisfying to me. Here's what worked for postgresql-9.1 on Xubuntu 12.04.1 LTS.

1. Connect to the default database with user postgres: sudo -u postgres psql template1
2. Set the password for user postgres, then exit psql (Ctrl-D): ALTER USER postgres with encrypted password 'xxxxxxx';
3. Edit the pg_hba.conf file: sudo vim /etc/postgresql/9.1/main/pg_hba.conf and change "peer" to "md5" on the line concerning postgres, so that it reads: local all postgres md5 (To know what version of postgresql you are running, look for the version folder under /etc/postgresql. Also, you can use Nano or another editor instead of VIM.)
4. Restart the database: sudo /etc/init.d/postgresql restart (Here you can check if it worked with psql -U postgres.)
5. Create a user having the same name as you (to find it, you can type whoami): sudo createuser -U postgres -d -e -E -l -P -r -s <my_name> The options tell postgresql to create a user that can login, create databases, create new roles, is a superuser, and will have an encrypted password. The really important ones are -P -E, so that you're asked to type the password that will be encrypted, and -d so that you can do a createdb. Beware of passwords: it will first ask you for the new password (for the new user) twice, and then once for the postgres password (the one specified in step 2).
6. Again, edit the pg_hba.conf file (see step 3 above), and change "peer" to "md5" on the line concerning "all" other users, so that it reads: local all all md5
7. Restart (like in step 4), and check that you can log in without -U postgres: psql template1

Note that if you do a mere psql, it will fail since it will try to connect you to a default database having the same name as you (i.e. whoami). template1 is the admin database that is here from the start. Now createdb <dbname> should work.
PostgreSQL
1,471,571
227
Let's say I have a table like this: name | score_a | score_b -----+---------+-------- Joe | 100 | 24 Sam | 96 | 438 Bob | 76 | 101 ... | ... | ... I'd like to select the minimum of score_a and score_b. In other words, something like: SELECT name, MIN(score_a, score_b) FROM table The results, of course, would be: name | min -----+----- Joe | 24 Sam | 96 Bob | 76 ... | ... However, when I try this in Postgres, I get, "No function matches the given name and argument types. You may need to add explicit type casts." MAX() and MIN() appear to work across rows rather than columns. Is it possible to do what I'm attempting?
LEAST(a, b): The GREATEST and LEAST functions select the largest or smallest value from a list of any number of expressions. The expressions must all be convertible to a common data type, which will be the type of the result (see Section 10.5 for details). NULL values in the list are ignored. The result will be NULL only if all the expressions evaluate to NULL. Note that GREATEST and LEAST are not in the SQL standard, but are a common extension. Some other databases make them return NULL if any argument is NULL, rather than only when all are NULL...
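Applied to the question's table (the table name my_table below is a placeholder, since the question doesn't name it), a sketch might look like:

SELECT name, LEAST(score_a, score_b) AS min FROM my_table;

-- GREATEST works the same way for the per-row maximum:
SELECT name, GREATEST(score_a, score_b) AS max FROM my_table;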
PostgreSQL
318,988
227
I have trouble connecting to my own postgres db on a local server. I googled some similar problems and came up with this manual https://help.ubuntu.com/stable/serverguide/postgresql.html so: pg_hba.conf says: # TYPE DATABASE USER ADDRESS METHOD # "local" is for Unix domain socket connections only local all all trust # IPv4 local connections: host all all 127.0.0.1/32 md5 # IPv6 local connections: host all all ::1/128 trust then I create a user and assign a password for it: postgres=# CREATE ROLE asunotest; CREATE ROLE postgres=# ALTER ROLE asunotest WITH ENCRYPTED PASSWORD '1234'; ALTER ROLE but it doesn't let me in: -bash-4.2$ psql -h 127.0.0.1 -U asunotest Password for user asunotest: 1234 psql: FATAL: role "asunotest" is not permitted to log in what could be the problem?
The role you have created is not allowed to log in. You have to give the role permission to log in. One way to do this is to log in as the postgres user and update the role: psql -U postgres Once you are logged in, type: ALTER ROLE "asunotest" WITH LOGIN; Here's the documentation http://www.postgresql.org/docs/9.0/static/sql-alterrole.html
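As a side note, if you are creating the role from scratch you can grant login and set the password in one statement; a sketch reusing the role name from the question:

CREATE ROLE asunotest WITH LOGIN ENCRYPTED PASSWORD '1234';
-- or, equivalently, CREATE USER, which implies LOGIN:
CREATE USER asunotest WITH ENCRYPTED PASSWORD '1234';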
PostgreSQL
35,254,786
226
I am using Postgres DB for my product. While doing the batch insert using slick 3, I am getting an error message: org.postgresql.util.PSQLException: FATAL: sorry, too many clients already. My batch insert operation will be more than thousands of records. Max connection for my postgres is 100. How to increase the max connections?
Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.

Considerations

max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections. Before increasing your connection count you might need to scale up your deployment. But before that, you should consider whether you really need an increased connection limit. Each PostgreSQL connection consumes RAM for managing the connection or the client using it. The more connections you have, the more RAM you will be using that could instead be used to run the database. A well-written app typically doesn't need a large number of connections. If you have an app that does need a large number of connections then consider using a tool such as PgBouncer which can pool connections for you. As each connection consumes RAM, you should be looking to minimize their use.

How to increase max connections

1. Increase max_connections and shared_buffers in /var/lib/pgsql/{version_number}/data/postgresql.conf change max_connections = 100 shared_buffers = 24MB to max_connections = 300 shared_buffers = 80MB The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data. If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system. It's unlikely you'll find that using more than 40% of RAM works better than a smaller amount (like 25%). Be aware that if your system or PostgreSQL build is 32-bit, it might not be practical to set shared_buffers above 2 ~ 2.5GB. Note that on Windows, large values for shared_buffers aren't as effective, and you may find better results keeping it relatively low and using the OS cache more instead. On Windows the useful range is 64MB to 512MB.

2. Change kernel.shmmax You would need to increase the kernel's max segment size to be slightly larger than shared_buffers. In the file /etc/sysctl.conf set the parameter as shown below. It will take effect when PostgreSQL restarts (the following line makes the kernel max 96MB): kernel.shmmax=100663296

References Postgres Max Connections And Shared Buffers Tuning Your PostgreSQL Server
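To check how close you are to the limit before and after the change, you can compare the configured ceiling with the current number of backends; a quick sketch:

SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;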
PostgreSQL
30,778,015
226
I am trying to query my postgresql db to return results where a date is in certain month and year. In other words I would like all the values for a month-year. The only way i've been able to do it so far is like this: SELECT user_id FROM user_logs WHERE login_date BETWEEN '2014-02-01' AND '2014-02-28' Problem with this is that I have to calculate the first date and last date before querying the table. Is there a simpler way to do this? Thanks
With dates (and times) many things become simpler if you use >= start AND < end. For example: SELECT user_id FROM user_logs WHERE login_date >= '2014-02-01' AND login_date < '2014-03-01' In this case you still need to calculate the start date of the month you need, but that should be straightforward in any number of ways. The end date is also simplified; just add exactly one month. No messing about with 28th, 30th, 31st, etc. This structure also has the advantage of being able to maintain use of indexes. Many people may suggest a form such as the following, but it does not use indexes: WHERE date_part('year', login_date) = 2014 AND date_part('month', login_date) = 2 This involves calculating the conditions for every single row in the table (a scan) and not using an index to find the range of rows that will match (a range-seek).
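If you want to avoid working out the month boundaries by hand, date_trunc can derive them from any date inside the month; a sketch (the date literal is only an example):

SELECT user_id
FROM   user_logs
WHERE  login_date >= date_trunc('month', DATE '2014-02-15')
AND    login_date <  date_trunc('month', DATE '2014-02-15') + INTERVAL '1 month';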
PostgreSQL
23,335,970
226
I have two string columns a and b in a table foo. select a, b from foo returns values a and b. However, concatenation of a and b does not work. I tried : select a || b from foo and select a||', '||b from foo Update from comments: both columns are type character(2).
With string types (including character(2)), the displayed concatenation just works because, quoting the manual: [...] the string concatenation operator (||) accepts non-string input, so long as at least one input is of a string type, as shown in Table 9.8. For other cases, insert an explicit coercion to text [...] Bold emphasis mine. The 2nd example select a||', '||b from foo works for any data types because the untyped string literal ', ' defaults to type text making the whole expression valid. For non-string data types, you can "fix" the 1st statement by casting at least one argument to text. Any type can be cast to text. SELECT a::text || b AS ab FROM foo; Judging from your own answer, "does not work" was supposed to mean "returns null". The result of anything concatenated to null is null. If null values can be involved and the result shall not be null, use concat_ws() to concatenate any number of values: SELECT concat_ws(', ', a, b) AS ab FROM foo; Separators are only added between non-null values, i.e. only where necessary. Or concat() if you don't need separators: SELECT concat(a, b) AS ab FROM foo; No need for type casts since both functions take "any" input and work with text representations. However, that's also why the function volatility of both concat() and concat_ws() is only STABLE, not IMMUTABLE. If you need an immutable function (like for an index, a generated column, or for partitioning), see: Create an immutable clone of concat_ws PostgreSQL full text search on many columns More details (and why COALESCE is a poor substitute) in this related answer: Combine two columns and add into one new column Asides + (as mentioned in comments) is not a valid operator for string concatenation in Postgres (or standard SQL). It's a private idea of Microsoft to add this to their products. There is hardly any good reason to use character(n) (synonym: char(n)). Use text or varchar. Details: Any downsides of using data type "text" for storing strings? Best way to check for "empty or null value"
PostgreSQL
19,942,824
226
I have a small table and a certain field contains the type "character varying". I'm trying to change it to "Integer" but it gives an error that casting is not possible. Is there a way around this or should I just create another table and bring the records into it using a query. The field contains only integer values.
There is no implicit (automatic) cast from text or varchar to integer (i.e. you cannot pass a varchar to a function expecting integer or assign a varchar field to an integer one), so you must specify an explicit cast using ALTER TABLE ... ALTER COLUMN ... TYPE ... USING: ALTER TABLE the_table ALTER COLUMN col_name TYPE integer USING (col_name::integer); Note that you may have whitespace in your text fields; in that case, use: ALTER TABLE the_table ALTER COLUMN col_name TYPE integer USING (trim(col_name)::integer); to strip white space before converting. This should've been obvious from an error message if the command was run in psql, but it's possible PgAdmin-III isn't showing you the full error. Here's what happens if I test it in psql on PostgreSQL 9.2: => CREATE TABLE test( x varchar ); CREATE TABLE => insert into test(x) values ('14'), (' 42 '); INSERT 0 2 => ALTER TABLE test ALTER COLUMN x TYPE integer; ERROR: column "x" cannot be cast automatically to type integer HINT: Specify a USING expression to perform the conversion. => ALTER TABLE test ALTER COLUMN x TYPE integer USING (trim(x)::integer); ALTER TABLE Thanks @muistooshort for adding the USING link. See also this related question; it's about Rails migrations, but the underlying cause is the same and the answer applies. If the error still occurs, then it may be related not to column values but to indexes over this column, or a column default value might fail the typecast. Indexes need to be dropped before ALTER COLUMN and recreated after. Default values should be changed appropriately.
PostgreSQL
13,170,570
226
I'm bulk loading data and can re-calculate all trigger modifications much more cheaply after the fact than on a row-by-row basis. How can I temporarily disable all triggers in PostgreSQL?
Alternatively, if you want to disable all triggers for the whole session, not just the user triggers on a single table, you can use: SET session_replication_role = replica; This disables triggers for the current session. To re-enable for the same session: SET session_replication_role = DEFAULT; Source: http://koo.fi/blog/2013/01/08/disable-postgresql-triggers-temporarily/
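If you prefer to limit the effect to the tables you are bulk loading rather than the whole session, a per-table variant is a sketch like the following (table name is a placeholder; DISABLE TRIGGER USER touches only user-defined triggers, while ALL also disables internal constraint triggers such as foreign keys and needs superuser rights):

ALTER TABLE my_table DISABLE TRIGGER USER;
-- ... bulk load here ...
ALTER TABLE my_table ENABLE TRIGGER USER;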
PostgreSQL
3,942,258
225
Postgresql got enum support some time ago. CREATE TYPE myenum AS ENUM ( 'value1', 'value2', ); How do I get all values specified in the enum with a query?
If you want an array: SELECT enum_range(NULL::myenum) If you want a separate record for each item in the enum: SELECT unnest(enum_range(NULL::myenum)) Additional Information This solution works as expected even if your enum is not in the default schema. For example, replace myenum with myschema.myenum. The data type of the returned records in the above query will be myenum. Depending on what you are doing, you may need to cast to text. e.g. SELECT unnest(enum_range(NULL::myenum))::text If you want to specify the column name, you can append AS my_col_name. Credit to Justin Ohms for pointing out some additional tips, which I incorporated into my answer.
PostgreSQL
1,616,123
224
I am trying to dump a Postgresql database using the pg_dump tool. $ pg_dump books > books.out How ever i am getting this error. pg_dump: server version: 9.2.1; pg_dump version: 9.1.6 pg_dump: aborting because of server version mismatch The --ignore-version option is now deprecated and really would not be a a solution to my issue even if it had worked. How can I upgrade pg_dump to resolve this issue?
Check the installed version(s) of pg_dump: find / -name pg_dump -type f 2>/dev/null My output was: /usr/pgsql-9.3/bin/pg_dump /usr/bin/pg_dump There are two versions installed. To update pg_dump with the newer version: sudo ln -s /usr/pgsql-9.3/bin/pg_dump /usr/bin/pg_dump --force This will create the symlink to the newer version.
PostgreSQL
12,836,312
223
I would like to manage my Heroku database with pgadmin client. By now, I've been doing this with psql. When I use data from heroku pg:credentials to connect de DB using pgadmin, I obtain: An error has occurred: Error connecting to the server: FATAL: permission denied for database "postgres" DETAIL: User does not have CONNECT privilege. How to achieve the connection?
Open the "Properties" of the Heroku server in pgAdminIII and change the "Maintenance DB" value to be the name of the database you want to connect to. The default setup is suitable for DBAs et al who can connect to any database on the server, but apparently that isn't true in your case.
PostgreSQL
11,769,860
222
Some SQL servers have a feature where INSERT is skipped if it would violate a primary/unique key constraint. For instance, MySQL has INSERT IGNORE. What's the best way to emulate INSERT IGNORE and ON DUPLICATE KEY UPDATE with PostgreSQL?
With PostgreSQL 9.5, this is now native functionality (like MySQL has had for several years): INSERT ... ON CONFLICT DO NOTHING/UPDATE ("UPSERT") 9.5 brings support for "UPSERT" operations. INSERT is extended to accept an ON CONFLICT DO UPDATE/IGNORE clause. This clause specifies an alternative action to take in the event of a would-be duplicate violation. ... Further example of new syntax: INSERT INTO user_logins (username, logins) VALUES ('Naomi',1),('James',1) ON CONFLICT (username) DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
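To emulate MySQL's INSERT IGNORE specifically (the first half of the question), the DO NOTHING form is enough; a sketch reusing the example table:

INSERT INTO user_logins (username, logins)
VALUES ('Naomi', 1)
ON CONFLICT (username) DO NOTHING;

-- or, to ignore conflicts on any unique constraint rather than a named target:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi', 1)
ON CONFLICT DO NOTHING;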
PostgreSQL
1,009,584
221
I'm going to guess that the answer is "no" based on the below error message (and this Google result), but is there anyway to perform a cross-database query using PostgreSQL? databaseA=# select * from databaseB.public.someTableName; ERROR: cross-database references are not implemented: "databaseB.public.someTableName" I'm working with some data that is partitioned across two databases although data is really shared between the two (userid columns in one database come from the users table in the other database). I have no idea why these are two separate databases instead of schema, but c'est la vie...
Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two schemas instead - in that case you don't need anything special to query across them. postgres_fdw Use postgres_fdw (foreign data wrapper) to connect to tables in any Postgres database - local or remote. Note that there are foreign data wrappers for other popular data sources. At this time, only postgres_fdw and file_fdw are part of the official Postgres distribution. For Postgres versions before 9.3 Versions this old are no longer supported, but if you need to do this in a pre-2013 Postgres installation, there is a function called dblink. I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
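A minimal postgres_fdw setup, run from databaseA, might look like the sketch below (server name, credentials and schema are placeholders; IMPORT FOREIGN SCHEMA needs Postgres 9.5+, on 9.3/9.4 use CREATE FOREIGN TABLE instead):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER database_b_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', port '5432', dbname 'databaseB');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER database_b_srv
  OPTIONS (user 'my_user', password 'my_password');

IMPORT FOREIGN SCHEMA public LIMIT TO ("someTableName")
  FROM SERVER database_b_srv INTO public;

-- the remote table can now be queried as if it were local:
SELECT * FROM "someTableName";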
PostgreSQL
46,324
220
I have 2 tables as you will see in my PosgreSQL code below. The first table students has 2 columns, one for student_name and the other student_id which is the Primary Key. In my second table called tests, this has 4 columns, one for subject_id, one for the subject_name, then one for a student with the highest score in a subject which is highestStudent_id. am trying to make highestStudent_id refer to student_id in my students table. This is the code I have below, am not sure if the syntax is correct: CREATE TABLE students ( student_id SERIAL PRIMARY KEY, player_name TEXT); CREATE TABLE tests ( subject_id SERIAL, subject_name, highestStudent_id SERIAL REFERENCES students); is the syntax highestStudent_id SERIAL REFERENCES students correct? because i have seen another one like highestStudent_id REFERENCES students(student_id)) What would be the correct way of creating the foreign key in PostgreSQL please?
Assuming this table: CREATE TABLE students ( student_id SERIAL PRIMARY KEY, player_name TEXT ); There are four different ways to define a foreign key (when dealing with a single column PK) and they all lead to the same foreign key constraint: Inline without mentioning the target column: CREATE TABLE tests ( subject_id SERIAL, subject_name text, highestStudent_id integer REFERENCES students ); Inline with mentioning the target column: CREATE TABLE tests ( subject_id SERIAL, subject_name text, highestStudent_id integer REFERENCES students (student_id) ); Out of line inside the create table: CREATE TABLE tests ( subject_id SERIAL, subject_name text, highestStudent_id integer, constraint fk_tests_students foreign key (highestStudent_id) REFERENCES students (student_id) ); As a separate alter table statement: CREATE TABLE tests ( subject_id SERIAL, subject_name text, highestStudent_id integer ); alter table tests add constraint fk_tests_students foreign key (highestStudent_id) REFERENCES students (student_id); Which one you prefer is a matter of taste. But you should be consistent in your scripts. The last two statements are the only option if you have foreign keys referencing a PK that consists of more than one column - you can't define the FK "inline" in that case, e.g. foreign key (a,b) references foo (x,y) Only version 3) and 4) will give you the ability to define your own name for the FK constraint if you don't like the system generated ones from Postgres. The serial data type is not really a data type. It's just a short hand notation that defines a default value for the column taken from a sequence. So any column referencing a column defined as serial must be defined using the appropriate base type integer (or bigint for bigserial columns)
PostgreSQL
28,558,920
219
At amazon ec2 RDS Postgresql: => SHOW rds.extensions; rds.extensions -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- btree_gin,btree_gist,chkpass,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,isn,ltree,pgcrypto,pgrowlocks,pg_trgm,plperl,plpgsql,pltcl,postgis,postgis_tiger_geocoder,postgis_topology,sslinfo,tablefunc,tsearch2,unaccent,uuid-ossp (1 row) As you can see, uuid-ossp extension does exist. However, when I'm calling the function for generation uuid_v4, it fails: CREATE TABLE my_table ( id uuid DEFAULT uuid_generate_v4() NOT NULL, name character varying(32) NOT NULL, ); What's wrong with this?
The extension is available but not installed in this database. CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
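Putting it together (note the question's CREATE TABLE also has a stray trailing comma), a sketch:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE my_table (
    id   uuid DEFAULT uuid_generate_v4() NOT NULL,
    name character varying(32) NOT NULL
);

-- On Postgres 13 or newer, gen_random_uuid() is built in and needs no extension:
-- id uuid DEFAULT gen_random_uuid() NOT NULL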
PostgreSQL
22,446,478
218
I have a table with over million rows. I need to reset sequence and reassign id column with new values (1, 2, 3, 4... etc...). Is any easy way to do that?
If you don't want to retain the ordering of ids, then you can ALTER SEQUENCE seq RESTART WITH 1; UPDATE t SET idcolumn=nextval('seq'); I doubt there's an easy way to do that in the order of your choice without recreating the whole table.
PostgreSQL
4,678,110
218
How to assign the result of a query to a variable in PL/pgSQL, the procedural language of PostgreSQL? I have a function: CREATE OR REPLACE FUNCTION test(x numeric) RETURNS character varying AS $BODY$ DECLARE name character varying(255); begin name ='SELECT name FROM test_table where id='||x; if(name='test')then --do somthing else --do the else part end if; end; return -- return my process result here $BODY$ LANGUAGE plpgsql VOLATILE In the above function I need to store the result of this query: 'SELECT name FROM test_table where id='||x; to the variable name. How to process this?
I think you're looking for SELECT select_expressions INTO: select test_table.name into name from test_table where id = x; That will pull the name from test_table where id is your function's argument and leave it in the name variable. Don't leave out the table name prefix on test_table.name or you'll get complaints about an ambiguous reference.
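Applied to the function from the question, a sketch could look like this (the returned strings are placeholders for whatever result you actually want):

CREATE OR REPLACE FUNCTION test(x numeric)
RETURNS character varying AS
$BODY$
DECLARE
    name character varying(255);
BEGIN
    SELECT t.name INTO name FROM test_table t WHERE t.id = x;
    IF name = 'test' THEN
        RETURN 'it is test';                 -- do something
    ELSE
        RETURN coalesce(name, 'not found');  -- do the else part
    END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;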
PostgreSQL
12,328,198
217
How do I change column default value in PostgreSQL? I've tried: ALTER TABLE ONLY users ALTER COLUMN lang DEFAULT 'en_GB'; But it gave me an error: ERROR: syntax error at or near "DEFAULT"
'SET' is forgotten ALTER TABLE ONLY users ALTER COLUMN lang SET DEFAULT 'en_GB';
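Note that the default only affects rows inserted afterwards; existing rows keep their current value. To remove the default again, or to backfill existing rows if that's what you actually want, a sketch:

ALTER TABLE ONLY users ALTER COLUMN lang DROP DEFAULT;

UPDATE users SET lang = 'en_GB' WHERE lang IS NULL;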
PostgreSQL
4,745,156
216
The gitpod GitHub page says Gitpod is an open-source Kubernetes application providing prebuilt, collaborative development environments in your browser - powered by VS Code. However, I can not comprehend what it actually does. Can anyone please explain.
Gitpod co-founder here. Gitpod = server-side-dev-envs + dev-env-as-code + prebuilds + IDE + collaboration. From a Git repository on GitHub, GitLab or Bitbucket, Gitpod can spin up a server-side dev environment for you in seconds. That's a Docker container that you can fully customize and that includes your source code, a Git terminal, VS Code extensions, your IDE (Theia IDE), etc. The dev environment is powerful enough to run your app and even side services like databases. Step (1) is easily repeatable and reproducible because it's automated, version-controlled and shared across the team. We call this dev-environment-as-code. Think of infrastructure-as-code for your dev environment. After (1), you're immediately ready to code, because your workspace is already built and all dependencies of your code have been downloaded. Gitpod does that by running your build tools on git push (like CI/CD would do) and "prebuilds" and stores your workspace until you need it. This really shines when reviewing PRs in Gitpod. Collaboration becomes much easier once your dev environments live server-side and your IDE runs in the browser. Sending a snapshot of your dev environment to a colleague is as easy as sending a URL. The same goes for live shared coding in the same IDE and dev environments. At the end of the day, you start treating your dev environments as something ephemeral: you start them, you code, you push your code, and you forget your dev environment. For your next thing, you'll use a fresh dev environment. The peace of mind that you get from not messing, massaging, and maintaining dev environments on your local machine is incredibly liberating. Gitpod can be used on gitpod.io, or self-hosted on Kubernetes, GCP, or AWS.
Gitpod
63,588,658
30
From what I understand: They are both tools to build container images The build itself runs in a container The build can happen on a remote node, for example in a Kubernetes cluster (Kaniko, BuildKit) They both offer advanced features such as layer caching The differences I can gather: Security model (Kaniko) BuildKit leverages more recent developments such as cache manifest and manifest lists BuildKit supports multiple architectures What I'm not clear is the extent of the overlap between the 2 set of tools and when one should be used instead of the other. For example, both tools seem to cover well the use case of self hosting a remote image build farm on a Kubernetes cluster.
Overlapping features notwithstanding, the primary differences are these:

                                  BuildKit    Kaniko
build with no root or daemon²        ✔          ✔
build multi-architecture³            ✔
remote layer caching⁴                ✔          ✔
local layer caching⁵                 ✔

² Both Kaniko and BuildKit can run daemonless and rootless, though Kaniko is, practically speaking and in my humble opinion, easier to use for building a container from within a non-root container. Kaniko "builds as a root user within a container in an unprivileged environment", but does not require root or a daemon. BuildKit, when exposed via buildx, requires a privileged docker daemon, but BuildKit requires no daemon or root privileges in its standalone form (with some tooling like RootlessKit). ³ Kaniko does not support multi-architecture builds at the time of writing this. https://docs.docker.com/desktop/multi-arch/#build-multi-arch-images-with-buildx ⁴ BuildKit and Kaniko support registry-based caching. BuildKit, however, requires that the registry support cache manifest lists. ⁵ BuildKit supports multiple --cache-to options, including the local filesystem. https://docs.docker.com/engine/reference/commandline/buildx_build/#cache-to Typically the constraints and features of your build environment or platform would dictate which tool is most appropriate, and if you have both as an option, speed may help you decide (though this should be benchmarked thoroughly).
kaniko
67,495,607
16
I am interested in setting up a monitoring service that will page me whenever there are too many jobs in the Resque queue (I have about 6 queues, I'll have different numbers for each queue). I also want to setup a very similar monitoring service that will alert me when I exceed a certain amount of failed jobs in my queue. My question is, there is a lot of keys and confusion that I see affiliated with Resque on my redis server. I don't necessarily see a straight forward way to get a count of jobs per queue or the number of failed jobs. Is there currently a trivial way to grab this data from redis?
Yes, it's quite easy, given you're using the Resque gem: require 'resque' Resque.info will return a hash, e.g. => { :pending => 54338, :processed => 12772, :queues => 2, :workers => 0, :working => 0, :failed => 8761, :servers => [ [0] "redis://192.168.1.10:6379/0" ], :environment => "development" } So to get the failed job count, simply use: Resque.info[:failed] which would give => 8761 # in my example To get the queues use: Resque.queues This returns an array, e.g. => [ [0] "superQ", [1] "anotherQ" ] You may then find the number of jobs per queue: Resque.size(queue_name) e.g. Resque.size("superQ") or Resque.size(Resque.queues[0]) .....
Redis
11,235,318
65
I'm trying to start Celery for the first time but it raises the error below. I have installed Redis and it's starting fine, but somehow Django still seems to have an issue with it: File "<frozen importlib._bootstrap_external>", line 848, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/atif/Documents/celery_test/celery-env/lib/python3.8/site-packages/kombu/transport/redis.py", line 263, in <module> class PrefixedStrictRedis(GlobalKeyPrefixMixin, redis.Redis): AttributeError: 'NoneType' object has no attribute 'Redis' Celery.py from django.conf import settings os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'celery_test.settings') app = Celery('celery_test',) app.config_from_object('django.conf:settings') # Load task modules from all registered Django apps. app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) @app.task(bind=True) def debug_task(self): print(f'Request: {self.request!r}') Settings #celery stuff --------------- BROKER_URL = 'redis://localhost:6379' CELERY_RESULT_BACKEND = 'redis://localhost:6379' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'Asia/Kolkata' celery_module/tasks.py from celery import Celery app = Celery('tasks',) @app.task def add(x, y): return x + y
Try installing the redis Python package in your virtual environment as well: pip install redis
Redis
70,164,076
65
I want to send a PING to Redis to check if the connection is working, now I could just install redis-cli, but I don't want to and curl is already there. So how can I abuse curl to do that? Basically I need to turn off what's send here: > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Host: localhost:6379 > Accept: */* > -ERR wrong number of arguments for 'get' command -ERR unknown command 'User-Agent:' -ERR unknown command 'Host:' -ERR unknown command 'Accept:' I was able to get rid of the User-Agent altogether by adding -A "", but I can't find anything else for the rest. Any idea how I can do that?
When you want to use curl, you need REST over RESP, like webdis, tinywebdis or turbowebdis. See https://github.com/markuman/tinywebdis#turbowebdis-tinywebdis--cherrywebdis $ curl -w '\n' http://127.0.0.1:8888/ping {"ping":"PONG"} Without a REST interface for Redis, you can use netcat for example. $ (printf "PING\r\n";) | nc <redis-host> 6379 +PONG For password-protected Redis you can use netcat like this: $ (printf "AUTH <password>\r\n";) | nc <redis-host> 6379 +PONG With netcat you have to build the RESP protocol by yourself. See http://redis.io/topics/protocol Update 2018-01-09: I've built a powerful bash function which pings the Redis instance at any cost over TCP function redis-ping() { # ping a redis server at any cost redis-cli -h $1 ping 2>/dev/null || \ echo $((printf "PING\r\n";) | nc $1 6379 2>/dev/null || \ exec 3<>/dev/tcp/$1/6379 && echo -e "PING\r\n" >&3 && head -c 7 <&3) } usage redis-ping localhost
Redis
33,243,121
65
I'm using Redis 2.8 on Windows, which I downloaded from a GitHub release. After unzipping it, I set maxheap in the redis.windows.conf file. After running redis-server redis.windows.conf I get # Creating Server TCP listening socket *:6379: No such file or directory, and Redis is not running correctly. I don't know why.
You must've used the .msi installer. It automagically registers a windows service which starts instantly after the installation (at least on my win 10 machine). This service uses the default config and binds to port 6379. When you start redis-server from the command line, if you haven't specified a different port through a config file, it picks up the default config again and tries to bind to port 6379 which fails. Your cli works because it connects to the redis service that's already listening on 6379. Your shutdown command stops the service and from there things work as expected. Mystery solved. Case closed.
Redis
31,769,097
65
I am using redis as an in-memory database backend for django cache. In particular, I use django-redis configured as follows: CACHES = { 'default': { 'BACKEND': 'redis_cache.cache.RedisCache', 'KEY_PREFIX': DOMAIN_NAME, 'LOCATION': 'unix:/tmp/redis_6379.sock:1', 'OPTIONS': { 'PICKLE_VERSION': -1, # default 'PARSER_CLASS': 'redis.connection.HiredisParser', 'CLIENT_CLASS': 'redis_cache.client.DefaultClient', }, }, } My django cache seem to work correctly. The weird thing is that I cannot see django cache keys using the redis-cli command line. [edit] Please notice in the following that I tried both with $ redis-cli and $ redis-cli -s /tmp/redis_6379.sock [endedit] with no difference. In particular, using the KEYS * command: $ redis-cli redis 127.0.0.1:6379> keys * (empty list or set) but redis 127.0.0.1:6379> set stefano test OK redis 127.0.0.1:6379> keys * 1) "stefano" while from django shell: In [1]: from django.core.cache import cache In [2]: cache.keys('*') Out[2]: [u'django.contrib.sessions.cachebblhwb3chd6ev2bd85bawuz7g6pgaij8', u'django.contrib.sessions.cachewpxiheosc8qv5w4v6k3ml8cslcahiwna'] If I'm using MONITOR on the cli: redis 127.0.0.1:6379> monitor OK 1373372711.017761 [1 unix:/tmp/redis_6379.sock] "KEYS" "project_prefix:1:*" I can see a request, using the django cache prefix; which should prove the redis-cli is connected to the same service. But even searching for that prefix in the redis-cli returns an (empty list or set) Why is that? What is the mechanisms that compartmentalize the different caches on the same redis instance?
I would say there are two possibilities: 1/ The django app may not connect to the Redis instance you think it is connected to, or the redis-cli client you launch does not connect to the same Redis instance. Please note you do not use the same exact connection mechanism in both cases. Django uses a Unix Domain Socket, while redis-cli uses TCP loopback (by default). You may want to launch redis-cli using the same socket path, to be sure: $ redis-cli -s /tmp/redis_6379.sock Now since you have verified with a MONITOR command that you see the commands sent by Django, we can assume you are connected to the right instance. 2/ There is a database concept in Redis. By default, you have 16 distinct databases, and the current default database is 0. The SELECT command can be used to switch a session to another database. There is one keyspace per database. The INFO KEYSPACE command can be used to check whether some keys are defined in several databases. redis 127.0.0.1:6379[1]> info keyspace # Keyspace db0:keys=1,expires=0 db1:keys=1,expires=0 Here I have two databases, let's check the keys defined in the db0 database: redis 127.0.0.1:6379> keys * 1) "foo" and now in the db1 database: redis 127.0.0.1:6379> select 1 OK redis 127.0.0.1:6379[1]> keys * 1) "bar" My suggestion would be also to check whether the Django application sends any SELECT command at connection time to the Redis instance (with MONITOR). I'm not familiar with Django, but the way you have defined the LOCATION parameter makes me think your data could be in database 1 (due to the suffix).
Redis
17,548,188
65
I've have a Django app that's currently hosted up on Amazon's EC2 service. I have two machines, one with the Django app and the other with my PostgreSQL database. So far it has been rock solid. Many sources claim I should implement Redis into my stack, but what would be the purpose of implementing Redis with Django and Postgresql? How can I implement Redis in my Django code for example? How can I use it with PostgreSQL? These are all the questions I've been trying to find answers to so I came here hoping to get answers from the biggest and the best. I really appreciate any answers. Thank You
Redis is a key-value storage system that operates in RAM memory, it's like a "light database" and since it works at RAM memory level it's orders of magnitude faster compared to reading/writing to PostgreSQL or any other traditional Relational Database. Redis is a so-called NoSQL database, like Mongo and many others. It can't directly replace PostgreSQL, you still want permanent storage, but it works along with Relational Databases as an alternate storage system. You can use Redis if your IO operations start getting expensive and it's great for quick calculations and key-based queries. You can include it in your Django/Python project with a wrapper, for example redis-py. Redis is very simple to install and use, you can check the examples at redis-py. Redis is independent from any Relational Database, that way you can use it for caching, calculating or storing values permanently and/or temporarily. It can help reduce querying to PostgreSQL, in the end you can use it the way you want and take advantage from it to improve your app/architecture. This similar question can help you Redis with Django
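A minimal caching sketch with redis-py (host, key names and timeout are arbitrary examples, and compute_from_postgres is a hypothetical placeholder for your expensive query):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def get_expensive_value(key):
    cached = r.get(key)                     # returns bytes or None
    if cached is not None:
        return cached.decode('utf-8')
    value = compute_from_postgres(key)      # hypothetical expensive DB call
    r.setex(key, 300, value)                # cache for 5 minutes
    return value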
Redis
14,989,390
65
I am using Spring Data Redis with Jedis. I am trying to store a hash with key vc:${list_id}. I was able to successfully insert to redis. However, when I inspect the keys using the redis-cli, I don't see the key vc:501381. Instead I see \xac\xed\x00\x05t\x00\tvc:501381. Why is this happening and how do I change this?
OK, I googled around for a while and found help at http://java.dzone.com/articles/spring-data-redis. It happened because of Java serialization. The key serializer for redisTemplate needs to be configured to StringRedisSerializer, i.e. like this: <bean id="jedisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" p:host-name="${redis.server}" p:port="${redis.port}" p:use-pool="true"/> <bean id="stringRedisSerializer" class="org.springframework.data.redis.serializer.StringRedisSerializer"/> <bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate" p:connection-factory-ref="jedisConnectionFactory" p:keySerializer-ref="stringRedisSerializer" p:hashKeySerializer-ref="stringRedisSerializer" /> Now the key in redis is vc:501381. Or, like @niconic says, we can also set the default serializer itself to the string serializer as follows: <bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate" p:connection-factory-ref="jedisConnectionFactory" p:defaultSerializer-ref="stringRedisSerializer" /> which means all our keys and values are strings. Note, however, that this may not be preferable, since your values may not always be plain strings. If your value is a domain object, then you can use the Jackson serializer and configure a serializer as mentioned here, i.e. like this: <bean id="userJsonRedisSerializer" class="org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer"> <constructor-arg type="java.lang.Class" value="com.mycompany.redis.domain.User"/> </bean> and configure your template as: <bean id="redisTemplate" class="org.springframework.data.redis.core.RedisTemplate" p:connection-factory-ref="jedisConnectionFactory" p:keySerializer-ref="stringRedisSerializer" p:hashKeySerializer-ref="stringRedisSerializer" p:valueSerializer-ref="userJsonRedisSerializer" />
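If you use Java-based configuration instead of XML, an equivalent sketch (bean method name and generics are illustrative, not required) would be roughly:

@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
    RedisTemplate<String, Object> template = new RedisTemplate<>();
    template.setConnectionFactory(connectionFactory);
    template.setKeySerializer(new StringRedisSerializer());      // plain-text keys
    template.setHashKeySerializer(new StringRedisSerializer());  // plain-text hash keys
    return template;
}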
Redis
13,215,024
64
I am using Sidekiq in my Rails application. By default, Sidekiq can be accessed by anybody by appending "/sidekiq" to the URL. I want to password-protect / authenticate only the Sidekiq part. How can I do that?
Put the following into your sidekiq initializer require 'sidekiq' require 'sidekiq/web' Sidekiq::Web.use(Rack::Auth::Basic) do |user, password| # Protect against timing attacks: # - See https://codahale.com/a-lesson-in-timing-attacks/ # - See https://thisdata.com/blog/timing-attacks-against-string-comparison/ # - Use & (do not use &&) so that it doesn't short circuit. # - Use digests to stop length information leaking Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(user), ::Digest::SHA256.hexdigest(ENV["SIDEKIQ_USER"])) & Rack::Utils.secure_compare(::Digest::SHA256.hexdigest(password), ::Digest::SHA256.hexdigest(ENV["SIDEKIQ_PASSWORD"])) end And in the routes file: mount Sidekiq::Web => '/sidekiq'
Redis
12,265,421
64
Here are my needs: Enqueue_in(10.hours, ... ) (DJ syntax is perfect.) Multiple workers, concurrently. (Resque or beanstalkd are good for this, but not DJ) Must handle push and pop of 100 jobs a second. (I will need to run a test to make sure, but I think DJ can't handle this many jobs) Resque and beanstalkd don't do the enqueue_in. There is a plugin (resque_scheduler) that does it, but I'm not sure how stable it is. Our environment is on Amazon, and they rolled out beanstalkd for free for anyone who has Amazon instances, which is a plus for us, but I'm still not sure what the best option is here. We run Rails 2.3 but are bringing it up to speed to Rails 3.0.3 soon. But what is my best choice here? Am I missing another gem that does this job better? I feel my only option that actually works now is resque_scheduler. Edit: Sidekiq (https://github.com/mperham/sidekiq) is another option that you should check out.
For my projects I feel very comfortable with collectiveidea/delayed_job in Rails 2 and 3. I don't know beanstalkd, but I will try it soon :-). I have followed the suggestions in the Resque documentation and reproduce them below.

Resque vs DelayedJob

How does Resque compare to DelayedJob, and why would you choose one over the other?

- Resque supports multiple queues
- DelayedJob supports finer-grained priorities
- Resque workers are resilient to memory leaks / bloat
- DelayedJob workers are extremely simple and easy to modify
- Resque requires Redis
- DelayedJob requires ActiveRecord
- Resque can only place JSONable Ruby objects on a queue as arguments
- DelayedJob can place any Ruby object on its queue as arguments
- Resque includes a Sinatra app for monitoring what's going on
- DelayedJob can be queried from within your Rails app if you want to add an interface

If you're doing Rails development, you already have a database and ActiveRecord. DelayedJob is super easy to set up and works great. GitHub used it for many months to process almost 200 million jobs.

Choose Resque if:
- You need multiple queues
- You don't care about / dislike numeric priorities
- You don't need to persist every Ruby object ever
- You have potentially huge queues
- You want to see what's going on
- You expect a lot of failure / chaos
- You can set up Redis
- You're not running short on RAM

Choose DelayedJob if:
- You like numeric priorities
- You're not doing a gigantic amount of jobs each day
- Your queue stays small and nimble
- There is not a lot of failure / chaos
- You want to easily throw anything on the queue
- You don't want to set up Redis

Choose Beanstalkd if:
- You like numeric priorities
- You want an extremely fast queue
- You don't want to waste your RAM
- You want to serve a high number of jobs
- You're fine with JSONable Ruby objects on a queue as arguments
- You need multiple queues

In no way is Resque a "better" DelayedJob, so make sure you pick the tool that's best for your app.

A nice comparison of queueing backend speed:

              enqueue          work
-------------------------------------------
delayed job | 200 jobs/sec     120 jobs/sec
resque      | 3800 jobs/sec    300 jobs/sec
rabbitmq    | 2500 jobs/sec    1300 jobs/sec
beanstalk   | 9000 jobs/sec    5200 jobs/sec

Have a nice day!

P.S. There is a RailsCast about Resque, Delayed Job (revised version) and Beanstalkd. Have a look!

P.P.S. My favourite choice is now Sidekiq (very simple, fast and efficient for simple jobs), have a look at this page for comparison.
Redis
4,808,351
64
I am aware of redis-cli, and the info and config commands. However, they do not have anything that states the size of the current database. How could I figure this out?
Using the INFO command. full details here: http://redis.io/commands/info sample output: redis-cli redis 127.0.0.1:6379> info redis_version:2.4.11 redis_git_sha1:00000000 redis_git_dirty:0 arch_bits:64 multiplexing_api:kqueue gcc_version:4.2.1 process_id:300 uptime_in_seconds:1389779 uptime_in_days:16 lru_clock:1854465 used_cpu_sys:59.86 used_cpu_user:73.02 used_cpu_sys_children:0.15 used_cpu_user_children:0.11 connected_clients:1 connected_slaves:0 client_longest_output_list:0 client_biggest_input_buf:0 blocked_clients:0 used_memory:1329424 used_memory_human:1.27M used_memory_rss:2285568 used_memory_peak:1595680 used_memory_peak_human:1.52M mem_fragmentation_ratio:1.72 mem_allocator:libc loading:0 aof_enabled:0 changes_since_last_save:0 bgsave_in_progress:0 last_save_time:1360719404 bgrewriteaof_in_progress:0 total_connections_received:221 total_commands_processed:29926 expired_keys:2 evicted_keys:0 keyspace_hits:1678 keyspace_misses:3 pubsub_channels:0 pubsub_patterns:0 latest_fork_usec:379 vm_enabled:0 role:master db0:keys=23,expires=0
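If you only need the key counts or the memory footprint rather than the full report, a couple of shortcuts may help (DBSIZE counts the keys in the currently selected database only):

redis-cli DBSIZE
redis-cli INFO keyspace
redis-cli INFO memory | grep used_memory_human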
Redis
14,844,672
63
I've been looking at using Redis Pub/Sub as a replacement to RabbitMQ. From my understanding Redis's pub/sub holds a persistent connection to each of the subscribers, and if the connection is terminated, all future messages will be lost and dropped on the floor. One possible solution is to use a list (and blocking wait) to store all the message and pub/sub as just a notification mechanism. I think this gets me most of the way there, but I still have some concerns about the failure cases. what happens when a subscriber dies, and comes back online, how should it process all it's pending messages? when a malformed message comes though the system, how do you handle those exceptions? DeadLetter Queue? is there a standard practice to implementing a retry policy?
When a subscriber (consumer) dies, your list will continue to grow until the client returns. Your producer could trim the list (from either side) once it reaches a specific limit, but that is something you would need to handle at the application level. If you include a timestamp within each message, your consumer can then act on the age of a message, assuming you have application logic you want to enforce on message age. I'm not sure how a malformed message would enter the system, as the connection to Redis is usually TCP with its integrity assurances. But if this happens, perhaps due to a bug in message encoding at the producer layer, you could provide a general mechanism for handling errors by keeping a queue per producer that receives consumers' exception messages. Retry policies will depend greatly on your application needs. If you need 100% assurance that a message has been received and processed, then you should consider using Redis transactions (MULTI/EXEC) to wrap the work done by a consumer, so you can ensure that a client doesn't remove a message unless it has completed its work. If you need explicit acknowledgement, then you could use an explicit ACK message on a queue dedicated to the producer process(es). Without knowing more about your application needs, it's hard to know how to choose wisely. Generally, if your messages require full ACID protection, then you probably also need to use Redis transactions. If your messages are only meaningful when they are timely, then transactions may not be needed. It sounds as though you can't tolerate dropped messages, so your approach of using a list is good. If you need to implement a priority queue for your messages, you can use the sorted set (the Z-commands) to store your messages, using their priority as the score value, along with a polling consumer.
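For the acknowledgement part, a common sketch is the "reliable queue" pattern built on RPOPLPUSH: the consumer atomically moves a message into a per-consumer processing list and only deletes it after the work is done, so a crashed consumer leaves its in-flight messages recoverable. Queue names below are placeholders:

RPOPLPUSH myqueue myqueue:processing
(or BRPOPLPUSH myqueue myqueue:processing 0 to block until a message arrives)
... process the returned message ...
LREM myqueue:processing 1 <message>
(acknowledge by removing the message from the processing list; a watchdog can requeue anything left there too long)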
Redis
6,192,177
63
I have a 20GB+ rdb dump in production. I suspect there's a specific set of keys bloating it. I'd like to have a way to always spot the first 100 biggest objects from static dump analysis or ask it to the server itself, which by the way has ove 7M objects. Dump analysis tools like rdbtools are not helpful in this (I think) really common use case! I was thinking to write a script and iterate the whole keyset with "redis-cli debug object", but I have the feeling there must be some tool I'm missing.
An option was added to redis-cli: redis-cli --bigkeys Sample output based on https://gist.github.com/michael-grunder/9257326 $ ./redis-cli --bigkeys # Press ctrl+c when you have had enough of it... :) # You can use -i 0.1 to sleep 0.1 sec every 100 sampled keys # in order to reduce server load (usually not needed). Biggest string so far: day:uv:483:1201737600, size: 2 Biggest string so far: day:pv:2013:1315267200, size: 3 Biggest string so far: day:pv:3:1290297600, size: 5 Biggest zset so far: day:topref:2734:1289433600, size: 3 Biggest zset so far: day:topkw:2236:1318723200, size: 7 Biggest zset so far: day:topref:651:1320364800, size: 20 Biggest string so far: uid:3467:auth, size: 32 Biggest set so far: uid:3029:allowed, size: 1 Biggest list so far: last:175, size: 51 -------- summary ------- Sampled 329 keys in the keyspace! Total key length in bytes is 15172 (avg len 46.12) Biggest list found 'day:uv:483:1201737600' has 5235597 items Biggest set found 'day:uvx:555:1201737600' has 47 members Biggest hash found 'day:uvy:131:1201737600' has 2888 fields Biggest zset found 'day:uvz:777:1201737600' has 1000 members 0 strings with 0 bytes (00.00% of keys, avg size 0.00) 19 lists with 5236744 items (05.78% of keys, avg size 275618.11) 50 sets with 112 members (15.20% of keys, avg size 2.24) 250 hashs with 6915 fields (75.99% of keys, avg size 27.66) 10 zsets with 1294 members (03.04% of keys, avg size 129.40)
Redis
13,673,058
62
In my setup, the info command shows me the following: [keys] => 1128 [expires] => 1125 I'd like to find those 3 keys without an expiration date. I've already checked the docs to no avail. Any ideas?
Modified from a site that I can't find now:

redis-cli keys "*" | while read LINE ; do TTL=`redis-cli ttl "$LINE"`; if [ $TTL -eq -1 ]; then echo "$LINE"; fi; done;

Edit: Note, this is a blocking call (KEYS scans the whole keyspace).
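A non-blocking variant of the same idea (an illustrative sketch, not part of the original answer) uses SCAN instead of KEYS, here via redis-py; TTL returns -1 for keys that exist but have no expiry:

import redis

r = redis.Redis(decode_responses=True)

# SCAN-based iteration avoids blocking the server the way KEYS "*" can.
for key in r.scan_iter(count=500):
    if r.ttl(key) == -1:   # -1 means the key exists but has no expiration
        print(key)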
Redis
9,817,951
62
A number of sources, including the official Redis documentation, note that using the KEYS command is a bad idea in production environments due to possible blocking. If the approximate size of the dataset is known, does SCAN have any advantage over KEYS? For example, consider a database with at most 100 keys of the form data:number:X where X is an integer. If I want to retrieve all of these, I might use the command KEYS data:number:*. Is this going to be significantly slower than using SCAN 0 MATCH data:number:* COUNT 100? Or are the two commands essentially equivalent in this circumstance? Would it be accurate to say that SCAN is preferable to KEYS because it protects against the scenario where an unexpectedly large set would be returned?
You shouldn't care about the current command's execution but about its impact on all other commands, since Redis processes commands using a single thread (i.e. while a command is being executed, all others need to wait until the executing one ends). While keys or scan might provide you similar or identical performance executed alone in your case, some milliseconds blocking Redis will significantly decrease overall I/O. This is the main reason to use keys for development purposes and scan on production environments.

OP said: "While keys or scan might provide you similar or identical performance executed alone in your case, some milliseconds blocking Redis will significantly decrease overall I/O." - This sentence seems to indicate that one command blocks Redis, and the other doesn't, which can't be the case. If I am guaranteed 100 results from my call to KEYS, in what way is it worse than SCAN? Why do you feel that one command is more prone to blocking?

There should be a good difference when you can paginate the search. It's not the same being forced to get 100 keys in a single pass as being able to implement pagination and get 100 keys, 10 by 10 (or 50 by 50). This very small interruption can let other commands sent by the application layer be processed by Redis. See what the official Redis documentation says about this:

Since these commands allow for incremental iteration, returning only a small number of elements per call, they can be used in production without the downside of commands like KEYS or SMEMBERS that may block the server for a long time (even several seconds) when called against big collections of keys or elements.
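For the concrete data:number:* case in the question, an illustrative redis-py sketch (the key pattern and COUNT hint come from the question; everything else is assumed):

import redis

r = redis.Redis(decode_responses=True)

# SCAN with MATCH iterates in small batches instead of one blocking pass.
keys = list(r.scan_iter(match="data:number:*", count=100))
values = r.mget(keys) if keys else []
print(dict(zip(keys, values)))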
Redis
32,603,964
61
Is it possible to create namespaces in Redis? From what I found, all the global commands (count, delete all) work on all the objects. Is there a way to create sub-spaces such that these commands will be limited in context? I don't want to set up different Redis servers for this purpose. I assume the answer is "No", and wonder why wasn't this implemented, as it seems to be a useful feature without too much overhead.
A Redis server can handle multiple databases... which are numbered. It provides 16 of them by default; you can access them using the -n option to the redis-cli shell scripting command, by similar options in the connection arguments, or by using the "select()" method on its connection objects. (In this case .select() is the method name for the Python Redis module ... I presume it's named similarly for other libraries and interfaces.)

There's an option to control how many separate databases you want in the configuration file for the Redis server daemon as well. I don't know what the upper limit would be, and there doesn't seem to be a way to dynamically change that (in other words, it seems that you'd have to shut down and restart the server to add additional DBs). Also, there doesn't seem to be a way to associate these DB numbers with any sort of name, nor to impose separate ACLs, nor even different passwords, on them. Redis, of course, is schema-less as well.
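To make that concrete, here is an illustrative redis-py sketch of the two usual workarounds: numbered databases and key prefixes. The prefix app1: is an assumption, not anything Redis enforces:

import redis

# Option 1: numbered databases (SELECT), chosen at connection time.
analytics = redis.Redis(db=1, decode_responses=True)
sessions = redis.Redis(db=2, decode_responses=True)
analytics.set("counter", 1)          # invisible to the db=2 connection

# Option 2: key prefixes acting as an informal namespace.
sessions.set("app1:session:42", "alice")
app1_keys = list(sessions.scan_iter(match="app1:*"))
print(app1_keys)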
Redis
8,614,858
61
I want to use Redis in Laravel 5.2; however, I'm getting an error such as Class 'Predis\Client' not found. How can I solve it?
First download Redis to your system (if you haven't already installed it). Go to the folder where you downloaded Redis and build it:

cd your-redis-folder-name
make

Go to your project directory and install the predis package via Composer:

composer require predis/predis

Go to your .env file and set the queue driver:

QUEUE_DRIVER=redis

Use Mail::queue() to send mail via the queue (see the docs), and in your terminal run php artisan queue:listen to process it.
Redis
34,865,064
60
I got the error NOAUTH Authentication required when I connected to the Redis server via redis-cli and ran ping to check if Redis is working. I found an answer for the NOAUTH Authentication required error which describes that this error only happens when Redis has a password set, but I checked the Redis config file at /etc/redis/redis.conf and there is no password setting. Does anyone know if there are other settings which can cause this error? Thanks for any help. P.S.: I am using the Ruby on Rails web framework; the Redis database is used for Sidekiq. Edited: Redis version is 2.8.4. The server is hosted on AWS. For now, I decided to set a password for the Redis server myself so that one cannot be set on it while it is running. (When the Redis server is restarted, it works normally again. You can run sudo service redis-server restart to restart the Redis server.)
We also faced a similar issue. It looks like someone scanned AWS, connected to all public Redis servers, and possibly ran something like "CONFIG SET requirepass <random password>", thus locking down the running instance of Redis. Once you restart Redis, the config is restored to normal. The best thing would be to use an AWS security group policy and block port 6379 for the public.
Redis
34,115,213
60
Is there any way to remove/delete an entry by key, using Node_redis? I can't see any such option in the docs.
You can use del like this:

redis.del('SampleKey');
Redis
15,219,577
60
Simple, probably dumb question: suppose I have a Java server that stores in memory commonly used keys and values which I can query (let's say in a HashMap). What's the difference between that and using Memcache (or even Redis)? They both store things in memory. Is there a benefit to one or the other? Does Memcache leave less of a memory footprint? Can it store more in less memory? Is it faster to query? No difference?
Advantages of Java memory over memcache: Java memory is faster (no network). Java memory won't require serialization, you have Java objects available to you. Advantages of memcache over Java memory: It can be accessed by more than one application server, so your cache will be shared among all your app servers. It can be accessed by a variety of different servers, so long as they all agree on the key scheme and the serialization. It will discard expired cache values, so you get time-based invalidation.
Redis
5,465,737
60
I have some information stored in my RedisToGo instance in Heroku and I want to wipe it so the Redis store is clean. Any idea how to do this?
You can do this with redis-cli. RedisToGo gives you a url in the form: redis://redistogo:[email protected]:9402 So this command will empty your db: redis-cli -h catfish.redistogo.com -p 9402 -a d20739cffb0c0a6fff719acc2728c236 flushall
Redis
9,137,500
59
I'm planning to start using hashes instead of regular keys. But I can't find any information about multi-get for hash keys in the Redis wiki. Is this kind of command supported by Redis? Thank you.
You can query hashes or any keys in pipeline, i.e. in one request to your redis instance. Actual implementation depends on your client, but with redis-py it'd look like this:

pipe = conn.pipeline()
pipe.hgetall('foo')
pipe.hgetall('bar')
pipe.hgetall('zar')
hash1, hash2, hash3 = pipe.execute()

Client will issue one request with 3 commands. This is the same technique that is used to add multiple values to a set at once. Read more at http://redis.io/topics/pipelining
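If you only need a handful of fields from a single hash (rather than several whole hashes), HMGET fetches multiple fields in one command. A small illustrative snippet, with hash and field names assumed (the mapping form of hset needs redis-py 3.5+):

import redis

r = redis.Redis(decode_responses=True)
r.hset("user:1", mapping={"name": "alice", "email": "a@example.com", "age": "30"})

# One round trip, returning only the fields you ask for.
name, email = r.hmget("user:1", "name", "email")
print(name, email)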
Redis
3,329,408
59
Aerospike is a key-value, in-memory, operational NoSQL database with ACID properties which supports complex objects and is easy to scale. But I have already used something which does absolutely the same. Redis is also a key-value, in-memory (but persistent to disk) NoSQL database. It also supports different complex objects. But in comparison to Aerospike, Redis has been in use for a long time, already has an active community and a lot of projects developed with it. So what is the difference between Aerospike and other NoSQL key-value databases like Redis? Is there a particular place where Aerospike is better suited? P.S. I am looking for an answer from people who have used at least one of these DBs (preferably both) in the real world and have real-life experience (not copy-pastes from the official website).
If it has to be answered in one word, it's "performance". Aerospike's performance is much better than any clustered NoSQL solution out there. Higher performance per node means a smaller cluster, which means lower TCO (Total Cost of Ownership) and maintenance. Aerospike does auto-clustering, auto-sharding and auto-rebalancing (when the cluster state changes), most of which needs manual steps in other databases. I said "clustered" because I don't want to mix Redis into that group (though Redis clustering is in beta). Pure in-memory performance of Aerospike and Redis will be comparable. But Redis expects a lot of things to be handled at the application layer, like sharding, request redirection etc. Even though Redis has a way to persist (snapshot or AOF), it has its own problems as it was designed more like an add-on. Aerospike was developed natively with persistence in mind. The clustering of Redis also involves setting up master/slave etc. You may want to take a look at this talk comparing and contrasting Redis vs Aerospike.
Redis
24,482,337
58
In Redis there is a SETEX command that allows me to set a key that expires. Is there a multi-set version of this command that also takes a TTL? Both the MSET and MSETNX commands lack such an option.
I was also looking for this kind of operation. I didn't find anything, so I did it with MULTI/EXEC (EXPIRE takes the TTL in seconds; 60 here is just an example value):

MULTI
EXPIRE key1 60
EXPIRE key2 60
EXPIRE key3 60
EXEC
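The same effect from a client, as an illustrative redis-py sketch: SET with the EX option inside a transactional pipeline, so all keys get values and TTLs in one round trip. Key names and the TTL are assumptions:

import redis

r = redis.Redis(decode_responses=True)

def mset_with_ttl(mapping, ttl_seconds):
    pipe = r.pipeline(transaction=True)     # wrapped in MULTI/EXEC
    for key, value in mapping.items():
        pipe.set(key, value, ex=ttl_seconds)
    pipe.execute()

mset_with_ttl({"key1": "a", "key2": "b", "key3": "c"}, ttl_seconds=60)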
Redis
16,423,342
58
I have read about Redis and RocksDB, and I don't get the advantages of Redis over RocksDB. I know that Redis is all in-memory and RocksDB is in-memory plus flash storage. If all data fits in memory, which one should I choose? Do they have the same performance? Does Redis scale linearly with the number of CPUs? I guess there are other differences that I don't get. I have a dataset which fits in memory and I was going to choose Redis, but it seems that RocksDB offers me the same, and if one day the dataset grows too much I wouldn't have to worry about memory.
They have nothing in common. You are trying to compare apples and oranges here. Redis is a remote in-memory data store (similar to memcached). It is a server. A single Redis instance is very efficient, but totally non scalable (regarding CPU). A Redis cluster is scalable (regarding CPU). RocksDB is an embedded key/value store (similar to BerkeleyDB or more exactly LevelDB). It is a library, supporting multi-threading and a persistence based on log-structured merge trees.
Redis
31,831,706
57
Sidekiq has been working in development mode just perfectly. Now that I am trying to use it in production, all the jobs are just sitting in enqueue and aren't ever being run. Could anyone point me in the right direction as to how to solve this issue?
Please check if sidekiq process is actually running: ps aux | grep sidekiq If it is not, try to run sidekiq in foreground first and check the output. bundle exec sidekiq -e production
Redis
17,204,826
57
How does Redis handle multiple threads (from different clients) updating the same data structure in Redis ? What is the recommended best practice for such a use case?
If you read the Little Redis Book, at some point you come across this sentence: "You might not know it, but Redis is actually single-threaded, which is how every command is guaranteed to be atomic. While one command is executing, no other command will run." Have a look at http://openmymind.net/2012/1/23/The-Little-Redis-Book/ for more information. Regards
Redis
17,099,222
57
I'm developing a Python service (class) for accessing a Redis server. I want to know how to check if the Redis server is running or not, and also to handle the case where I somehow can't connect to it. Here is a part of my code:

import redis
rs = redis.Redis("localhost")
print rs

It prints the following:

<redis.client.Redis object at 0x120ba50>

even if my Redis server is not running. I found that my Python code connects to the server only when I do a set() or get() on my redis instance. I don't want other services using my class to get an exception saying redis.exceptions.ConnectionError: Error 111 connecting localhost:6379. Connection refused. I want to return a proper message/error code. How can I do that?
If you want to test the redis connection once at startup, use the ping() command.

from redis import Redis

redis_host = '127.0.0.1'
r = Redis(redis_host, socket_connect_timeout=1)  # short timeout for the test

r.ping()
print('connected to redis "{}"'.format(redis_host))

The command ping() checks the connection and if invalid will raise an exception. Note - the connection may still fail after you perform the test, so this is not going to cover up later timeout exceptions.
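Since the question asks for a proper message rather than an exception, here is a hedged variant of the same check that swallows the connection error; the boolean return convention is an assumption:

import redis
from redis.exceptions import ConnectionError, TimeoutError

def redis_is_up(host="127.0.0.1", port=6379):
    r = redis.Redis(host=host, port=port, socket_connect_timeout=1)
    try:
        return r.ping()                       # True if the server answers
    except (ConnectionError, TimeoutError):
        return False                          # server down or unreachable

print("Redis available:", redis_is_up())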
Redis
12,857,604
57
I'm wondering if there's a way to check if a key already exists in a redis list? I can't use a set because I don't want to enforce uniqueness, but I do want to be able to check if the string is actually there. Thanks.
Your options are as follows:

1. Using LREM and replacing the item if it was found.
2. Maintaining a separate SET in conjunction with your LIST.
3. Looping through the LIST until you find the item or reach the end.

Redis lists are implemented as a linked list (http://en.wikipedia.org/wiki/Linked_list), hence the limitations. I think your best option is maintaining a duplicate SET. This is what I tend to do. Just think of it as an extra index. Regardless, make sure your actions are atomic with MULTI/EXEC or Lua scripts. A sketch of the companion-set approach follows.
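An illustrative redis-py sketch of option 2: keep a set alongside the list and update both atomically in a pipeline. The key names are assumptions:

import redis

r = redis.Redis(decode_responses=True)

def push_item(item):
    pipe = r.pipeline(transaction=True)   # both writes commit together
    pipe.rpush("jobs", item)              # the actual list
    pipe.sadd("jobs:members", item)       # companion index set
    pipe.execute()

def list_contains(item):
    return bool(r.sismember("jobs:members", item))

push_item("job-42")
print(list_contains("job-42"))   # True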
Redis
9,312,838
57
Does anyone know the difference between Redis replication and Redis sharding? What are they used for? Redis stores data in memory; how does this affect replication/sharding? Is it possible to use both of them together?
Sharding is almost replication's antithesis, though they are orthogonal concepts and work well together. Sharding, also known as partitioning, is splitting the data up by key, while replication, also known as mirroring, is copying all data.

Sharding is useful to increase performance, reducing the hit and memory load on any one resource. Replication is useful for getting high availability of reads. If you read from multiple replicas, you will also reduce the hit rate on all resources, but the memory requirement for all resources remains the same. It should be noted that, while you can write to a slave, replication is master->slave only. So you cannot scale writes this way.

Suppose you have the following tuples: [1:Apple], [2:Banana], [3:Cherry], [4:Durian] and we have two machines A and B. With sharding, we might store keys 2,4 on machine A and keys 1,3 on machine B. With replication, we store keys 1,2,3,4 on machine A and 1,2,3,4 on machine B.

Sharding is typically implemented by performing a consistent hash upon the key. The above example was implemented with the following hash function: h(x){return x%2==0?A:B}.

To combine the concepts, we might replicate each shard. In the above case, all of the data (2,4) of machine A could be replicated on machine C and all of the data (1,3) of machine B could be replicated on machine D.

Any key-value store (of which Redis is only one example) supports sharding, though certain cross-key functions will no longer work. Redis supports replication out of the box.
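To make the example concrete, here is an illustrative sketch of key-based sharding across two connections. It is a naive modulo scheme mirroring the h(x) above, not consistent hashing and not how Redis Cluster routes keys; the host names are placeholders:

import redis

machine_a = redis.Redis(host="machine-a", decode_responses=True)
machine_b = redis.Redis(host="machine-b", decode_responses=True)
shards = [machine_a, machine_b]

def shard_for(key: int):
    # h(x) = x % 2 picks the shard, as in the answer's example.
    return shards[key % 2]

data = {1: "Apple", 2: "Banana", 3: "Cherry", 4: "Durian"}
for key, value in data.items():
    shard_for(key).set(f"fruit:{key}", value)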
Redis
2,139,443
57
Recently, we had an outage due to Redis being unable to write to a file system (not sure why; it's Amazon EFS). Anyway, I noted that there was no actual HEALTHCHECK set up for the Docker service to make sure it is running correctly. Redis stays up in that situation, so I can't simply use nc -z to check if the port is open. Is there a command I can execute in the redis:6-alpine (or non-alpine) image that I can put in the healthcheck block of the docker-compose.yml file? Note I am looking for a command that is available internally in the image, not an external healthcheck.
Although the ping operation from @nitrin0's answer generally works, it does not handle the case where the write operation will actually fail. So instead I perform a change that just increments a value at a key I don't plan to use:

image: redis:6
healthcheck:
  test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]

Note this MUST NOT be used on a cluster that is initialized by Docker, since this health check will prevent the cluster from being formed, as the Redis nodes are not empty.
Redis
67,904,609
56
The following works as expected. But how do I insert the data into the fourth database instead of the default "0" from the command prompt?

# echo -n "testing" | /home/shantanu/redis-2.4.2/src/redis-cli -x set my_pass
OK

# echo -n "testing" | /home/shantanu/redis-2.4.2/src/redis-cli -x select 4; set my_pass
(error) ERR wrong number of arguments for 'select' command
Just use the -n argument to choose the DB number. It has been available since Redis 2.4.2.

echo -n "testing" | redis-cli -n 4 -x set my_pass

or

redis-cli -n 4 set my_pass testing
Redis
8,253,232
56
We're using AWS, and considering using DynamoDB or Redis for our new service. Below are our service's characteristics:

1. Inserts/deletes occur at between hundreds and thousands per minute, and will grow later.
2. We don't need quick search, only to look up a value by key.
3. Data should not be lost.
4. There is other data that doesn't see a lot of inserts/deletes, unlike 1.

I'm worried about the Redis server going down. If Redis fails, our data will be removed. That's why I'm considering Amazon DynamoDB: because DynamoDB is NoSQL, inserts/deletes are fast (slower than Redis, but we don't need that much speed), and it stores data permanently. But I'm not sure that my thinking is right. If I'm thinking wrong or missing another important point, I would appreciate it if you could teach me. Thanks.
There are two types of Redis deployment in the AWS ElastiCache service:

Standalone
Multi-AZ cluster

With a standalone installation it is possible to turn on persistence for a Redis instance, so the service can recover data after a reboot. But in some cases, like underlying hardware degradation, AWS can migrate Redis to another instance and lose the persistence log.

In a Multi-AZ cluster installation it is not possible to enable persistence; only replication occurs. In case of failure it takes time to promote a replica to the master state. Another way is to use the master and slave endpoints in the application directly, which is complicated. In case of a failure which causes both Redis nodes to restart at the same time, it is possible to lose all data of the cluster too.

So, in general, Redis doesn't provide high durability of the data, while giving you very good performance.

DynamoDB is highly available and durable storage for your data. Internally it replicates data into several availability zones, so it is highly available by default. It is also a fully managed AWS service, so you don't need to care about clusters, nodes, monitoring ... etc, which is considered the right cloud way.

DynamoDB charges by R/W operation (on-demand or reserved capacity model) and the amount of stored data. It may be really cheap for testing of the service, but much more expensive under heavy load. You should carefully analyze your workload and calculate total service costs.

As for performance: DynamoDB is an SSD database compared to Redis' in-memory store, but it is possible to use DAX - an in-memory cache read replica for DynamoDB - as an accelerator under heavy load. So you won't be strictly limited by the DynamoDB performance.

Here is the link to the DynamoDB pricing calculator, which is one of the most complicated parts of service usage: https://aws.amazon.com/dynamodb/pricing/
Redis
56,870,326
55
I have run into trouble. My code is below, but I do not know why there is a char 'b' before the output string "Hello Python".

>>> import redis
>>> redisClient = redis.StrictRedis(host='192.168.3.88', port=6379)
>>> redisClient.set('test_redis', 'Hello Python')
True
>>> value = redisClient.get('test_redis')
>>> print(value)
b'Hello Python'   # why is 'b' output?
It means it's a byte string. You can use:

redis.StrictRedis(host="localhost", port=6379, charset="utf-8", decode_responses=True)

Passing decode_responses=True makes the client return unicode strings instead of bytes.
Redis
25,745,053
55
I'm getting "OOM command not allowed" when trying to set a key. maxmemory is set to 500M with maxmemory-policy "volatile-lru", and I'm setting a TTL for each key sent to Redis. The INFO command returns: used_memory_human:809.22M

If maxmemory is set to 500M, how did I reach 809M? The INFO command does not show any keyspaces; how is that possible? KEYS * returns "(empty list or set)". I've tried changing the db number, still no keys found. Here is the INFO command output:

redis-cli -p 6380
redis 127.0.0.1:6380> info
# Server
redis_version:2.6.4
redis_git_sha1:00000000
redis_git_dirty:0
redis_mode:standalone
os:Linux 2.6.32-358.14.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:28291
run_id:229a2ee688bdbf677eaed24620102e7060725350
tcp_port:6380
uptime_in_seconds:1492488
uptime_in_days:17
lru_clock:1429357

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:848529904
used_memory_human:809.22M
used_memory_rss:863551488
used_memory_peak:848529192
used_memory_peak_human:809.22M
used_memory_lua:31744
mem_fragmentation_ratio:1.02
mem_allocator:jemalloc-3.0.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1375949883
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok

# Stats
total_connections_received:3
total_commands_processed:8
instantaneous_ops_per_sec:0
rejected_connections:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0

# Replication
role:master
connected_slaves:0

# CPU
used_cpu_sys:18577.25
used_cpu_user:1376055.38
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Keyspace
redis 127.0.0.1:6380>
Redis' maxmemory volatile-lru policy can fail to free enough memory if the maxmemory limit is already used by the non-volatile keys.
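As an illustrative remedy (an assumption about this setup, not part of the original answer), you can inspect and switch the eviction policy at runtime, e.g. to allkeys-lru, with redis-py:

import redis

r = redis.Redis(port=6380)

print(r.config_get("maxmemory"))            # current memory cap
print(r.config_get("maxmemory-policy"))     # current eviction policy

# allkeys-lru may evict any key, not only keys that have a TTL set.
r.config_set("maxmemory-policy", "allkeys-lru")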
Redis
18,430,324
55
All: here is my server memory info with 'free -m':

             total       used       free     shared    buffers     cached
Mem:         64433      49259      15174          0          3         31
-/+ buffers/cache:      49224      15209
Swap:         8197        184       8012

My redis-server has used 46G of memory, and there is almost 15G of memory left free. To my knowledge, fork is copy-on-write; it should not fail when there is 15G of free memory, which is enough to malloc the necessary kernel structures. Besides, when redis-server used 42G of memory, bgsave was OK and fork was OK too. Is there any VM parameter I can tune to make fork return success?
More specifically, from the Redis FAQ:

Redis background saving schema relies on the copy-on-write semantic of fork in modern operating systems: Redis forks (creates a child process) that is an exact copy of the parent. The child process dumps the DB on disk and finally exits. In theory the child should use as much memory as the parent being a copy, but actually thanks to the copy-on-write semantic implemented by most modern operating systems the parent and child process will share the common memory pages. A page will be duplicated only when it changes in the child or in the parent. Since in theory all the pages may change while the child process is saving, Linux can't tell in advance how much memory the child will take, so if the overcommit_memory setting is set to zero fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail. Setting overcommit_memory to 1 says Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.

Redis doesn't need as much memory as the OS thinks it does to write to disk, so the OS may pre-emptively fail the fork.
Redis
11,752,544
55
What are the pros and cons of each? Please advise when to use one and not the other.
Data storage

Pub/Sub is a publisher/subscriber platform; it's not data storage. Published messages evaporate, regardless of whether there was any subscriber. In Redis Streams, the stream is a data type, a data structure in its own right. Messages or entries are stored in memory and stay there until commanded to be deleted.

Sync/Async communication (Push/Pull)

Pub/Sub is synchronous communication (push protocol). All parties need to be active at the same time to be able to communicate. Here Redis is a pure synchronous messaging broker. Redis Streams allow for both synchronous (XREAD with BLOCK and the special $ ID is a push protocol) and asynchronous communication (regular XREAD is a pull protocol). XREAD with BLOCK is like Pub/Sub, but with the ability to resume on disconnection without losing messages.

Delivery Semantics

Pub/Sub is at-most-once, i.e. "fire and forget". Redis Streams allows for both at-most-once and at-least-once (explicit acknowledgement sent by the receiver).

Blocking mode for consumers

Pub/Sub is blocking-mode only. Once subscribed to a channel, the client is put into subscriber mode and it cannot issue commands (except for [P]SUBSCRIBE, [P]UNSUBSCRIBE, PING and QUIT); it has become read-only. Redis Streams allows consumers to read messages in blocking mode or not.

Fan-out

Pub/Sub is fan-out only. All active clients get all messages. Redis Streams allows fan-out (with XREAD), but also to provide a different subset of messages from the same stream to many clients. This allows scaling message processing by routing different messages to different workers, in a way that prevents the same message from being delivered to multiple consumers. This last scenario is achieved with consumer groups.

Redis Streams provide many more features, like timestamps, field-value pairs, ranges, etc. It doesn't mean you should always go for Streams. If your use case can be achieved with Pub/Sub, it is better for you to use Pub/Sub. With Streams, you have to care about memory usage.
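To make the contrast concrete, here is an illustrative redis-py sketch of both APIs; the channel/stream names and payloads are assumptions:

import redis

r = redis.Redis(decode_responses=True)

# Pub/Sub: a subscriber only sees messages published while it listens.
p = r.pubsub()
p.subscribe("alerts")
r.publish("alerts", "disk full")
print(p.get_message(timeout=1))   # first message is usually the subscribe confirmation

# Streams: entries are stored and can be re-read later by ID.
entry_id = r.xadd("alerts-stream", {"text": "disk full"})
print(r.xread({"alerts-stream": 0}, count=10))   # read the stream from the beginning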
Redis
59,540,563
54
I'm trying to run sidekiq worker with Rails. When I try to docker-compose up worker I get the following error: worker_1 | Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED) worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:332:in `rescue in establish_connection' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:318:in `establish_connection' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:94:in `block in connect' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:280:in `with_reconnect' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:93:in `connect' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:351:in `ensure_connected' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:208:in `block in process' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:293:in `logging' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:207:in `process' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis/client.rb:113:in `call' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis.rb:211:in `block in info' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis.rb:57:in `block in synchronize' worker_1 | /usr/lib/ruby/2.2.0/monitor.rb:211:in `mon_synchronize' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/redis-3.2.2/lib/redis.rb:210:in `info' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/sidekiq-4.0.1/lib/sidekiq/cli.rb:71:in `block in run' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/sidekiq-4.0.1/lib/sidekiq.rb:84:in `block in redis' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/connection_pool-2.2.0/lib/connection_pool.rb:64:in `block (2 levels) in with' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/connection_pool-2.2.0/lib/connection_pool.rb:63:in `handle_interrupt' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/connection_pool-2.2.0/lib/connection_pool.rb:63:in `block in with' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/connection_pool-2.2.0/lib/connection_pool.rb:60:in `handle_interrupt' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/connection_pool-2.2.0/lib/connection_pool.rb:60:in `with' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/sidekiq-4.0.1/lib/sidekiq.rb:81:in `redis' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/sidekiq-4.0.1/lib/sidekiq/cli.rb:68:in `run' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/gems/sidekiq-4.0.1/bin/sidekiq:13:in `<top (required)>' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/bin/sidekiq:23:in `load' worker_1 | /home/app/Nyvur/vendor/bundle/ruby/2.2.0/bin/sidekiq:23:in `<main>' nyvur_worker_1 exited with code 1 Here's my docker-compose file: web: &app_base build: . ports: - "80:80" volumes: - .:/Nyvur command: /usr/bin/start_server.sh links: - postgres - mongo - redis environment: &app_environment SIDEKIQ_CONCURRENCY: 50 SIDEKIQ_TIMEOUT: 10 ENABLE_DEBUG_SERVER: true RACK_ENV: production RAILS_ENV: production worker: build: . 
volumes: - .:/Nyvur ports: [] links: - postgres - mongo - redis command: bundle exec sidekiq -c 50 postgres: image: postgres:9.1 ports: - "5432:5432" environment: LC_ALL: C.UTF-8 POSTGRES_DB: Nyvur_production POSTGRES_USER: postgres POSTGRES_PASSWORD: 3x1mpl3 mongo: image: mongo:3.0.7 ports: - "27017:27017" redis: image: redis ports: - "6379:6379" My Dockerfile: FROM phusion/passenger-customizable MAINTAINER VodkaMD <[email protected]> ENV RACK_ENV="production" RAILS_ENV="production" SECRET_KEY_BASE="e09afa8b753cb175bcef7eb5f737accd02a4c16d9b6e5d475943605abd4277cdf47c488812d21d9c7117efd489d876f34be52f7ef7e88b21759a079339b198ce" ENV HOME /root CMD ["/sbin/my_init"] RUN /pd_build/utilities.sh RUN /pd_build/ruby2.2.sh RUN /pd_build/python.sh RUN /pd_build/nodejs.sh RUN /pd_build/redis.sh RUN /pd_build/memcached.sh RUN apt-get update && apt-get install -y vim nano dialog net-tools build-essential wget libpq-dev git RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* # RUN mkdir /etc/nginx/ssl # RUN openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt RUN rm -f /etc/service/nginx/down RUN rm -f /etc/service/redis/down RUN rm -f /etc/service/sshd/down RUN rm -f /etc/service/memcached/down WORKDIR /tmp ADD Gemfile /tmp/ ADD Gemfile.lock /tmp/ RUN mkdir /home/app/Nyvur ADD . /home/app/Nyvur RUN chown -R app:app /home/app/Nyvur WORKDIR /home/app/Nyvur RUN bundle install --deployment RUN bundle exec rake assets:precompile RUN rm /etc/nginx/sites-enabled/default COPY config/nginx/nginx_nyvur.conf /etc/nginx/sites-enabled/nginx_nyvur.conf ADD config/nginx/postgres-env.conf /etc/nginx/main.d/postgres-env.conf ADD config/nginx/rails-env.conf /etc/nginx/main.d/rails-env.conf ADD config/nginx/start_server.sh /usr/bin/start_server.sh RUN chmod +x /usr/bin/start_server.sh RUN mkdir -p /home/app/Nyvur/tmp/pids RUN mkdir -p /home/app/Nyvur/tmp/sockets RUN mkdir -p /home/app/Nyvur/log RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* EXPOSE 80 443 9292 I've tried different configurations, I've checked other builds, but the problem still persists, So far, Sidekiq runs well outside of Docker.
Check if your redis server is running, start redis by using the following command in the terminal: redis-server
Redis
34,729,752
54
What's the easiest way to get the number (count) of items in a Redis set? Preferably without the need to dump the whole set and count the lines... So far, I have found only BITCOUNT, which I have not found that useful...
The SCARD command returns the cardinality (i.e. number of items) of a Redis set. http://redis.io/commands/scard There is a similar command (ZCARD) for sorted sets.
Redis
18,056,518
53
I search through redis command list. I couldn't find the command to get all the available channels in redis pub/sub. In meteor server, the equivalent command is LISTCHANNELS, where it lists all known channels, the number of messages stored on each one and the number of current subscribers. I have a cron that needs to periodically know about the available channels. Does redis have native command for this? Or I need to find a way to implement it myself?
PUBSUB CHANNELS does this as of version 2.8.0.
Redis
8,165,188
53
Now I have to use a java client for redis. I have come across Jedis and Redisson. EDIT: Reframing as the question was kind of opinion based. Which is more efficient in terms of speed? Any benchmarks? Which of them is able to provide the following? Distributed locks(and update some keys in a map) Auto key expiry notification but I want this to be received by only one particular subscriber from among a group of subscribers(Similar to consumer group concept in Apache Kafka). How this can be achieved? PS: Please don't mark it as duplicate of this.
That question is opinion-based but lets get some objective points into it: TL; DR: The driver choice depends on multiple things: Additional dependencies Programming model Scalability Being opinionated regarding the implementation of high-level features Prospect of your project, the direction in which you want to evolve Explanation Additional dependencies Some projects are opinionated regarding additional dependencies and transient dependencies when adding a library. Jedis is almost dependency-free, it requires Apache Commons Pool 2 for connection-pooling. Redisson requires Netty, the JCache API and Project Reactor as basic dependencies. It's extensible because it integrates with a lot of other libraries (Tomcat Session store). Programming model That's how you interact with your Redis client. It also defines the abstraction level. Jedis is a low-level driver exposing Redis API as Java method calls: Jedis jedis = …; jedis.set("key", "value"); List<String> values = jedis.mget("key", "key2", "key3"); Redisson is a high-level client that exposes its functionality through various API objects: Redisson redisson = … RMap map = redisson.getMap("my-map"); // implement java.util.Map map.put("key", "value"); map.containsKey("key"); map.get("key"); Each call invokes one or more Redis calls, some of them are implemented with Lua (Redis "Scripting"). Scalability There are multiple drivers available for Java that come with various properties that might fit your project. Scalability plays into that as well. Looking at drivers it boils down how drivers, work with their resources and which programming models they support. Jedis uses blocking I/O and method calls are synchronous. Your program flow is required to wait until I/O is handled by the sockets. There's no asynchronous (Future, CompletableFuture) or reactive support (RxJava Observable or Reactive Streams Publisher). Jedis client instances are not thread-safe hence they require connection-pooling (Jedis-instance per calling thread). Redisson uses non-blocking I/O and an event-driven communication layer with netty. Method calls are synchronous, asynchronous or reactive (via Project Reactor 2.0 or 3.1). Connections are pooled, but the API itself is thread-safe and requires fewer resources. I'm not entirely sure, but maybe you can even operate on a single connection. That's the most efficient way when working with Redis. Opinion about the client implementation These paragraphs deal with how the clients are implemented. Both clients have an excellent feature coverage, and you can fulfill your requirements with both libraries. Jedis is a straightforward implementation that just writes commands to an OutputStream and parses the responses. No more than that. If you want high-level features, then you need to implement these by using the Redis API. It gives you full control over the commands you invoke and the resulting behavior. Implementing your features might require additional efforts here. Redisson is a high-level client that provides features through its abstractions. While you can use these objects without the need of knowing they are backed by Redis (Map, List, Set, …), each API call translates to one or more Redis calls, some to Lua script execution. You might like or dislike the way Redisson behaves and how it implements the features, but in the end, there's not much you can do about it. Using Redissons high-level features might reduce your implementation efforts. Outlook That section entirely depends on where you're heading to. 
Jedis supports all Redis API commands, Redis Standalone, Redis Sentinel and Redis Cluster. There are no slave reads in master-slave setups, but I assume that's just a matter of time until jedis will provide these features. With jedis, you can't go async and using advanced features of AWS ElastiCache or slave reads requires your own implementation. Redisson has a broad coverage of various setups. It supports all the things Jedis supports and provides read strategies for Master/Slave setups, has improved support for AWS ElastiCache.
Redis
42,250,951
52
How can I find keys matching a pattern like this? E.g., I have some keys:

abc:parent1
abc:parent2
abc:parent1:child1
abc:parent2:child2

How can I find only

abc:parent1
abc:parent2
KEYS is specifically noted as a command not to be run in production due to the way it works. What you need here is to create an index of your keys. Use a set for storing the key names of the pattern you want. When you add a new key, add its name to the set. For example:

SET abc:parent1:child1 breakfast
SADD abc:parent1:index abc:parent1

Then when you need the list:

SMEMBERS abc:parent1:index

will give you the list, without the penalties and problems associated with using the "evil" KEYS command. Additionally, you would remove an entry with SREM on key deletion. You also get as a benefit the ability to know how many keys are in the index with a single call.

If you absolutely, positively, MUST avoid using an index, use SCAN instead of KEYS. The only time you should even consider KEYS is if you are running a debug slave where the only process using it is your debugging process.
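An illustrative redis-py version of the index pattern above, applied to the question's parent keys (the index key name abc:parents is an assumption):

import redis

r = redis.Redis(decode_responses=True)

def create_parent(name, value):
    pipe = r.pipeline(transaction=True)
    pipe.set(name, value)
    pipe.sadd("abc:parents", name)    # maintain the index alongside the key
    pipe.execute()

create_parent("abc:parent1", "breakfast")
create_parent("abc:parent2", "lunch")

print(r.smembers("abc:parents"))      # {'abc:parent1', 'abc:parent2'}
print(r.scard("abc:parents"))         # how many, in a single call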
Redis
32,474,699
52
I noticed that there are two different projects for using redis for django cache https://github.com/sebleier/django-redis-cache/ https://github.com/niwibe/django-redis Is one better known than the other, more of a standard package? I can't decide which to use.
I am currently using django-redis as the cache backend for Redis. I haven't used django-redis-cache so far, but what made me decide to use django-redis is the following:

- Modular client system (pluggable clients). Some of the pluggable clients come out of the box (shard client, herd client, etc.)
- Master-Slave support in the default client.
- Facilities for raw access to the Redis client/connection pool (very useful).
- Better documented.

On the django-redis documentation site, you can find more reasons to consider it. What I can tell from my experience so far is that I am very happy with django-redis.
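For reference, a hedged sketch of the settings.py wiring for django-redis (the names follow its documentation at the time of writing; the Redis URL and DB number are assumptions):

# settings.py (sketch)
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}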
Redis
21,932,097
52
I want to stop the redis server and it just keeps going and going. I am using redis-2.6.7.

Check that it is running: redis-server says "...bind: Address already in use", so it is already running.

I have tried:

redis-cli
redis 127.0.0.1:6379> shutdown

It just hangs and nothing happens. I break out and check, yes, it is still running.

I have tried redis-server stop, and I get "can't open config file 'stop'".

I tried killall redis-server. Still running.

The reason that I want to stop it is that it just hangs when I try to set or get a value via Python, so I thought I would restart it.

EDIT: No commands seem to work from redis-cli. I also tried INFO and it just hangs.
I finally got it down. Get the PID of the process (this worked in Webfaction):

ps -u my_account -o pid,rss,command | grep redis

Then

kill -9 the_pid

I was able to REPRODUCE this issue:

1. Start redis-server
2. Then break it using the Pause/Break key

Now it hangs and it won't shut down normally. Also the Python program trying to set/get a key hangs. To avoid this: just close the window after starting redis-server. It's now running normally.
Redis
15,088,053
52
I know there are node.js libraries for Redis; what I'd like to do is run a Redis server (either on localhost or on a server host somewhere) and call it directly via HTTP (i.e. AJAX or HTTP GET as needed) from JavaScript running inside a browser (i.e. a Greasemonkey or Chrome Extension script, or maybe a bookmarklet or SCRIPT tag). Does Redis have a native REST or HTTP API?
You can't connect directly to Redis from JavaScript running in a browser because Redis does not speak HTTP. What you can do is put webdis in front of Redis, it makes it possible work with a Redis instance over a HTTP interface.
Redis
5,759,120
52
What are the implications of disabling gossip, mingle, and heartbeat on my celery workers? In order to reduce the number of messages sent to CloudAMQP to stay within the free plan, I decided to follow these recommendations. I therefore used the options --without-gossip --without-mingle --without-heartbeat. Since then, I have been using these options by default for all my celery projects but I am not sure if there are any side-effects I am not aware of. Please note: we now moved to a Redis broker and do not have that much limitations on the number of messages sent to the broker we have several instances running multiple celery workers with multiple queues
This is the base documentation which doesn't give us much info heartbeat Is related to communication between the worker and the broker (in your case the broker is CloudAMQP). See explanation With the --without-heartbeat the worker won't send heartbeat events mingle It only asks for "logical clocks" and "revoked tasks" from other workers on startup. Taken from whatsnew-3.1 The worker will now attempt to synchronize with other workers in the same cluster. Synchronized data currently includes revoked tasks and logical clock. This only happens at startup and causes a one second startup delay to collect broadcast responses from other workers. You can disable this bootstep using the --without-mingle argument. Also see docs gossip Workers send events to all other workers and this is currently used for "clock synchronization", but it's also possible to write your own handlers on events, such as on_node_join, See docs Taken from whatsnew-3.1 Workers are now passively subscribing to worker related events like heartbeats. This means that a worker knows what other workers are doing and can detect if they go offline. Currently this is only used for clock synchronization, but there are many possibilities for future additions and you can write extensions that take advantage of this already. Some ideas include consensus protocols, reroute task to best worker (based on resource usage or data locality) or restarting workers when they crash. We believe that although this is a small addition, it opens amazing possibilities. You can disable this bootstep using the --without-gossip argument.
Redis
55,249,197
51
When using the Redis expire commands like SETEX and TTL, there are scenarios in which there is no need for the key to hold a value at all, because the time to live acts as such. However, Redis requires any key to have a value. What would be the most reasonable value to use - if you don't ever want to read it?
Who said that you should actually store anything in a redis key? An empty string "" is a perfectly valid value for a redis key, and it's the shortest possible one:

> SET foo ""
OK
> GET foo
""
> BITCOUNT foo
(integer) 0
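In the TTL-only scenario from the question, that looks like this with redis-py (an illustrative sketch; the key name and TTL are assumptions):

import redis

r = redis.Redis(decode_responses=True)

# The value carries no information; the key's existence plus its TTL does.
r.set("ratelimit:user:42", "", ex=300)

print(r.exists("ratelimit:user:42"))   # 1 while the key is alive
print(r.ttl("ratelimit:user:42"))      # remaining seconds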
Redis
25,557,250
51
I'm using redis-py in my Python application to store simple variables or lists of variables in a Redis database, so I thought it would be better to create a connection to the Redis server every time I need to save or retrieve a variable, as this is not done very often and I don't want a permanent connection that may time out. After reading through some basic tutorials, I created the connections using the Redis class, but have not found a way to close the connection, as this is the first time I'm using Redis. I'm not sure if I'm using the best approach for managing the connections, so I would like some advice on this. This is how I'm setting or getting a variable now:

import redis

def getVariable(variable_name):
    my_server = redis.Redis("10.0.0.1")
    response = my_server.get(variable_name)
    return response

def setVariable(variable_name, variable_value):
    my_server = redis.Redis("10.0.0.1")
    my_server.set(variable_name, variable_value)

I basically use this code to store the last connection time or to get an average of requests per second done to my app and stuff like that. Thanks for your advice.
Python uses a reference counter mechanism to deal with objects, so at the end of the blocks, the my_server object will be automatically destroyed and the connection closed. You do not need to close it explicitly.

Now this is not how you are supposed to manage Redis connections. Connecting/disconnecting for each operation is too expensive, so it is much better to maintain the connection opened. With redis-py it can be done by declaring a pool of connections:

import redis

POOL = redis.ConnectionPool(host='10.0.0.1', port=6379, db=0)

def getVariable(variable_name):
    my_server = redis.Redis(connection_pool=POOL)
    response = my_server.get(variable_name)
    return response

def setVariable(variable_name, variable_value):
    my_server = redis.Redis(connection_pool=POOL)
    my_server.set(variable_name, variable_value)

Please note connection pool management is mostly automatic and done within redis-py.
Redis
12,967,107
51