question | answer | tag | question_id | score
---|---|---|---|---|
I am new to the PostgreSQL database and I want to know if there are any GUI tools for PostgreSQL, just like SQLYog for MySQL?
| There is a comprehensive list of tools on the PostgreSQL Wiki:
https://wiki.postgresql.org/wiki/PostgreSQL_Clients
And of course PostgreSQL itself comes with pgAdmin, a GUI tool for accessing Postgres databases.
| PostgreSQL | 9,667,264 | 183 |
In PostgreSQL 9.3 Beta 2 (?), how do I create an index on a JSON field? I tried it using the -> operator used for hstore but got the following error:
CREATE TABLE publishers(id INT, info JSON);
CREATE INDEX ON publishers((info->'name'));
ERROR: data type json has no default operator class for access method "btree"
HINT: You must specify an operator class for the index or define a default operator class for the data type.
| Found:
CREATE TABLE publishers(id INT, info JSON);
CREATE INDEX ON publishers((info->>'name'));
As stated in the comments, the subtle difference here is ->> instead of ->. The former returns the value as text, the latter as a JSON object.
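For the index to actually be used, queries must repeat the same expression. A minimal sketch (the value 'ACME' is hypothetical):
SELECT * FROM publishers WHERE info->>'name' = 'ACME';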
| PostgreSQL | 17,807,030 | 182 |
I have a table with a NOT NULL column. How can I allow NULL values in this column?
I mean, I want to do something like this:
postgres=# ALTER TABLE person ALTER COLUMN phone SET NULL;
but it shows:
postgres=# ALTER TABLE person ALTER COLUMN phone SET NULL;
ERROR: syntax error at or near "NULL"
LINE 1: ALTER TABLE person ALTER COLUMN phone SET NULL;
| ALTER TABLE person ALTER COLUMN phone DROP NOT NULL;
More details in the manual: http://www.postgresql.org/docs/9.1/static/sql-altertable.html
| PostgreSQL | 13,643,806 | 180 |
I'm dealing with dates and times in Rails and Postgres and running into this issue:
The database is in UTC.
The user sets a time zone of choice in the Rails app, but it's only to be used when getting the user's local time for comparing times.
User stores a time, say March 17, 2012, 7pm. I don't want time zone conversions or the time zone to be stored. I just want that date and time saved. That way, if the user changed their time zone, it would still show March 17, 2012, 7pm.
I only use the user's specified time zone to get records "before" or "after" the current time in the user's local time zone.
I currently use timestamp without time zone but when I retrieve rows, Rails (?) converts them to the time zone in the app, which I don't want.
Appointment.first.time
=> Fri, 02 Mar 2012 19:00:00 UTC +00:00
Because the rows in the database seem to come out as UTC, my hack is to take the current time, remove the time zone with Date.strptime(str, "%m/%d/%Y") and then do my query with that:
.where("time >= ?", date_start)
It seems like there must be an easier way to just ignore time zones all around?
| Postgres has two different timestamp data types:
timestamp with time zone, short name: timestamptz
timestamp without time zone, short name: timestamp
timestamptz is the preferred type in the date/time family, literally. It has typispreferred set in pg_type, which can be relevant:
Generating time series between two dates in PostgreSQL
Internal storage and epoch
Internally, timestamps occupy 8 bytes of storage on disk and in RAM. It is an integer value representing the count of microseconds from the Postgres epoch, 2000-01-01 00:00:00 UTC.
Postgres also has built-in knowledge of the commonly used UNIX time counting seconds from the UNIX epoch, 1970-01-01 00:00:00 UTC, and uses that in functions to_timestamp(double precision) or EXTRACT(EPOCH FROM timestamptz).
The source code:
* Timestamps, as well as the h/m/s fields of intervals, are stored as
* int64 values with units of microseconds. (Once upon a time they were
* double values with units of seconds.)
And:
/* Julian-date equivalents of Day 0 in Unix and Postgres reckoning */
#define UNIX_EPOCH_JDATE 2440588 /* == date2j(1970, 1, 1) */
#define POSTGRES_EPOCH_JDATE 2451545 /* == date2j(2000, 1, 1) */
The microsecond resolution translates to a maximum of 6 fractional digits for seconds.
timestamp
For timestamp no time zone is provided explicitly. Postgres ignores any time zone modifier added to input literals!
Nothing is shifted for display. With everything happening in the same time zone this is fine. For a different time zone the meaning changes, but value and display stay the same.
timestamptz
Handling of timestamptz is subtly different. The manual:
For timestamp with time zone, the internally stored value is always in UTC (Universal Coordinated Time ...)
Bold emphasis mine. The time zone itself is never stored. It is an input modifier used to compute the according UTC timestamp, which is stored. Or an output decorator according to the timezone setting of the current session. For input literals without offset, the current timezone setting is assumed. All computations are done with UTC timestamp values.
If more than one time zone may be involved, or if there can be any doubt or misunderstanding, go with timestamptz. Best for most use cases.
Clients like psql or pgAdmin or any application communicating via libpq (like Ruby with the pg gem) are presented with the offset for the current time zone or according to a given time zone (see below). It's always the same point in time, only the display format varies. As the manual puts it:
All timezone-aware dates and times are stored internally in UTC. They
are converted to local time in the zone specified by the TimeZone
configuration parameter before being displayed to the client.
Example in psql:
db=# SELECT timestamptz '2012-03-05 20:00+03';
timestamptz
------------------------
2012-03-05 18:00:00+01
What happened here?
The input literal with (arbitrary) time zone offset +03 is just another way to format the UTC timestamp 2012-03-05 17:00:00. The result of the query is displayed for the current time zone setting, "Vienna/Austria" in my test, with an offset +01 during winter and +02 during summer time ("daylight saving time", DST). So 2012-03-05 18:00:00+01 for the "winter" time.
Postgres only retains the value. Just like with a decimal number: numeric '003.4' or numeric '+3.4' - either results in the exact same internal value.
AT TIME ZONE
To project timestamp values to a specific time zone, use the AT TIME ZONE construct. timestamptz is converted to timestamp and vice versa.
To get UTC 2012-03-05 17:00:00+0 as timestamptz:
SELECT timestamp '2012-03-05 17:00:00' AT TIME ZONE 'UTC'
... which is equivalent to:
SELECT timestamptz '2012-03-05 17:00:00 UTC'
To display the same point in time as EST timestamp (Eastern Standard Time):
SELECT timestamp '2012-03-05 17:00:00' AT TIME ZONE 'UTC' AT TIME ZONE 'EST'
That's right, AT TIME ZONE 'UTC' twice. The first interprets the timestamp value as (given) UTC timestamp returning the type timestamptz. The second converts timestamptz to timestamp as seen on a wall clock in the given time zone 'EST' at this point in time.
Examples
SELECT ts AT TIME ZONE 'UTC'
FROM (
VALUES
(1, timestamptz '2012-03-05 17:00:00+0')
, (2, timestamptz '2012-03-05 18:00:00+1')
, (3, timestamptz '2012-03-05 17:00:00 UTC')
, (4, timestamp '2012-03-05 11:00:00' AT TIME ZONE '+6')
, (5, timestamp '2012-03-05 17:00:00' AT TIME ZONE 'UTC')
, (6, timestamp '2012-03-05 07:00:00' AT TIME ZONE 'US/Hawaii') -- ①
, (7, timestamptz '2012-03-05 07:00:00 US/Hawaii') -- ①
, (8, timestamp '2012-03-05 07:00:00' AT TIME ZONE 'HST') -- ①
, (9, timestamp '2012-03-05 18:00:00+1') -- ② loaded footgun!
) t(id, ts);
Returns 8 (or 9) identical rows with a timestamptz column representing UTC timestamp 2012-03-05 17:00:00. The 9th row sort of happens to work in my time zone, but is an evil trap.
① Rows 6 - 8 with time zone name and time zone abbreviation for Hawaii time are subject to DST (daylight saving time) and might differ, though not for the given winter times. A time zone name like 'US/Hawaii' is aware of DST rules and all historic shifts, while an abbreviation like HST is just a dumb code for a fixed offset. You may need to use a different abbreviation for summer / standard time. The name correctly adjusts any timestamp at any point in time (as recorded in the underlying library). An abbreviation is cheap, but needs to be the right one for the given timestamp:
Time zone names with identical properties yield different result when applied to timestamp
Daylight Saving Time is not among the brightest ideas humanity ever came up with.
② Row 9, marked as loaded footgun happens to work for me. For timestamp [without time zone] input, any time zone offset is ignored! Only the bare timestamp is used. The value is then coerced to timestamptz in the example to match the column type. For this step, the timezone setting of the current session is assumed, which happens to be Europe/Vienna for me and matches +1. But not in other cases, resulting in a different value. In short: Don't cast timestamptz literals to timestamp or you lose the time zone offset.
Your questions
User stores a time, say March 17, 2012, 7pm. I don't want timezone
conversions or the timezone to be stored.
Time zone itself is never stored. Use one of the methods above to enter a UTC timestamp.
I only use the user's specified time zone to get records 'before' or
'after' the current time in the user's local time zone.
You can use one query for all clients in different time zones.
For absolute global time:
SELECT * FROM tbl WHERE time_col > (now() AT TIME ZONE 'UTC')::time
For time according to the local clock:
SELECT * FROM tbl WHERE time_col > now()::time
Not tired of background information, yet? There is more in the manual.
| PostgreSQL | 9,571,392 | 179 |
I was wondering if anyone would be able to tell me whether it is possible to use the shell to check if a PostgreSQL database exists?
I am making a shell script and I only want it to create the database if it doesn't already exist, but up to now I haven't been able to see how to implement it.
| Note/Update (2021): While this answer works, philosophically I agree with other comments that the right way to do this is to ask Postgres.
Check whether the other answers that have psql -c or --command in them are a better fit for your use case (e.g. Nicholas Grilly's, Nathan Osman's, bruce's or Pedro's variant).
I use the following modification of Arturo's solution:
psql -lqt | cut -d \| -f 1 | grep -qw <db_name>
What it does
psql -l outputs something like the following:
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+-----------+----------+------------+------------+-----------------------
my_db | my_user | UTF8 | en_US.UTF8 | en_US.UTF8 |
postgres | postgres | LATIN1 | en_US | en_US |
template0 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
Using the naive approach means that searching for a database called "List", "Access" or "rows" will succeed. So we pipe this output through a bunch of built-in command line tools to only search in the first column.
The -t flag removes headers and footers:
my_db | my_user | UTF8 | en_US.UTF8 | en_US.UTF8 |
postgres | postgres | LATIN1 | en_US | en_US |
template0 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | LATIN1 | en_US | en_US | =c/postgres +
| | | | | postgres=CTc/postgres
The next bit, cut -d \| -f 1 splits the output by the vertical pipe | character (escaped from the shell with a backslash), and selects field 1. This leaves:
my_db
postgres
template0
template1
grep -w matches whole words, and so won't match if you are searching for temp in this scenario. The -q option suppresses any output written to the screen, so if you want to run this interactively at a command prompt you may wish to exclude the -q so something gets displayed immediately.
Note that grep -w matches alphanumeric characters and the underscore, which is exactly the set of characters allowed in unquoted database names in PostgreSQL (hyphens are not legal in unquoted identifiers). If you are using other characters, grep -w won't work for you.
The exit status of this whole pipeline will be 0 (success) if the database exists or 1 (failure) if it doesn't. Your shell will set the special variable $? to the exit status of the last command. You can also test the status directly in a conditional:
if psql -lqt | cut -d \| -f 1 | grep -qw <db_name>; then
# database exists
# $? is 0
else
# ruh-roh
# $? is 1
fi
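Putting it together for the original goal (creating the database only when it is missing), a minimal sketch, assuming psql and createdb can connect with your default credentials; db_name is a placeholder:
#!/bin/sh
db_name="mydb"   # placeholder: your database name
if ! psql -lqt | cut -d \| -f 1 | grep -qw "$db_name"; then
    createdb "$db_name"   # only runs when the database is absent
fi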
| PostgreSQL | 14,549,270 | 177 |
I'm trying to install PostgreSQL for Rails on Mac OS X 10.6. First I tried the MacPorts install but that didn't go well so I did the one-click DMG install. That seemed to work.
I suspect I need to install the PostgreSQL development packages but I have no idea how to do that on OS X.
Here's what I get when I try to do sudo gem install pg:
$ sudo gem install pg
Building native extensions. This could take a while...
ERROR: Error installing pg:
ERROR: Failed to build gem native extension.
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
checking for pg_config... yes
Using config values from /Library/PostgreSQL/8.3/bin/pg_config
checking for libpq-fe.h... yes
checking for libpq/libpq-fs.h... yes
checking for PQconnectdb() in -lpq... no
checking for PQconnectdb() in -llibpq... no
checking for PQconnectdb() in -lms/libpq... no
Can't find the PostgreSQL client library (libpq)
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
--with-pg
--without-pg
--with-pg-dir
--without-pg-dir
--with-pg-include
--without-pg-include=${pg-dir}/include
--with-pg-lib
--without-pg-lib=${pg-dir}/lib
--with-pg-config
--without-pg-config
--with-pg_config
--without-pg_config
--with-pqlib
--without-pqlib
--with-libpqlib
--without-libpqlib
--with-ms/libpqlib
--without-ms/libpqlib
Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/pg-0.11.0 for inspection.
Results logged to /Library/Ruby/Gems/1.8/gems/pg-0.11.0/ext/gem_make.out
| $ sudo su
$ env ARCHFLAGS="-arch x86_64" gem install pg
Building native extensions. This could take a while...
Successfully installed pg-0.11.0
1 gem installed
Installing ri documentation for pg-0.11.0...
Installing RDoc documentation for pg-0.11.0...
WORKED!
| PostgreSQL | 6,209,797 | 177 |
I'm using Python and psycopg2 to interface to postgres.
When I insert a row...
sql_string = "INSERT INTO hundred (name,name_slug,status) VALUES ("
sql_string += hundred_name + ", '" + hundred_slug + "', " + status + ");"
cursor.execute(sql_string)
... how do I get the ID of the row I've just inserted? Trying:
hundred = cursor.fetchall()
returns an error, while using RETURNING id:
sql_string = "INSERT INTO domes_hundred (name,name_slug,status) VALUES ("
sql_string += hundred_name + ", '" + hundred_slug + "', " + status + ") RETURNING id;"
hundred = cursor.execute(sql_string)
simply returns None.
UPDATE: So does currval (even though using this command directly in postgres works):
sql_string = "SELECT currval(pg_get_serial_sequence('hundred', 'id'));"
hundred_id = cursor.execute(sql_string)
Can anyone advise?
thanks!
| cursor.execute("INSERT INTO .... RETURNING id")
id_of_new_row = cursor.fetchone()[0]
And please do not build SQL strings containing values manually. You can (and should!) pass values separately, making escaping unnecessary and SQL injection impossible:
sql_string = "INSERT INTO domes_hundred (name,name_slug,status) VALUES (%s,%s,%s) RETURNING id;"
cursor.execute(sql_string, (hundred_name, hundred_slug, status))
hundred = cursor.fetchone()[0]
See the psycopg docs for more details: http://initd.org/psycopg/docs/usage.html#passing-parameters-to-sql-queries
| PostgreSQL | 5,247,685 | 177 |
I have a table into which I am trying to insert multiple values at once. Here is the table schema:
Column | Type | Modifiers
---------------+---------+-----------
user_id | integer |
subservice_id | integer |
I have the user_id and want to insert multiple subservice_id's at once. Is there a syntax in Postgres that will let me do something like this
insert into user_subservices(user_id, subservice_id) values(1, [1, 2, 3]);
How would I do this?
| Multi-value insert syntax is:
insert into table_name values (1,1), (1,2), (1,3), (2,1);
When you need to specify columns:
insert into table_name (user_id, subservice_id) values
(1, 1),
(1, 2),
(1, 3),
(2, 1);
When you need to get the inserted id for example:
insert into table_name (user_id, subservice_id) values
(1, 1),
(1, 2),
(1, 3),
(2, 1)
returning id;
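Since the question asked about an array-style syntax: a sketch of an equivalent insert that expands an array with unnest() (same table and values as in the question):
insert into user_subservices(user_id, subservice_id)
select 1, unnest(array[1, 2, 3]);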
| PostgreSQL | 20,815,028 | 176 |
Table 'animals':
animal_name animal_type
Tom Cat
Jerry Mouse
Kermit Frog
Query:
SELECT
array_to_string(array_agg(animal_name),';') animal_names,
array_to_string(array_agg(animal_type),';') animal_types
FROM animals;
Expected result:
Tom;Jerry;Kermit, Cat;Mouse;Frog
OR
Tom;Kermit;Jerry, Cat;Frog;Mouse
Can I be sure that the order in the first aggregate function will always be the same as in the second?
I mean I wouldn't like to get:
Tom;Jerry;Kermit, Frog;Mouse;Cat
| Use an ORDER BY, like this example from the manual:
SELECT array_agg(a ORDER BY b DESC) FROM table;
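Applied to the animals table from the question, ordering both aggregates by the same key guarantees matching element order (a sketch):
SELECT
array_to_string(array_agg(animal_name ORDER BY animal_name),';') animal_names,
array_to_string(array_agg(animal_type ORDER BY animal_name),';') animal_types
FROM animals;
-- e.g. Jerry;Kermit;Tom, Mouse;Frog;Cat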
| PostgreSQL | 7,317,475 | 176 |
I'm trying to insert data into a table from another table, and the tables have only one column in common. The problem is that TABLE1 has columns that won't accept null values, so I can't leave them empty and I can't get them from TABLE2.
I have TABLE1:
id, col_1 (not null), col_2(not null), col_3 (not null)
and TABLE2:
id, col_a, col_b, col_c
so how could I insert id from TABLE2 to TABLE1 and fill the col_1-3 with hard coded strings like "data1", "data2", "data3"?
INSERT INTO TABLE1 (id) SELECT id FROM TABLE2 WHERE col_a = "something";
will result in:
ERROR: null value in column "col_1" violates not-null constraint
| You can supply literal values in the SELECT:
INSERT INTO TABLE1 (id, col_1, col_2, col_3)
SELECT id, 'data1', 'data2', 'data3'
FROM TABLE2
WHERE col_a = 'something';
A select list can contain any value expression:
But the expressions in the select list do not have to reference any columns in the table expression of the FROM clause; they can be constant arithmetic expressions, for instance.
And a string literal is certainly a value expression.
| PostgreSQL | 6,898,520 | 176 |
How do I add multiple columns in one query statement in PostgreSQL using pgadmin3?
Try this:
ALTER TABLE table_name ADD COLUMN col1 int, ADD COLUMN col2 int;
| PostgreSQL | 5,260,697 | 176 |
I'm a little new to Postgres. I want to take a value (which is an integer) in a field in a Postgres table and increment it by one. For example, if the table 'totals' had 2 columns, 'name' and 'total', and Bill had a total of 203, what would be the SQL statement I'd use in order to move Bill's total to 204?
| UPDATE totals
SET total = total + 1
WHERE name = 'bill';
If you want to make sure the current value is indeed 203 (and not accidently increase it again) you can also add another condition:
UPDATE totals
SET total = total + 1
WHERE name = 'bill'
AND total = 203;
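In Postgres you can also read back the new value in the same statement with a RETURNING clause (a sketch):
UPDATE totals
SET total = total + 1
WHERE name = 'bill'
RETURNING total;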
| PostgreSQL | 10,233,298 | 175 |
I just upgraded to postgres 10.2 on mac os which matches 10.2 on heroku. I'm trying to download a copy of the database and restore it locally. Before the upgrade the restore would work fine.
I run
pg_restore --verbose --clean --no-acl --no-owner -h localhost -d database_name backup.dump
but I am getting this error:
pg_restore: [archiver] unsupported version (1.13) in file header
The database appears to be working OK. It's a rails app and I upgraded the pg gems. I can run rake db:create and db:migrate just fine.
You need to upgrade your local Postgres to get the security patch from 2018-03-01, like Heroku did on March 1st. You need one of the latest releases: 10.3, 9.6.8, 9.5.12, 9.4.17, or 9.3.22.
The security patch announcement can be found here: https://www.postgresql.org/about/news/1834/.
It seems the patch modified pg_dump; that's probably why an unpatched pg_restore can no longer read dumps produced by Heroku (where the patch is applied).
| PostgreSQL | 49,064,209 | 174 |
What is the difference between ->> and -> in SQL?
In this thread (Check if field exists in json type column postgresql), the answerer basically recommends using,
json->'attribute' is not null
instead of,
json->>'attribute' is not null
Why use a single arrow instead of a double arrow? In my limited experience, both do the same thing.
| -> returns json (or jsonb) and ->> returns text:
with t (jo, ja) as (values
('{"a":"b"}'::jsonb,('[1,2]')::jsonb)
)
select
pg_typeof(jo -> 'a'), pg_typeof(jo ->> 'a'),
pg_typeof(ja -> 1), pg_typeof(ja ->> 1)
from t
;
pg_typeof | pg_typeof | pg_typeof | pg_typeof
-----------+-----------+-----------+-----------
jsonb | text | jsonb | text
| PostgreSQL | 38,777,535 | 174 |
I'm using Postgres' native array type, and trying to find the records where the ID is not in the array of recipient IDs.
I can find where they are IN:
SELECT COUNT(*) FROM messages WHERE (3 = ANY (recipient_ids))
But this doesn't work:
SELECT COUNT(*) FROM messages WHERE (3 != ANY (recipient_ids))
SELECT COUNT(*) FROM messages WHERE (3 = NOT ANY (recipient_ids))
What's the right way to test for this condition?
| SELECT COUNT(*) FROM "messages" WHERE NOT (3 = ANY (recipient_ids))
You can always negate WHERE (condition) with WHERE NOT (condition)
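An equivalent formulation uses ALL instead of negating ANY (a sketch):
SELECT COUNT(*) FROM "messages" WHERE 3 <> ALL (recipient_ids);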
| PostgreSQL | 11,730,777 | 174 |
In PostgreSQL I have a table with a varchar column. The data is supposed to be integers and I need it in integer type in a query. Some values are empty strings.
The following:
SELECT myfield::integer FROM mytable
yields ERROR: invalid input syntax for integer: ""
How can I query a cast and have 0 in case of error during the cast in postgres?
| I was just wrestling with a similar problem myself, but didn't want the overhead of a function. I came up with the following query:
SELECT myfield::integer FROM mytable WHERE myfield ~ E'^\\d+$';
Postgres shortcuts its conditionals, so you shouldn't get any non-integers hitting your ::integer cast. It also handles NULL values (they won't match the regexp).
If you want zeros instead of not selecting, then a CASE statement should work:
SELECT CASE WHEN myfield~E'^\\d+$' THEN myfield::integer ELSE 0 END FROM mytable;
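If empty strings are the only problematic values, as in the question, a compact alternative is to turn them into NULL before casting (a sketch):
SELECT COALESCE(NULLIF(myfield, ''), '0')::integer FROM mytable;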
| PostgreSQL | 2,082,686 | 174 |
Does PostgreSQL support computed columns like MS SQL Server?
I can't find anything in the docs, but the feature is included in many other DBMS so maybe I am missing something?
| Postgres 12 or newer
STORED generated columns are introduced with Postgres 12 - as defined in the SQL standard and implemented by some RDBMS including DB2, MySQL, and Oracle. Or the similar "computed columns" of SQL Server.
Trivial example:
CREATE TABLE tbl (
int1 int
, int2 int
, product bigint GENERATED ALWAYS AS (int1 * int2) STORED
);
fiddle
VIRTUAL generated columns may come with one of the next iterations. (Not in Postgres 16, yet).
Related:
Attribute notation for function call gives error
Postgres 11 or older
Up to Postgres 11 "generated columns" are not supported.
You can emulate VIRTUAL generated columns with a function using attribute notation (tbl.col) that looks and works much like a virtual generated column. That's a bit of a syntax oddity which exists in Postgres for historic reasons and happens to fit the case. This related answer has code examples:
Store common query as column?
The expression (looking like a column) is not included in a SELECT * FROM tbl, though. You always have to list it explicitly.
Can also be supported with a matching expression index - provided the function is IMMUTABLE. Like:
CREATE FUNCTION col(tbl) ... AS ... -- your computed expression here
CREATE INDEX ON tbl(col(tbl));
Alternatives
Alternatively, you can implement similar functionality with a VIEW, optionally coupled with expression indexes. Then SELECT * can include the generated column.
"Persisted" (STORED) computed columns can be implemented with triggers in a functionally equivalent way.
Materialized views are a related concept, implemented since Postgres 9.3.
In earlier versions one can manage MVs manually.
| PostgreSQL | 8,250,389 | 173 |
Is it possible to search every column of every table for a particular value in PostgreSQL?
A similar question is available here for Oracle.
| How about dumping the contents of the database, then using grep?
$ pg_dump --data-only --inserts -U postgres your-db-name > a.tmp
$ grep United a.tmp
INSERT INTO countries VALUES ('US', 'United States');
INSERT INTO countries VALUES ('GB', 'United Kingdom');
The same utility, pg_dump, can include column names in the output. Just change --inserts to --column-inserts. That way you can search for specific column names, too. But if I were looking for column names, I'd probably dump the schema instead of the data.
$ pg_dump --data-only --column-inserts -U postgres your-db-name > a.tmp
$ grep country_code a.tmp
INSERT INTO countries (iso_country_code, iso_country_name) VALUES ('US', 'United States');
INSERT INTO countries (iso_country_code, iso_country_name) VALUES ('GB', 'United Kingdom');
| PostgreSQL | 5,350,088 | 173 |
I installed PostgreSQL 9 and the time it is showing is 1 hour behind the server time.
Running Select NOW() shows: 2011-07-12 11:51:50.453842+00
The server date shows: Tue Jul 12 12:51:40 BST 2011
It is 1 hour behind but the timezone shown in phppgadmin is: TimeZone Etc/GMT0
I have tried going into the postgresql.conf and setting
timezone = GMT
then running a restart but no change.
Any ideas I thought it would have just used the server timezone but obviously not?!
SOLUTION!:
I had set it to GMT before and it was an hour behind. After searching around, it turns out that I needed to set it to Europe/London. This accounts for the +1 hour of British Summer Time; GMT does not!
| The time zone is a session parameter. So, you can change the timezone for the current session.
See the doc.
set timezone TO 'GMT';
Or, more closely following the SQL standard, use the SET TIME ZONE command. Notice two words for "TIME ZONE" where the code above uses a single word "timezone".
SET TIME ZONE 'UTC';
The doc explains the difference:
SET TIME ZONE extends syntax defined in the SQL standard. The standard allows only numeric time zone offsets while PostgreSQL allows more flexible time-zone specifications. All other SET features are PostgreSQL extensions.
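To persist the setting beyond the current session, it can also be set per database, so new sessions pick it up automatically (a sketch; the database name is a placeholder):
ALTER DATABASE mydb SET timezone TO 'Europe/London';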
| PostgreSQL | 6,663,765 | 172 |
This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
| I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
| PostgreSQL | 2,783,813 | 172 |
I want to add indexes to some of the columns in a table on creation. Is there are way to add them to the CREATE TABLE definition or do I have to add them afterward with another query?
CREATE INDEX reply_user_id ON reply USING btree (user_id);
| There doesn't seem to be any way of specifying an index in the CREATE TABLE syntax. PostgreSQL does however create an index for unique constraints and primary keys by default, as described in this note:
PostgreSQL automatically creates an index for each unique constraint and primary key constraint to enforce uniqueness.
Other than that, if you want a non-unique index, you will need to create it yourself in a separate CREATE INDEX query.
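So in practice the table definition and the non-unique index are two statements. A sketch built around the index from the question (the column list is hypothetical):
CREATE TABLE reply (
    reply_id serial PRIMARY KEY,  -- index created automatically
    user_id  integer
);
CREATE INDEX reply_user_id ON reply USING btree (user_id);  -- separate step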
| PostgreSQL | 6,239,657 | 171 |
I've got a PostgreSQL database that I'd like to configure to accept all incoming connections regardless of the source IP address. How can this be configured in the pg_hba.conf file? I'm using PostgreSQL version 8.4.
Just use 0.0.0.0/0 (and add a second line with ::/0 if you also need to match IPv6 connections).
host all all 0.0.0.0/0 md5
Make sure the listen_addresses in postgresql.conf (or ALTER SYSTEM SET) allows incoming connections on all available IP interfaces.
listen_addresses = '*'
After the changes you have to reload the configuration. One way to do this is execute this SELECT as a superuser.
SELECT pg_reload_conf();
Note: to change listen_addresses, a reload is not enough, and you have to restart the server.
| PostgreSQL | 3,278,379 | 171 |
I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id
There are multiple users (distinct usr_id's)
time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
trans_id is unique only for very small time ranges: over time it repeats
remaining_lives (for a given user) can both increase and decrease over time
example:
time_stamp|lives_remaining|usr_id|trans_id
-----------------------------------------
07:00 | 1 | 1 | 1
09:00 | 4 | 2 | 2
10:00 | 2 | 3 | 3
10:00 | 1 | 2 | 4
11:00 | 4 | 1 | 5
11:00 | 3 | 1 | 6
13:00 | 3 | 3 | 1
As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:
time_stamp|lives_remaining|usr_id|trans_id
-----------------------------------------
11:00 | 3 | 1 | 6
10:00 | 1 | 2 | 4
13:00 | 3 | 3 | 1
As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:
SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM
(SELECT usr_id, max(time_stamp) AS max_timestamp
FROM lives GROUP BY usr_id ORDER BY usr_id) a
JOIN lives b ON a.max_timestamp = b.time_stamp
Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work:
SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM
(SELECT usr_id, max(time_stamp || '*' || trans_id)
AS max_timestamp_transid
FROM lives GROUP BY usr_id ORDER BY usr_id) a
JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
ORDER BY b.usr_id
Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize.
I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function?
Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated!
P.S. You can use the following to create my example case:
create TABLE lives (time_stamp timestamp, lives_remaining integer,
usr_id integer, trans_id integer);
insert into lives values ('2000-01-01 07:00', 1, 1, 1);
insert into lives values ('2000-01-01 09:00', 4, 2, 2);
insert into lives values ('2000-01-01 10:00', 2, 3, 3);
insert into lives values ('2000-01-01 10:00', 1, 2, 4);
insert into lives values ('2000-01-01 11:00', 4, 1, 5);
insert into lives values ('2000-01-01 11:00', 3, 1, 6);
insert into lives values ('2000-01-01 13:00', 3, 3, 1);
| I would propose a clean version based on DISTINCT ON (see docs):
SELECT DISTINCT ON (usr_id)
time_stamp,
lives_remaining,
usr_id,
trans_id
FROM lives
ORDER BY usr_id, time_stamp DESC, trans_id DESC;
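Regarding the question's request for index advice: a multicolumn index matching the DISTINCT ON and ORDER BY columns can speed this up (a suggestion, not tested against your data):
CREATE INDEX lives_usr_ts_trans_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);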
| PostgreSQL | 586,781 | 171 |
I have a postgresql db with a number of tables. If I query:
SELECT column_name
FROM information_schema.columns
WHERE table_name="my_table";
I will get a list of the columns returned properly.
However, when I query:
SELECT *
FROM "my_table";
I get the error:
(ProgrammingError) relation "my_table" does not exist
'SELECT *\n FROM "my_table"\n' {}
Any thoughts on why I can get the columns, but can't query the table? Goal is to be able to query the table.
You have to include the schema if it isn't the public one:
SELECT *
FROM <schema>."my_table"
Or you can change your default schema
SHOW search_path;
SET search_path TO my_schema;
Check your table schema here
SELECT *
FROM information_schema.columns
For example if a table is on the default schema public both this will works ok
SELECT * FROM parroquias_region
SELECT * FROM public.parroquias_region
But tables in other schemas, like sectores_point, need the schema specified:
SELECT * FROM map_update.sectores_point
| PostgreSQL | 36,753,568 | 170 |
I have two separately unique columns in a table: col1, col2. Both have a unique index (col1 is unique and so is col2).
I need INSERT ... ON CONFLICT ... DO UPDATE syntax, and update other columns in case of a conflict, but I can't use both columns as conflict_target.
It works:
INSERT INTO table
...
ON CONFLICT ( col1 )
DO UPDATE
SET
-- update needed columns here
But how to do this for several columns, something like this:
...
ON CONFLICT ( col1, col2 )
DO UPDATE
SET
....
Currently using Postgres 9.5.
| ON CONFLICT requires a unique index* to do the conflict detection. So you just need to create a unique index on both columns:
t=# create table t (id integer, a text, b text);
CREATE TABLE
t=# create unique index idx_t_id_a on t (id, a);
CREATE INDEX
t=# insert into t values (1, 'a', 'foo');
INSERT 0 1
t=# insert into t values (1, 'a', 'bar') on conflict (id, a) do update set b = 'bar';
INSERT 0 1
t=# select * from t;
id | a | b
----+---+-----
1 | a | bar
* In addition to unique indexes, you can also use exclusion constraints. These are a bit more general than unique constraints. Suppose your table had columns for id and valid_time (and valid_time is a tsrange), and you wanted to allow duplicate ids, but not for overlapping time periods. A unique constraint won't help you, but with an exclusion constraint you can say "exclude new records if their id equals an old id and also their valid_time overlaps its valid_time."
| PostgreSQL | 35,888,012 | 170 |
I have this function in PostgreSQL, but I don't know how to return the result of the query:
CREATE OR REPLACE FUNCTION wordFrequency(maxTokens INTEGER)
RETURNS SETOF RECORD AS
$$
BEGIN
SELECT text, count(*), 100 / maxTokens * count(*)
FROM (
SELECT text
FROM token
WHERE chartype = 'ALPHABETIC'
LIMIT maxTokens
) AS tokens
GROUP BY text
ORDER BY count DESC
END
$$
LANGUAGE plpgsql;
But I don't know how to return the result of the query inside the PostgreSQL function.
I found that the return type should be SETOF RECORD, right? But the return command is not right.
What is the right way to do this?
| Use RETURN QUERY:
CREATE OR REPLACE FUNCTION word_frequency(_max_tokens int)
RETURNS TABLE (txt text -- also visible as OUT param in function body
, cnt bigint
, ratio bigint)
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY
SELECT t.txt
, count(*) AS cnt -- column alias only visible in this query
, (count(*) * 100) / _max_tokens -- I added parentheses
FROM (
SELECT t.txt
FROM token t
WHERE t.chartype = 'ALPHABETIC'
LIMIT _max_tokens
) t
GROUP BY t.txt
ORDER BY cnt DESC; -- potential ambiguity
END
$func$;
Call:
SELECT * FROM word_frequency(123);
Defining the return type explicitly is much more practical than returning a generic record. This way you don't have to provide a column definition list with every function call. RETURNS TABLE is one way to do that. There are others. Data types of OUT parameters have to match exactly what is returned by the query.
Choose names for OUT parameters carefully. They are visible in the function body almost anywhere. Table-qualify columns of the same name to avoid conflicts or unexpected results. I did that for all columns in my example.
But note the potential naming conflict between the OUT parameter cnt and the column alias of the same name. In this particular case (RETURN QUERY SELECT ...) Postgres uses the column alias over the OUT parameter either way. This can be ambiguous in other contexts, though. There are various ways to avoid any confusion:
Use the ordinal position of the item in the SELECT list: ORDER BY 2 DESC. Example:
Select first row in each GROUP BY group?
Repeat the expression ORDER BY count(*).
(Not required here.) Set the configuration parameter plpgsql.variable_conflict or use the special command #variable_conflict error | use_variable | use_column in the function. See:
Naming conflict between function parameter and result of JOIN with USING clause
Don't use "text" or "count" as column names. Both are legal to use in Postgres, but "count" is a reserved word in standard SQL and a basic function name and "text" is a basic data type. Can lead to confusing errors. I use txt and cnt in my examples, you may want more explicit names.
Added a missing ; and corrected a syntax error in the header. (_max_tokens int), not (int maxTokens) - data type after name.
While working with integer division, it's better to multiply first and divide later, to minimize the rounding error. Or work with numeric or a floating point type. See below.
Alternative
This is what I think your query should actually look like (calculating a relative share per token):
CREATE OR REPLACE FUNCTION word_frequency(_max_tokens int)
RETURNS TABLE (txt text
, abs_cnt bigint
, relative_share numeric)
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY
SELECT t.txt, t.cnt
, round((t.cnt * 100) / (sum(t.cnt) OVER ()), 2) -- AS relative_share
FROM (
SELECT t.txt, count(*) AS cnt
FROM token t
WHERE t.chartype = 'ALPHABETIC'
GROUP BY t.txt
ORDER BY cnt DESC
LIMIT _max_tokens
) t
ORDER BY t.cnt DESC;
END
$func$;
The expression sum(t.cnt) OVER () is a window function. You could use a CTE instead of the subquery. Pretty, but a subquery is typically cheaper in simple cases like this one (mostly before Postgres 12).
A final explicit RETURN statement is not required (but allowed) when working with OUT parameters or RETURNS TABLE (which makes implicit use of OUT parameters).
round() with two parameters only works for numeric types. count() in the subquery produces a bigint result and a sum() over this bigint produces a numeric result, thus we deal with a numeric number automatically and everything just falls into place.
| PostgreSQL | 7,945,932 | 170 |
How do you change the column type and also set that column to not null together?
I am trying:
ALTER TABLE mytable ALTER COLUMN col TYPE character varying(15) SET NOT NULL
This returns an error.
What is the right syntax?
| This should be correct:
ALTER TABLE mytable
ALTER COLUMN col TYPE character varying(15),
ALTER COLUMN col SET NOT NULL
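If the existing type does not cast implicitly to the new one, add a USING clause to the type change (a sketch):
ALTER TABLE mytable
ALTER COLUMN col TYPE character varying(15) USING col::varchar(15),
ALTER COLUMN col SET NOT NULL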
| PostgreSQL | 16,197,236 | 169 |
In my PostgreSQL database I have 2 users: postgres and myuser.
The default user is postgres, but this user has no permission to query my foreign tables and myuser does. How can I check if I'm connected with the right user?
If I'm using the wrong user, how do I change to the right one?
| To get information about current connection from the psql command prompt:
\conninfo
This displays more informations, though.
To change user:
\c - a_new_user
‘-’ substitutes for the current database.
To change database and user:
\c a_new_database a_new_user
The SQL command to get this information:
SELECT current_user;
Examples:
postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432"
postgres=# \c a_new_database a_new_user
psql (12.1 (Ubuntu 12.1-1.pgdg16.04+1), server 9.5.20)
You are now connected to database "a_new_database" as user "a_new_user".
a_new_database=# SELECT current_user;
current_user
--------------
a_new_user
(1 row)
This page list few interesting functions and variables.
https://www.postgresql.org/docs/current/static/functions-info.html
| PostgreSQL | 39,735,141 | 168 |
I want to run a small PostgreSQL database which runs in memory only, for each unit test I write. For instance:
@Before
void setUp() {
String port = runPostgresOnRandomPort();
connectTo("postgres://localhost:"+port+"/in_memory_db");
// ...
}
Ideally I'll have a single postgres executable checked into the version control, which the unit test will use.
Something like HSQL, but for postgres. How can I do that?
Where can I get such a Postgres version? How can I instruct it not to use the disk?
| (Moving my answer from Using in-memory PostgreSQL and generalizing it):
You can't run Pg in-process, in-memory
I can't figure out how to run in-memory Postgres database for testing. Is it possible?
No, it is not possible. PostgreSQL is implemented in C and compiled to platform code. Unlike H2 or Derby you can't just load the jar and fire it up as a throwaway in-memory DB.
Its storage is filesystem based, and it doesn't have any built-in storage abstraction that would allow you to use a purely in-memory datastore. You can point it at a ramdisk, tempfs, or other ephemeral file system storage though.
Unlike SQLite, which is also written in C and compiled to platform code, PostgreSQL can't be loaded in-process either. It requires multiple processes (one per connection) because it's a multiprocessing, not a multithreading, architecture. The multiprocessing requirement means you must launch the postmaster as a standalone process.
Use throwaway containers
Since I originally wrote this the use of containers has become widespread, well understood and easy.
It should be a no-brainer to just configure a throw-away postgres instance in a Docker container for your test uses, then tear it down at the end. You can speed it up with hacks like LD_PRELOADing libeatmydata to disable that pesky "don't corrupt my data horribly on crash" feature ;).
There are a lot of wrappers to automate this for you for any test suite and language or toolchain you would like.
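A minimal sketch of such a throwaway container (image tag and password are placeholders; the tmpfs mount keeps the data directory in RAM):
docker run --rm -d --name pg-test \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  --tmpfs /var/lib/postgresql/data \
  postgres:16
# ... run the test suite against localhost:5432 ...
docker stop pg-test   # --rm removes the container and its data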
Alternative: preconfigure a connection
(Written before easy containerization; no longer recommended)
I suggest simply writing your tests to expect a particular hostname/username/password to work, and having the test harness CREATE DATABASE a throwaway database, then DROP DATABASE at the end of the run. Get the database connection details from a properties file, build target properties, environment variable, etc.
It's safe to use an existing PostgreSQL instance you already have databases you care about in, so long as the user you supply to your unit tests is not a superuser, only a user with CREATEDB rights. At worst you'll create performance issues in the other databases. I prefer to run a completely isolated PostgreSQL install for testing for that reason.
Instead: Launch a throwaway PostgreSQL instance for testing
Alternately, if you're really keen you could have your test harness locate the initdb and postgres binaries, run initdb to create a database, modify pg_hba.conf to trust, run postgres to start it on a random port, create a user, create a DB, and run the tests. You could even bundle the PostgreSQL binaries for multiple architectures in a jar and unpack the ones for the current architecture to a temporary directory before running the tests.
Personally I think that's a major pain that should be avoided; it's way easier to just have a test DB configured. However, it's become a little easier with the advent of include_dir support in postgresql.conf; now you can just append one line, then write a generated config file for all the rest.
Faster testing with PostgreSQL
For more information about how to safely improve the performance of PostgreSQL for testing purposes, see a detailed answer I wrote on this topic earlier: Optimise PostgreSQL for fast testing
H2's PostgreSQL dialect is not a true substitute
Some people instead use the H2 database in PostgreSQL dialect mode to run tests. I think that's almost as bad as the Rails people using SQLite for testing and PostgreSQL for production deployment.
H2 supports some PostgreSQL extensions and emulates the PostgreSQL dialect. However, it's just that - an emulation. You'll find areas where H2 accepts a query but PostgreSQL doesn't, where behaviour differs, etc. You'll also find plenty of places where PostgreSQL supports doing something that H2 just can't - like window functions, at the time of writing.
If you understand the limitations of this approach and your database access is simple, H2 might be OK. But in that case you're probably a better candidate for an ORM that abstracts the database because you're not using its interesting features anyway - and in that case, you don't have to care about database compatibility as much anymore.
Tablespaces are not the answer!
Do not use a tablespace to create an "in-memory" database. Not only is it unnecessary as it won't help performance significantly anyway, but it's also a great way to disrupt access to any other you might care about in the same PostgreSQL install. The 9.4 documentation now contains the following warning:
WARNING
Even though located outside the main PostgreSQL data directory,
tablespaces are an integral part of the database cluster and cannot be
treated as an autonomous collection of data files. They are dependent
on metadata contained in the main data directory, and therefore cannot
be attached to a different database cluster or backed up individually.
Similarly, if you lose a tablespace (file deletion, disk failure,
etc), the database cluster might become unreadable or unable to start.
Placing a tablespace on a temporary file system like a ramdisk risks
the reliability of the entire cluster.
because I noticed too many people were doing this and running into trouble.
(If you've done this you can mkdir the missing tablespace directory to get PostgreSQL to start again, then DROP the missing databases, tables etc. It's better to just not do it.)
| PostgreSQL | 7,872,693 | 168 |
I'm trying to port some old MySQL queries to PostgreSQL, but I'm having trouble with this one:
DELETE FROM logtable ORDER BY timestamp LIMIT 10;
PostgreSQL doesn't allow ordering or limits in its delete syntax, and the table doesn't have a primary key so I can't use a subquery. Additionally, I want to preserve the behavior where the query deletes exactly the given number or records -- for example, if the table contains 30 rows but they all have the same timestamp, I still want to delete 10, although it doesn't matter which 10.
So; how do I delete a fixed number of rows with sorting in PostgreSQL?
Edit: No primary key means there's no log_id column or similar. Ah, the joys of legacy systems!
| You could try using the ctid:
DELETE FROM logtable
WHERE ctid IN (
SELECT ctid
FROM logtable
ORDER BY timestamp
LIMIT 10
)
The ctid is:
The physical location of the row version within its table. Note that although the ctid can be used to locate the row version very quickly, a row's ctid will change if it is updated or moved by VACUUM FULL. Therefore ctid is useless as a long-term row identifier.
There's also oid but that only exists if you specifically ask for it when you create the table.
| PostgreSQL | 5,170,546 | 168 |
I have installed PostgreSQL 9.6.2 on my Windows 8.1, but pgAdmin 4 is not able to contact the local server. I have tried several solutions suggested here on Stack Overflow: I tried to uninstall and reinstall PostgreSQL 9.6.2, tried to modify config.py and config_distro.py and delete the files in the Roaming folder, and tried a standalone pgAdmin 4 installation, but no success. However, on my local machine I am able to access the server using psql.exe and log in as a superuser (the postgres user). Can you please suggest any possible solutions to starting/running pgAdmin 4? Thank you.
| I found the same issue when upgrading to pgAdmin 4 (v1.6). On Windows I found that clearing out the content inside C:\Users\%USERNAME%\AppData\Roaming\pgAdmin\sessions folder fixed the issue for me. I believe it was attempting to use the sessions from the prior version and was failing. I know the question was marked as answered, but downgrading may not always be an option.
Note: AppData\Roaming\pgAdmin is a hidden folder.
| PostgreSQL | 43,211,296 | 167 |
I would like to use psql in the postgres image in order to run some queries on the database.
But unfortunately, when I attach to the postgres container, I get an error that the psql command is not found...
It is a bit of a mystery to me how to run PostgreSQL queries or commands in the container.
How do I run the psql command in the postgres container? (I am new to the Docker world.)
I use Ubuntu as the host machine, and I did not install Postgres on the host machine; I use the postgres container instead.
docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------
yiialkalmi_app_1 /bin/bash Exit 0
yiialkalmi_nginx_1 nginx -g daemon off; Up 443/tcp, 0.0.0.0:80->80/tcp
yiialkalmi_php_1 php-fpm Up 9000/tcp
yiialkalmi_postgres_1 /docker-entrypoint.sh postgres Up 5432/tcp
yiialkalmi_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
Here the containers:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
315567db2dff yiialkalmi_nginx "nginx -g 'daemon off" 18 hours ago Up 3 hours 0.0.0.0:80->80/tcp, 443/tcp yiialkalmi_nginx_1
53577722df71 yiialkalmi_php "php-fpm" 18 hours ago Up 3 hours 9000/tcp yiialkalmi_php_1
40e39bd0329a postgres:latest "/docker-entrypoint.s" 18 hours ago Up 3 hours 5432/tcp yiialkalmi_postgres_1
5cc47477b72d redis:latest "docker-entrypoint.sh" 19 hours ago Up 3 hours 6379/tcp yiialkalmi_redis_1
And this is my docker-compose.yml:
app:
image: ubuntu:16.04
volumes:
- .:/var/www/html
nginx:
build: ./docker/nginx/
ports:
- 80:80
links:
- php
volumes_from:
- app
volumes:
- ./docker/nginx/conf.d:/etc/nginx/conf.d
php:
build: ./docker/php/
expose:
- 9000
links:
- postgres
- redis
volumes_from:
- app
postgres:
image: postgres:latest
volumes:
- /var/lib/postgres
environment:
POSTGRES_DB: project
POSTGRES_USER: project
POSTGRES_PASSWORD: project
redis:
image: redis:latest
expose:
- 6379
| docker exec -it yiialkalmi_postgres_1 psql -U project -W project
Some explanation
docker exec -it
The command to run a command to a running container. The it flags open an interactive tty. Basically it will cause to attach to the terminal. If you wanted to open the bash terminal you can do this
docker exec -it yiialkalmi_postgres_1 bash
yiialkalmi_postgres_1
The container name (you could use the container id instead, which in your case would be 40e39bd0329a )
psql -U project -W project
The command to execute to the running container
-U: the user to connect as (project in this case)
-W: tell psql that the user needs to be prompted for the password at connection time. This parameter is optional. Without this parameter, there is an extra connection attempt which will usually find out that a password is needed, see the PostgreSQL docs.
project: the database you want to connect to. There is no need for the -d parameter to mark it as the dbname when it is the first non-option argument, see the docs: -d "is equivalent to specifying dbname as the first non-option argument on the command line."
These are specified by you here
environment:
POSTGRES_DB: project
POSTGRES_USER: project
POSTGRES_PASSWORD: project
| PostgreSQL | 37,099,564 | 167 |
I am relatively new to PostgreSQL and I know how to pad a number with zeros to the left in SQL Server but I'm struggling to figure this out in PostgreSQL.
I have a number column where the maximum number of digits is 3 and the min is 1: if it's one digit it has two zeros to the left, and if it's 2 digits it has 1, e.g. 001, 058, 123.
In SQL Server I can use the following:
RIGHT('000' + cast([Column1] as varchar(3)), 3) as [Column2]
This does not exist in PostgreSQL. Any help would be appreciated.
| You can use the rpad and lpad functions to pad numbers to the right or to the left, respectively. Note that this does not work directly on numbers, so you'll have to use ::char or ::text to cast them:
SELECT RPAD(numcol::text, 3, '0'), -- Zero-pads to the right up to the length of 3
LPAD(numcol::text, 3, '0') -- Zero-pads to the left up to the length of 3
FROM my_table
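Another common approach uses to_char with a zero-padded format mask; FM suppresses the leading space that to_char otherwise reserves for a sign (a sketch):
SELECT to_char(numcol, 'FM000') FROM my_table -- 58 becomes '058'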
| PostgreSQL | 26,379,446 | 166 |
I want to write a function with PL/pgSQL.
I'm using Postgres Enterprise Manager v3 and using the shell to make a function, but in the shell I must define a return type. If I don't define the return type, I'm not able to create the function.
How can I create a function without a return result, i.e. a function that creates a new table?
| Use RETURNS void like below:
CREATE FUNCTION stamp_user(id int, comment text) RETURNS void AS $$
#variable_conflict use_variable
DECLARE
curtime timestamp := now();
BEGIN
UPDATE users SET last_modified = curtime, comment = comment
WHERE users.id = id;
END;
$$ LANGUAGE plpgsql;
| PostgreSQL | 14,216,716 | 166 |
I am using pgAdmin version 1.14.3. PostgreSQL database version is 9.1.
I got the DB script for table creation but was unable to export the data inside the tables. I could not find any option to export the data in DB script form.
|
Right-click on your table and pick option Backup..
On File Options, set Filepath/Filename and pick PLAIN for Format
Ignore Dump Options #1 tab
In Dump Options #2 tab, check USE INSERT COMMANDS
In Dump Options #2 tab, check Use Column Inserts if you want column names in your inserts.
Hit Backup button
Edit: In case you are using pgAdmin on a remote server: Once the UI announces the backup was successful, you might want to download the data. For this, follow these steps:
Click on Tools (in the menu)
Select Storage Manager
Select the file that was just created
Click on the small "Download File" button
| PostgreSQL | 11,257,132 | 166 |
I have a table software and columns in it as dev_cost, sell_cost. If dev_cost is 16000 and sell_cost is 7500, how do I find the quantity of software to be sold in order to recover the dev_cost?
I have queried as below:
select dev_cost / sell_cost from software ;
It is returning 2 as the answer. But we need to get 3, right?
What would be the query for that?
| Your columns have integer types, and integer division truncates the result towards zero. To get an accurate result, you'll need to cast at least one of the values to float or decimal:
select cast(dev_cost as decimal) / sell_cost from software ;
or just:
select dev_cost::decimal / sell_cost from software ;
You can then round the result up to the nearest integer using the ceil() function:
select ceil(dev_cost::decimal / sell_cost) from software ;
(See demo on SQLFiddle.)
| PostgreSQL | 34,504,497 | 165 |
Suppose I have the following data:
id date another_info
1 2014-02-01 kjkj
1 2014-03-11 ajskj
1 2014-05-13 kgfd
2 2014-02-01 SADA
3 2014-02-01 sfdg
3 2014-06-12 fdsA
For each id, I want to extract the latest information:
id date another_info
1 2014-05-13 kgfd
2 2014-02-01 SADA
3 2014-06-12 fdsA
How could I manage that?
| The most efficient way is to use Postgres' distinct on operator
select distinct on (id) id, date, another_info
from the_table
order by id, date desc;
If you want a solution that works across databases (but is less efficient) you can use a window function:
select id, date, another_info
from (
select id, date, another_info,
row_number() over (partition by id order by date desc) as rn
from the_table
) t
where rn = 1
order by id;
The solution with a window function is in most cases faster than using a sub-query.
| PostgreSQL | 28,085,468 | 165 |
I'm trying to import some data into my database. So I've created a temporary table,
create temporary table tmp(pc varchar(10), lat decimal(18,12), lon decimal(18,12), city varchar(100), prov varchar(2));
And now I'm trying to import the data,
copy tmp from '/home/mark/Desktop/Canada.csv' delimiter ',' csv
But then I get the error,
ERROR: invalid byte sequence for encoding "UTF8": 0xc92c
How do I fix that? Do I need to change the encoding of my entire database (if so, how?) or can I change just the encoding of my tmp table? Or should I attempt to change the encoding of the file?
| If you need to store UTF8 data in your database, you need a database that accepts UTF8. You can check the encoding of your database in pgAdmin. Just right-click the database, and select "Properties".
But that error seems to be telling you there's some invalid UTF8 data in your source file. That means that the copy utility has detected or guessed that you're feeding it a UTF8 file.
If you're running under some variant of Unix, you can check the encoding (more or less) with the file utility.
$ file yourfilename
yourfilename: UTF-8 Unicode English text
(I think that will work on Macs in the terminal, too.) Not sure how to do that under Windows.
If you use that same utility on a file that came from Windows systems (that is, a file that's not encoded in UTF8), it will probably show something like this:
$ file yourfilename
yourfilename: ASCII text, with CRLF line terminators
If things stay weird, you might try to convert your input data to a known encoding, to change your client's encoding, or both. (We're really stretching the limits of my knowledge about encodings.)
You can use the iconv utility to change encoding of the input data.
iconv -f original_charset -t utf-8 originalfile > newfile
You can change psql (the client) encoding following the instructions on Character Set Support. On that page, search for the phrase "To enable automatic character set conversion".
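Since Postgres 9.1, you can also declare the file's encoding right in the COPY command - assuming here that the file really is Windows-1252 (verify with file or iconv first):

copy tmp from '/home/mark/Desktop/Canada.csv' with (format csv, delimiter ',', encoding 'WIN1252');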
| PostgreSQL | 4,867,272 | 165 |
I made a backup of a database on a different server, which has a different owner role than I need, with this command:
pg_dump -Fc db_name -f db_name.dump
Then I copied the backup to another server where I need to restore the database, but that server does not have the owner that was used for the original database. Let's say the database has owner owner1, but on the other server I only have owner2, and I need to restore that database and change the owner.
What I did on another server when restoring:
createdb -p 5433 -T template0 db_name
pg_restore -p 5433 --role=owner2 -d db_name db_name.dump
But when restore is run I get these errors:
pg_restore: [archiver (db)] could not execute query: ERROR: role "owner1" does not exist
How can I specify it so it would change owner? Or is it impossible?
| You should use the --no-owner option, this stops pg_restore trying to set the ownership of the objects to the original owner. Instead the objects will be owned by the user specified by --role
createdb -p 5433 -T template0 db_name
pg_restore -p 5433 --no-owner --role=owner2 -d db_name db_name.dump
pg_restore doc
| PostgreSQL | 31,469,008 | 164 |
I have been seeing quite a large variation in response times regarding LIKE queries to a particular table in my database. Sometimes I will get results within 200-400 ms (very acceptable) but other times it might take as much as 30 seconds to return results.
I understand that LIKE queries are very resource intensive but I just don't understand why there would be such a large difference in response times. I have built a btree index on the owner1 field but I don't think it helps with LIKE queries. Anyone have any ideas?
Sample SQL:
SELECT gid, owner1 FROM parcels
WHERE owner1 ILIKE '%someones name%' LIMIT 10
I've also tried:
SELECT gid, owner1 FROM parcels
WHERE lower(owner1) LIKE lower('%someones name%') LIMIT 10
And:
SELECT gid, owner1 FROM parcels
WHERE lower(owner1) LIKE lower('someones name%') LIMIT 10
With similar results.
Table Row Count: about 95,000.
| FTS does not support LIKE
The previously accepted answer was incorrect. Full Text Search with its full text indexes is not for the LIKE operator at all, it has its own operators and doesn't work for arbitrary strings. It operates on words based on dictionaries and stemming. It does support prefix matching for words, but not with the LIKE operator:
Get partial match from GIN indexed TSVECTOR column
Trigram index for LIKE
Install the additional module pg_trgm which provides operator classes for GIN and GiST trigram indexes to support all LIKE and ILIKE patterns, not just left-anchored ones:
Example index:
CREATE INDEX tbl_col_gin_trgm_idx ON tbl USING gin (col gin_trgm_ops);
Or:
CREATE INDEX tbl_col_gist_trgm_idx ON tbl USING gist (col gist_trgm_ops);
Difference between GiST and GIN index
Example query:
SELECT * FROM tbl WHERE col LIKE 'foo%';
SELECT * FROM tbl WHERE col LIKE '%foo%'; -- works with leading wildcard, too
SELECT * FROM tbl WHERE col ILIKE '%foo%'; -- works case insensitively as well
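To confirm the planner actually uses the trigram index, check the query plan (table and column names taken from the examples above):

EXPLAIN ANALYZE
SELECT * FROM tbl WHERE col ILIKE '%foo%';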
Trigrams? What about shorter strings?
Words with less than 3 letters in indexed values still work. The manual:
Each word is considered to have two spaces prefixed and one space
suffixed when determining the set of trigrams contained in the string.
And search patterns with less than 3 letters? The manual:
For both LIKE and regular-expression searches, keep in mind that a
pattern with no extractable trigrams will degenerate to a full-index scan.
Meaning, that index / bitmap index scans still work (query plans for prepared statement won't break), it just won't buy you better performance. Typically no big loss, since 1- or 2-letter strings are hardly selective (more than a few percent of the underlying table matches) and index support would not improve performance (much) to begin with, because a full table scan is faster.
Prefix matching
Search patterns with no leading wildcard: col LIKE 'foo%'.
^@ operator / starts_with() function
Quoting the release notes of Postgres 11:
Add prefix-match operator text ^@ text, which is supported by SP-GiST
(Ildus Kurbangaliev)
This is similar to using var LIKE 'word%' with a btree index, but it
is more efficient.
Example query:
SELECT * FROM tbl WHERE col ^@ 'foo'; -- no added wildcard
But the potential of operator and function stays limited until planner support is improved in Postgres 15 and the ^@ operator is documented properly. The release notes:
Allow the ^@ starts-with operator and the starts_with() function to
use btree indexes if using the C collation (Tom Lane)
Previously these could only use SP-GiST indexes.
COLLATE "C"
Since Postgres 9.1, an index with COLLATE "C" provides the same functionality as the operator class text_pattern_ops described below. See:
Is there a difference between text_pattern_ops and COLLATE "C"?
Example index:
CREATE INDEX tbl_col_text_collate_c_idx ON tbl(col COLLATE "C");
text_pattern_ops (original answer)
For just left-anchored patterns (no leading wildcard) you get the optimum with a suitable operator class for a btree index: text_pattern_ops or varchar_pattern_ops. Both built-in features of standard Postgres, no additional module needed. Similar performance, but much smaller index.
Example index:
CREATE INDEX tbl_col_text_pattern_ops_idx ON tbl(col text_pattern_ops);
Example query:
SELECT * FROM tbl WHERE col LIKE 'foo%'; -- no leading wildcard
Or, if you should be running your database with the 'C' locale (effectively no locale), then everything is sorted according to byte order anyway and a plain btree index with default operator class does the job.
Further reading
Pattern matching with LIKE, SIMILAR TO or regular expressions
How is LIKE implemented?
Finding similar strings with PostgreSQL quickly
| PostgreSQL | 1,566,717 | 164 |
I want to drop 200 columns in my table in PostgreSQL. I tried:
ALTER TABLE my_table
DROP COLUMN col1, col2
But I get an error like this:
ERROR: syntax error at or near "col2"
| As per the docs, you can do this:
ALTER TABLE table DROP COLUMN col1, DROP COLUMN col2;
(You may need to wrap some of your column names in " quotes if they happen to be keywords.)
| PostgreSQL | 13,474,537 | 163 |
I believe the title is self-explanatory. How do you create the table structure in PostgreSQL to make a many-to-many relationship.
My example:
Product(name, price);
Bill(name, date, Products);
| The SQL DDL (data definition language) statements could look like this:
CREATE TABLE product (
product_id serial PRIMARY KEY -- implicit primary key constraint
, product text NOT NULL
, price numeric NOT NULL DEFAULT 0
);
CREATE TABLE bill (
bill_id serial PRIMARY KEY
, bill text NOT NULL
, billdate date NOT NULL DEFAULT CURRENT_DATE
);
CREATE TABLE bill_product (
bill_id int REFERENCES bill (bill_id) ON UPDATE CASCADE ON DELETE CASCADE
, product_id int REFERENCES product (product_id) ON UPDATE CASCADE
, amount numeric NOT NULL DEFAULT 1
, CONSTRAINT bill_product_pkey PRIMARY KEY (bill_id, product_id) -- explicit pk
);
I made a few adjustments:
The n:m relationship is normally implemented by a separate table - bill_product in this case.
I added serial columns as surrogate primary keys. In Postgres 10 or later consider an IDENTITY column instead. See:
Safely rename tables using serial primary key columns
Auto increment table column
https://www.2ndquadrant.com/en/blog/postgresql-10-identity-columns/
I highly recommend that, because the name of a product is hardly unique (not a good "natural key"). Also, enforcing uniqueness and referencing the column in foreign keys is typically cheaper with a 4-byte integer (or even an 8-byte bigint) than with a string stored as text or varchar.
Don't use names of basic data types like date as identifiers. While this is possible, it is bad style and leads to confusing errors and error messages. Use legal, lower case, unquoted identifiers. Never use reserved words and avoid double-quoted mixed case identifiers if you can.
"name" is not a good name. I renamed the column of the table product to be product (or product_name or similar). That is a better naming convention. Otherwise, when you join a couple of tables in a query - which you do a lot in a relational database - you end up with multiple columns named "name" and have to use column aliases to sort out the mess. That's not helpful. Another widespread anti-pattern would be just "id" as column name.
I am not sure what the name of a bill would be. bill_id will probably suffice in this case.
price is of data type numeric to store fractional numbers precisely as entered (arbitrary precision type instead of floating point type). If you deal with whole numbers exclusively, make that integer. For example, you could save prices as Cents.
The amount ("Products" in your question) goes into the linking table bill_product and is of type numeric as well. Again, integer if you deal with whole numbers exclusively.
You see the foreign keys in bill_product? I created both to cascade changes: ON UPDATE CASCADE. If a product_id or bill_id should change, the change is cascaded to all depending entries in bill_product and nothing breaks. Those are just references without significance of their own.
I also used ON DELETE CASCADE for bill_id: If a bill gets deleted, its details die with it.
Not so for products: You don't want to delete a product that's used in a bill. Postgres will throw an error if you attempt this. You would add another column to product to mark obsolete rows ("soft-delete") instead.
All columns in this basic example end up to be NOT NULL, so NULL values are not allowed. (Yes, all columns - primary key columns are defined UNIQUE NOT NULL automatically.) That's because NULL values wouldn't make sense in any of the columns. It makes a beginner's life easier. But you won't get away so easily, you need to understand NULL handling anyway. Additional columns might allow NULL values, functions and joins can introduce NULL values in queries etc.
Read the chapter on CREATE TABLE in the manual.
Primary keys are implemented with a unique index on the key columns, that makes queries with conditions on the PK column(s) fast. However, the sequence of key columns is relevant in multicolumn keys. Since the PK on bill_product is on (bill_id, product_id) in my example, you may want to add another index on just product_id or (product_id, bill_id) if you have queries looking for a given product_id and no bill_id. See:
PostgreSQL composite primary key
Is a composite index also good for queries on the first field?
Working of indexes in PostgreSQL
Read the chapter on indexes in the manual.
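To illustrate how the three tables play together, a query retrieving all line items of one bill (using only the tables and columns defined above; bill_id = 1 is an arbitrary example value):

SELECT b.bill, b.billdate, p.product, p.price, bp.amount
FROM   bill b
JOIN   bill_product bp USING (bill_id)
JOIN   product p USING (product_id)
WHERE  b.bill_id = 1;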
| PostgreSQL | 9,789,736 | 163 |
JSON value may consist of a string value. eg.:
postgres=# SELECT to_json('Some "text"'::TEXT);
to_json
-----------------
"Some \"text\""
How can I extract that string as a Postgres text value?
::TEXT doesn't work. It returns quoted json, not the original string:
postgres=# SELECT to_json('Some "text"'::TEXT)::TEXT;
to_json
-----------------
"Some \"text\""
P.S. I'm using PostgreSQL 9.3
| In 9.4.4 using the #>> operator works for me:
select to_json('test'::text) #>> '{}';
To use with a table column:
select jsoncol #>> '{}' from mytable;
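If the string sits under a key inside a JSON object instead, the ->> operator extracts it as text directly:

select '{"name": "Some \"text\""}'::json ->> 'name';  -- returns: Some "text"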
| PostgreSQL | 27,215,216 | 162 |
What is the default directory where PostgreSQL will keep all databases on Linux?
| The "directory where postgresql will keep all databases" (and configuration) is called "data directory" and corresponds to what PostgreSQL calls (a little confusingly) a "database cluster", which is not related to distributed computing, it just means a group of databases and related objects managed by a PostgreSQL server.
The location of the data directory depends on the distribution. If you install from source, the default is /usr/local/pgsql/data:
In file system terms, a database
cluster will be a single directory
under which all data will be stored.
We call this the data directory or
data area. It is completely up to you
where you choose to store your data.
There is no default, although
locations such as
/usr/local/pgsql/data or
/var/lib/pgsql/data are popular.
(ref)
Besides, an instance of a running PostgreSQL server is associated to one cluster; the location of its data directory can be passed to the server daemon ("postgres") in the -D command line option, or by the PGDATA environment variable (usually in the scope of the running user, typically postgres). You can usually see the running server with something like this:
[root@server1 ~]# ps auxw | grep postgres | grep -- -D
postgres 1535 0.0 0.1 39768 1584 ? S May17 0:23 /usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data
Note that it is possible, though not very frequent, to run two instances of the same PostgreSQL server (same binaries, different processes) that serve different "clusters" (data directories). Of course, each instance would listen on its own TCP/IP port.
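If you can connect to the running server, you can also simply ask it, from psql or any other client:

SHOW data_directory;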
| PostgreSQL | 3,004,523 | 162 |
To have an integer auto-numbering primary key on a table, you can use SERIAL
But I noticed the table information_schema.columns has a number of identity_ fields, and indeed, you could create a column with a GENERATED specifier...
What's the difference? Were they introduced with different PostgreSQL versions? Is one preferred over the other?
| SERIAL is the "old" implementation of auto-generated unique values that has been part of Postgres for ages. However that is not part of the SQL standard.
To be more compliant with the SQL standard, Postgres 10 introduced the syntax using GENERATED AS IDENTITY.
The underlying implementation is still based on a sequence, the definition now complies with the SQL standard. One thing that this new syntax allows is to prevent an accidental override of the value.
Consider the following tables:
CREATE TABLE t1 (id SERIAL PRIMARY KEY);
CREATE TABLE t2 (id INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY);
Now when you run:
INSERT INTO t1 (id) VALUES (1);
The underlying sequence and the values in the table are not in sync any more. If you run another
INSERT INTO t1 DEFAULT VALUES;
You will get an error because the sequence was not advanced by the first insert, and now tries to insert the value 1 again.
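One way to re-sync the sequence after such an accident is setval() with the current maximum - a sketch for the table t1 above (assumes the table is not empty):

SELECT setval(pg_get_serial_sequence('t1', 'id'), max(id)) FROM t1;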
With the second table however,
INSERT INTO t2 (id) VALUES (1);
Results in:
ERROR: cannot insert into column "id"
Detail: Column "id" is an identity column defined as GENERATED ALWAYS.
So you can't accidentally "forget" the sequence usage. You can still force this, using the OVERRIDING SYSTEM VALUE option:
INSERT INTO t2 (id) OVERRIDING SYSTEM VALUE VALUES (1);
which still leaves you with a sequence that is out-of-sync with the values in the table, but at least you were made aware of that.
IDENTITY columns also have another advantage: they also minimize the grants you need to give to a role in order to allow inserts.
While a table using a SERIAL column requires the INSERT privilege on the table and the USAGE privilege on the underlying sequence this is not needed for tables using an IDENTITY columns. Granting the INSERT privilege is enough.
It is recommended to use the new identity syntax rather than serial
| PostgreSQL | 55,300,370 | 161 |
I have PostgreSQL 9.3 and 9.4 installed on my Linux Mint machine.
How can I restart PostgreSQL 9.4?
A method to restart both versions together is also fine.
| Try this as root (maybe you can use sudo or su):
/etc/init.d/postgresql restart
Without any argument the script also gives you a hint on how to restart a specific version
[Uqbar@Feynman ~] /etc/init.d/postgresql
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ...]
Similarly, in case you have it, you can also use the service tool:
[Uqbar@Feynman ~] service postgresql
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ...]
Please, pay attention to the optional [version ...] trailing argument.
That's meant to allow you, the user, to act on a specific version, in case you were running multiple ones. So you can restart version X while keeping version Y and Z untouched and running.
Finally, in case you are running systemd, then you can use systemctl like this:
[Uqbar@Feynman ~] systemctl status postgresql
● postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-11-14 12:33:35 CET; 7min ago
...
You can replace status with stop, start or restart as well as other actions. Please refer to the documentation for full details.
In order to operate on multiple concurrent versions, the syntax is slightly different. For example to stop v12 and reload v13 you can run:
systemctl stop postgresql-12.service
systemctl reload postgresql-13.service
Thanks to @Jojo for pointing me to this very one.
Finally, keep in mind that root permissions may be needed for non-informational tasks, as in the other cases seen earlier.
| PostgreSQL | 34,918,025 | 161 |
Let's say you have a SELECT id from table query (the real case is a complex query) that returns several results.
The problem is how to get all the returned ids in a single row, comma-separated.
| SELECT string_agg(id::text, ',') FROM table
Requires PostgreSQL 9.0 but that's not a problem.
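If the order of the ids matters, add an ORDER BY inside the aggregate (also available since 9.0):

SELECT string_agg(id::text, ',' ORDER BY id) FROM table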
| PostgreSQL | 11,899,024 | 161 |
I have a database with hundreds of tables; what I need to do is export specified tables and INSERT statements for the data into one SQL file.
The only statement I know can achieve this is
pg_dump -D -a -t zones_seq interway > /tmp/zones_seq.sql
Should I run this statement for each and every table, or is there a way to run a similar statement to export all selected tables into one big SQL file? The pg_dump above does not export the table schema, only inserts; I need both.
Any help will be appreciated.
| Right from the manual: "Multiple tables can be selected by writing multiple -t switches"
So you need to list all of your tables
pg_dump --column-inserts -a -t zones_seq -t interway -t table_3 ... > /tmp/zones_seq.sql
Note that if you have several table with the same prefix (or suffix) you can also use wildcards to select them with the -t parameter:
"Also, the table parameter is interpreted as a pattern according to the same rules used by psql's \d commands"
| PostgreSQL | 7,359,827 | 161 |
I want to compute the cumulative (running) total of the amount field and insert it from staging into the target table. My staging structure is something like this:
ea_month id amount ea_year circle_id
April 92570 1000 2014 1
April 92571 3000 2014 2
April 92572 2000 2014 3
March 92573 3000 2014 1
March 92574 2500 2014 2
March 92575 3750 2014 3
February 92576 2000 2014 1
February 92577 2500 2014 2
February 92578 1450 2014 3
I want my target table to look something like this:
ea_month id amount ea_year circle_id cum_amt
February 92576 1000 2014 1 1000
March 92573 3000 2014 1 4000
April 92570 2000 2014 1 6000
February 92577 3000 2014 2 3000
March 92574 2500 2014 2 5500
April 92571 3750 2014 2 9250
February 92578 2000 2014 3 2000
March 92575 2500 2014 3 4500
April 92572 1450 2014 3 5950
I am really confused about how to go about achieving this result. I want to achieve this result using PostgreSQL.
Can anyone suggest how to go about achieving this result-set?
| Basically, you need a window function. That's a standard feature nowadays. In addition to genuine window functions, you can use any aggregate function as window function in Postgres by appending an OVER clause.
The special difficulty here is to get partitions and sort order right:
SELECT ea_month, id, amount, ea_year, circle_id
, sum(amount) OVER (PARTITION BY circle_id
ORDER BY ea_year, ea_month) AS cum_amt
FROM tbl
ORDER BY circle_id, ea_year, ea_month;
And no GROUP BY.
The sum for each row is calculated from the first row in the partition to the current row - or quoting the manual to be precise:
The default framing option is RANGE UNBOUNDED PRECEDING, which is
the same as RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. With
ORDER BY, this sets the frame to be all rows from the partition
start up through the current row's last ORDER BY peer.
Bold emphasis mine.
This is the cumulative (or "running") sum you are after.
In default RANGE mode, rows with the same rank in the sort order are "peers" - same (circle_id, ea_year, ea_month) in this query. All of those show the same running sum with all peers added to the sum. But I assume your table is UNIQUE on (circle_id, ea_year, ea_month), then the sort order is deterministic and no row has peers. (And you might as well use the cheaper ROWS mode.)
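Spelled out, the cheaper explicit ROWS frame would look like this (same result as the query above as long as the sort order is unique):

sum(amount) OVER (PARTITION BY circle_id
                  ORDER BY ea_year, ea_month
                  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cum_amt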
Postgres 11 added tools to include / exclude peers with the new frame_exclusion options. See:
Aggregating all values not in the same group
Now, ORDER BY ... ea_month won't work with strings for month names. Postgres would sort alphabetically according to the locale setting.
If you have actual date values stored in your table you can sort properly. If not, I suggest to replace ea_year and ea_month with a single column the_date of type date in your table.
Transform what you have with to_date():
to_date(ea_year || ea_month , 'YYYYMonth') AS the_date
For display, you can get original strings with to_char():
to_char(the_date, 'Month') AS ea_month
to_char(the_date, 'YYYY') AS ea_year
While stuck with the unfortunate design, this will work:
SELECT ea_month, id, amount, ea_year, circle_id
, sum(amount) OVER (PARTITION BY circle_id ORDER BY the_date) AS cum_amt
FROM (SELECT *, to_date(ea_year || ea_month, 'YYYYMonth') AS the_date FROM tbl) sub
ORDER BY circle_id, the_date;
| PostgreSQL | 22,841,206 | 160 |
I'm using PostgreSQL 9.1. I have the column name of a table. Is it possible to find the table(s) that has/have this column? If so, how?
| You can also do
select table_name from information_schema.columns where column_name = 'your_column_name'
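If you work with multiple schemas, include the schema in the output to disambiguate:

select table_schema, table_name from information_schema.columns where column_name = 'your_column_name'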
| PostgreSQL | 18,508,422 | 159 |
In Microsoft SQL Server, it's possible to specify an "accent insensitive" collation (for a database, table or column), which means that it's possible for a query like
SELECT * FROM users WHERE name LIKE 'João'
to find a row with a Joao name.
I know that it's possible to strip accents from strings in PostgreSQL using the unaccent_string contrib function, but I'm wondering if PostgreSQL supports these "accent insensitive" collations so the SELECT above would work.
| Update for Postgres 12 or later
Postgres 12 adds nondeterministic ICU collations, enabling case-insensitive and accent-insensitive grouping and ordering. The manual:
ICU locales can only be used if support for ICU was configured when PostgreSQL was built.
If so, this works for you:
CREATE COLLATION ignore_accent (provider = icu, locale = 'und-u-ks-level1-kc-true', deterministic = false);
CREATE INDEX users_name_ignore_accent_idx ON users(name COLLATE ignore_accent);
SELECT * FROM users WHERE name = 'João' COLLATE ignore_accent;
fiddle
Read the manual for details.
This blog post by Laurenz Albe may help to understand.
But ICU collations also have drawbacks. The manual:
[...] they also have some drawbacks. Foremost, their use leads to a
performance penalty. Note, in particular, that B-tree cannot use
deduplication with indexes that use a nondeterministic collation.
Also, certain operations are not possible with nondeterministic
collations, such as pattern matching operations. Therefore, they
should be used only in cases where they are specifically wanted.
My "legacy" solution is typically still superior:
For all versions
Use the unaccent module for that - which is completely different from what you are linking to.
unaccent is a text search dictionary that removes accents (diacritic
signs) from lexemes.
Install once per database with:
CREATE EXTENSION unaccent;
If you get an error like:
ERROR: could not open extension control file
"/usr/share/postgresql/<version>/extension/unaccent.control": No such file or directory
Install the contrib package on your database server like instructed in this related answer:
Error when creating unaccent extension on PostgreSQL
Among other things, it provides the function unaccent() you can use with your example (where LIKE seems not needed).
SELECT *
FROM users
WHERE unaccent(name) = unaccent('João');
Index
To use an index for that kind of query, create an index on the expression. However, Postgres only accepts IMMUTABLE functions for indexes. If a function can return a different result for the same input, the index could silently break.
unaccent() only STABLE not IMMUTABLE
Unfortunately, unaccent() is only STABLE, not IMMUTABLE. According to this thread on pgsql-bugs, this is due to three reasons:
It depends on the behavior of a dictionary.
There is no hard-wired connection to this dictionary.
It therefore also depends on the current search_path, which can change easily.
Some tutorials on the web instruct to just alter the function volatility to IMMUTABLE. This brute-force method can break under certain conditions.
Others suggest a simple IMMUTABLE wrapper function (like I did myself in the past).
There is an ongoing debate whether to make the variant with two parameters IMMUTABLE which declares the used dictionary explicitly. Read here or here.
Best for now
This approach is more efficient than other solutions floating around, and safer.
Create an IMMUTABLE SQL wrapper function executing the two-parameter form with hard-wired, schema-qualified function and dictionary.
Since nesting a non-immutable function would disable function inlining, base it on a copy of the C-function, (fake) declared IMMUTABLE as well. Its only purpose is to be used in the SQL function wrapper. Not meant to be used on its own.
The sophistication is needed as there is no way to hard-wire the dictionary in the declaration of the C function. (Would require to hack the C code itself.) The SQL wrapper function does that and allows both function inlining and expression indexes.
CREATE OR REPLACE FUNCTION public.immutable_unaccent(regdictionary, text)
RETURNS text
LANGUAGE c IMMUTABLE PARALLEL SAFE STRICT AS
'$libdir/unaccent', 'unaccent_dict';
Then:
CREATE OR REPLACE FUNCTION public.f_unaccent(text)
RETURNS text
LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT AS
$func$
SELECT public.immutable_unaccent(regdictionary 'public.unaccent', $1)
$func$;
In Postgres 14 or later, an SQL-standard function is slightly cheaper, yet. Using the short form for a single statement:
CREATE OR REPLACE FUNCTION public.f_unaccent(text)
RETURNS text
LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT
RETURN public.immutable_unaccent(regdictionary 'public.unaccent', $1);
See:
What does BEGIN ATOMIC mean in a PostgreSQL SQL function / procedure?
Drop PARALLEL SAFE from both functions for Postgres 9.5 or older.
public being the schema where you installed the extension (public is the default).
The explicit type declaration (regdictionary) defends against hypothetical attacks with overloaded variants of the function by malicious users.
Previously, I advocated a wrapper function based on the STABLE function unaccent() shipped with the unaccent module. That disabled function inlining. This version executes ten times faster than the simple wrapper function I had here earlier.
And that was already twice as fast as the first version which added SET search_path = public, pg_temp to the function - until I discovered that the dictionary can be schema-qualified, too. Still (Postgres 12) not too obvious from documentation.
If you lack the necessary privileges to create C functions, you are back to the second best implementation: An IMMUTABLE function wrapper around the STABLE unaccent() function provided by the module:
CREATE OR REPLACE FUNCTION public.f_unaccent(text)
RETURNS text
LANGUAGE sql IMMUTABLE PARALLEL SAFE STRICT AS
$func$
SELECT public.unaccent('public.unaccent', $1) -- schema-qualify function and dictionary
$func$;
Finally, the expression index to make queries fast:
CREATE INDEX users_unaccent_name_idx ON users(public.f_unaccent(name));
Remember to recreate indexes involving this function after any change to function or dictionary, like an in-place major release upgrade that would not recreate indexes. Recent major releases all had updates for the unaccent module.
Adapt queries to match the index (so the query planner will use it):
SELECT * FROM users
WHERE f_unaccent(name) = f_unaccent('João');
We don't need the function in the expression to the right of the operator. There we can also supply unaccented strings like 'Joao' directly.
The faster function does not translate to much faster queries using the expression index. Index look-ups operate on pre-computed values and are very fast either way. But index maintenance and queries not using the index benefit. And access methods like bitmap index scans may have to recheck values in the heap (the main relation), which involves executing the underlying function. See:
"Recheck Cond:" line in query plans with a bitmap index scan
Security for client programs has been tightened with Postgres 10.3 / 9.6.8 etc. You need to schema-qualify function and dictionary name as demonstrated when used in any indexes. See:
'text search dictionary “unaccent” does not exist' entries in postgres log, supposedly during automatic analyze
Ligatures
In Postgres 9.5 or older ligatures like 'Œ' or 'ß' have to be expanded manually (if you need that), since unaccent() always substitutes a single letter:
SELECT unaccent('Œ Æ œ æ ß');
unaccent
----------
E A e a S
You will love this update to unaccent in Postgres 9.6:
Extend contrib/unaccent's standard unaccent.rules file to handle all
diacritics known to Unicode, and expand ligatures correctly (Thomas
Munro, Léonard Benedetti)
Bold emphasis mine. Now we get:
SELECT unaccent('Œ Æ œ æ ß');
unaccent
----------
OE AE oe ae ss
Pattern matching
For LIKE or ILIKE with arbitrary patterns, combine this with the module pg_trgm in PostgreSQL 9.1 or later. Create a trigram GIN (typically preferable) or GIST expression index. Example for GIN:
CREATE INDEX users_unaccent_name_trgm_idx ON users
USING gin (f_unaccent(name) gin_trgm_ops);
Can be used for queries like:
SELECT * FROM users
WHERE f_unaccent(name) LIKE ('%' || f_unaccent('João') || '%');
GIN and GIST indexes are more expensive (to maintain) than plain B-tree:
Difference between GiST and GIN index
There are simpler solutions for just left-anchored patterns. More about pattern matching and performance:
Pattern matching with LIKE, SIMILAR TO or regular expressions
pg_trgm also provides useful operators for "similarity" (%) and "distance" (<->).
Trigram indexes also support simple regular expressions with ~ et al. and case insensitive pattern matching with ILIKE:
PostgreSQL accent + case insensitive search
| PostgreSQL | 11,005,036 | 159 |
I am using postgresql with django in my project. I've got them in different containers, and the problem is that I need to wait for postgres before running django. Currently I am doing it with sleep 5 in the command.sh file for the django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the postgres protocol.
Is there a way to do it?
| I've spent some hours investigating this problem and I got a solution.
Docker's depends_on only considers service startup, not readiness, when deciding to run another service. The problem happens because as soon as db is started, the app service tries to connect to your db, but it is not yet ready to receive connections. So you can check the db health status in the app service and wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
my-app:
build: .
command: su -c "python manage.py runserver 0.0.0.0:8000"
ports:
- "8000:8000"
depends_on:
db:
condition: service_healthy
links:
- db
volumes:
- .:/app_directory
db:
image: postgres:10.5
ports:
- "5432:5432"
volumes:
- database:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 5s
timeout: 5s
retries: 5
volumes:
database:
In this case it's not necessary to create a .sh file.
| PostgreSQL | 35,069,027 | 158 |
Being completely new to PL/pgSQL, what is the meaning of double dollar signs in this function:
CREATE OR REPLACE FUNCTION check_phone_number(text)
RETURNS boolean AS $$
BEGIN
IF NOT $1 ~ e'^\\+\\d{3}\\ \\d{3} \\d{3} \\d{3}$' THEN
RAISE EXCEPTION 'Wrongly formatted string "%". Expected format is +999 999';
END IF;
RETURN true;
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;
I'm guessing that, in RETURNS boolean AS $$, $$ is a placeholder.
The last line is a bit of a mystery: $$ LANGUAGE plpgsql STRICT IMMUTABLE;
By the way, what does the last line mean?
| These dollar signs ($$) are used for dollar quoting, which is in no way specific to function definitions. It can be used to replace single quotes enclosing string literals (constants) anywhere in SQL scripts.
The body of a function happens to be such a string literal. Dollar quoting is a PostgreSQL-specific substitute for single quotes to avoid escaping of nested single quotes (recursively). You can enclose the function body in single-quotes just as well. But then you have to escape all nested single quotes:
CREATE OR REPLACE FUNCTION check_phone_number(text)
RETURNS boolean
LANGUAGE plpgsql STRICT IMMUTABLE AS
'
BEGIN
IF NOT $1 ~ e''^\\+\\d{3}\\ \\d{3} \\d{3} \\d{3}$'' THEN
RAISE EXCEPTION ''Malformed string: "%". Expected format is +999 999'', $1;
END IF;
RETURN true;
END
';
(Added the missing parameter for RAISE, btw.)
This isn't such a good idea. Use dollar-quoting instead. More specifically, also put a (meaningful) token inside the $$ to avoid confusion with nested quotes in the function body. A common case, actually.
CREATE OR REPLACE FUNCTION check_phone_number(text)
RETURNS boolean
LANGUAGE plpgsql STRICT IMMUTABLE AS
$func$
BEGIN
...
END
$func$;
See:
Insert text with single quotes in PostgreSQL
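To see why a meaningful token helps, here is a sketch with a plain dollar-quoted string nested inside the dollar-quoted function body (f_greet is just a made-up name) - the apostrophe needs no escaping:

CREATE OR REPLACE FUNCTION f_greet(text)
  RETURNS text
  LANGUAGE sql IMMUTABLE AS
$func$
SELECT $$It's a pleasure, $$ || $1
$func$;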
To your second question:
Read the most excellent manual on CREATE FUNCTION to understand the last line of your example.
| PostgreSQL | 12,144,284 | 158 |
I know that EXPIREAT in Redis is used to specify when a key will expire. My problem though is that it takes an absolute UNIX timestamp. I'm having a hard time figuring out what I should set as the argument if I want the key to expire at the end of the day.
This is how I set my key:
client.set(key, body);
So to set the expire at:
client.expireat(key, ???);
Any ideas? I'm using this with nodejs and sailsjs
Thanks!
| If you want to expire it 24 hrs later
client.expireat(key, parseInt((+new Date)/1000) + 86400);
Or if you want it to expire exactly at the end of today, you can use .setHours on a new Date() object to get the time at the end of the day, and use that.
var todayEnd = new Date().setHours(23, 59, 59, 999);
client.expireat(key, parseInt(todayEnd/1000));
| Redis | 30,565,571 | 38 |
I'm using Redis in my application, both for Sidekiq queues, and for model caching.
What is the best way to have a Redis connection available to my models, considering that the models that will be hitting Redis will be called both from my Web application (ran via Puma), and from background jobs inside Sidekiq?
I'm currently doing this in my initializers:
Redis.current = Redis.new(host: 'localhost', port: 6379)
And then simply use Redis.current.get / Redis.current.set (and similar) throughout the code...
This should be thread-safe, as far as I understand, since the Redis Client only runs one command at a time, using a Monitor.
Now, Sidekiq has its own connection pool to Redis, and recommends doing
Sidekiq.redis do |conn|
conn.get
conn.set
end
As I understand it, this would be better than the approach of just using Redis.current because you don't have multiple workers on multiple threads waiting on each other on a single connection when they hit Redis.
However, how can I make this connection that I get from Sidekiq.redis available to my models? (without having to pass it around as a parameter in every method call)
I can't set Redis.current inside that block, since it's global, and I'm back to everyone using the same connection (plus switching between them randomly, which might even be non-thread-safe)
Should I store the connection that I get from Sidekiq.Redis into a Thread-local variable, and use that thread-local variable everywhere?
In that case, what do I do in the "Puma" context? How do I set the thread-local variable?
Any thoughts on this are greatly appreciated.
Thank you!
| You use a separate global connection pool for your application code. Put something like this in your redis.rb initializer:
require 'connection_pool'
REDIS = ConnectionPool.new(size: 10) { Redis.new }
Now in your application code anywhere, you can do this:
REDIS.with do |conn|
# some redis operations
end
You'll have up to 10 connections to share amongst your puma/sidekiq workers. This will lead to better performance since, as you correctly note, you won't have all the threads fighting over a single Redis connection.
All of this is documented here: https://github.com/mperham/sidekiq/wiki/Advanced-Options#connection-pooling
| Redis | 28,113,940 | 38 |
https://github.com/andymccurdy/redis-py
I know in ruby we use the quit() method. I can't find anything here for python
python:
import redis
r = redis.StrictRedis(host='localhost', port=6379, db=0)
r.set('foo', 'bar')
print r.get('foo')
#r.close() doesn't work
ruby
require "redis"
redis = Redis.new
redis.set("mykey", "hello world")
puts redis.get("mykey")
redis.quit()
| Just use redis.Redis. It uses a connection pool under the hood, so you don't have to worry about managing at that level.
If you absolutely have to use a low level connection, you need to do the response handling that is normally done for you by redis.Redis.
Here's an example of executing a single command using the low level connection:
def execute_low_level(command, *args, **kwargs):
connection = redis.Connection(**kwargs)
try:
connection.connect()
connection.send_command(command, *args)
response = connection.read_response()
if command in redis.Redis.RESPONSE_CALLBACKS:
return redis.Redis.RESPONSE_CALLBACKS[command](response)
return response
finally:
del connection
Example usage:
response = execute_low_level(
'HGET', 'redis:key', 'hash:key', host='localhost', port=6379)
But as I said before, redis.Redis is the way to go in 99.9% of cases.
| Redis | 24,875,806 | 38 |
I am using redis as a read cache. I have created an initializer
config/initializer/redis.rb
$redis = Redis.new(:host => ENV["REDIS_HOST"], :port => ENV["REDIS_PORT"])
I am using this global in my unicorn.rb to create a new connection whenever a new worker is created.
before_fork do |server, worker|
# clear redis connection
$redis.quit unless $redis.blank?
end
# Give each child process its own Redis connection
after_fork do |server, worker|
$redis = Redis.new(:host => ENV["REDIS_HOST"], :port => ENV["REDIS_PORT"])
end
I am also using this global variable whenever I need to access my redis servers. But I am not comfortable using this global variable. Are there any better options than using global variable?
| There is Redis.current, which you can use to store your one-and-only Redis instance.
So instead of using $redis, you can assign your instance as follows:
Redis.current = Redis.new(:host => ENV["REDIS_HOST"], :port => ENV["REDIS_PORT"])
Redis.current was introduced to redis-rb in 2010 as a standard way to grab a redis connection, so I was surprised that no other answer mentioned it.
Update: Starting with version 4.6.0 Redis.current has been deprecated. The author notes that typical multi-threaded applications will find a lot of locking around a shared redis client. They recommend to define an own place to get a redis client, but also suggest to use a connection pool.
The accepted answer is therefore the simplest solution to achieve something comparable to Redis.current, but may not perform optimal in multi-threaded environments.
| Redis | 21,075,781 | 38 |
Is there a Redis data structure, which would allow atomic operation of popping (get+remove) multiple elements, which it contains?
There are the well-known SPOP and RPOP commands, but they always return a single value. Therefore, when I need the first N values from a set/list, I have to call the command N times, which is expensive. Let's say the set/list contains millions of items. Is there anything like SPOPM "setName" 1000, which would return and remove 1000 random items from the set, or RPOPM "listName" 1000, which would return and remove the 1000 right-most items from the list?
I know there are commands like SRANDMEMBER and LRANGE, but they do not remove the items from the data structure. They can be deleted separately. However, if there are more clients reading from the same data structure, some items can be read more than once and some can be deleted without reading! Therefore, atomicity is what my question is about.
Also, I am fine if the time complexity for such operation is more expensive. I doubt it will be more expensive than issuing N (let's say 1000, N from the previous example) separate requests to Redis server.
I also know about separate transaction support. However, this sentence from Redis docs discourages me from using it for parallel processes modifying the set (destructively reading from it):
When using WATCH, EXEC will execute commands only if the watched keys were not modified, allowing for a check-and-set mechanism.
| Use LRANGE with LTRIM in a pipeline. The pipeline will be run as one atomic transaction. Your worry above about WATCH, EXEC will not be applicable here because you are running the LRANGE and LTRIM as one transaction without the ability for any other transactions from any other clients to come between them. Try it out.
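For illustration, a sketch in raw Redis commands (the key name mylist and the batch size of 1000 are assumptions based on the question); MULTI/EXEC guarantees that no command from another client runs between the read and the trim:

MULTI
LRANGE mylist 0 999
LTRIM mylist 1000 -1
EXEC

LRANGE returns the first 1000 elements and LTRIM then drops exactly those, so each batch is read exactly once even with many concurrent consumers.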
| Redis | 20,621,775 | 38 |
I wonder if there is a feature in Redis that allows me to get all expired keys (I mean some kind of event that gives me an opportunity to capture all expired records). The purpose is to save the old values into another database. I've heard that it's possible using the publishing mechanism, but Google can't help me with this idea.
| Current development version of redis contains a new feature: keyspace notifications. Documentation: http://redis.io/topics/notifications
Keyspace notifications allows clients to subscribe to Pub/Sub channels in order to receive events affecting the Redis data set in some way.
Examples of the events that is possible to receive are the following:
All the commands affecting a given key.
All the keys receiving an LPUSH operation.
All the keys expiring in the database 0.
Hopefully, it will make it to stable soon.
BTW, it won't be very useful in helping you save the values of expired keys. When the expiration event is fired, the value is gone already.
| Redis | 14,647,494 | 38 |
I'm using redis in my python application to store simple values like counters and time stamp lists, but trying to get a counter and comparing it with a number I came across a problem.
If I do:
import redis
...
myserver = redis.Redis("localhost")
myserver.set('counter', 5)
and then try to get that value like this:
if myserver.get('counter') < 10:
myserver.incr('counter')
then I get a type error in the if statement because I'm comparing '5' < 10, which means I'm storing an integer value and getting a string one (which can be considered as a different value).
My question is: is this supposed to work like that? I mean its a very basic type, I understand if I have to parse objects but an int? Seems that I'm doing something wrong.
Is there any configuration I'm missing?
Is there any way to make redis return the right type and not always a string?
I say this because it's the same for lists and datetimes or even floating point values.
Could this be a problem with the redis-py client I'm using and not redis itself?
| Technically speaking you need to take care of that on your own.
However, have a look at this link, especially at the part of their README that refers to parsers and response callbacks, maybe that's something you can use. Question would be whether this is an overkill for you or not.
| Redis | 13,060,632 | 38 |
Context
I'm using redis. The database is < 100 MB.
However, I want to make daily backups.
I'm also running on Ubuntu Server 12.04
When type in:
redis-cli save
I don't know where dump.rdb is saved to (since redis is started as a service and not in my local directory).
Questions:
How do I find where redis is saving my dump.rdb to?
Is there some way that I can specify a filename for 'save', so I can type something like:
redis-cli save ~/db-2012-06-24.rdb
Thanks
| To be a little more helpfull... How to find or set where redis is saving the dump.rdb file (ubuntu server):
First find you redis.conf file: In your terminal run:
ps -e aux | grep redis
I found my redis.conf file in:
var/etc/redis/
If yours is the same place then open the file with:
pico var/etc/redis/redis.conf
Look for:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis
Depending on your setting for "dbfilename" and "dir" then that is where you find your redis dump.rdb file.
Update:
To see your redis configurations just run:
redis-cli CONFIG GET *
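Or query just the two relevant settings:

redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename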
| Redis | 11,180,999 | 38 |
I need to store a huge amount of binary files (10-20 TB, each file ranging from 512 KB to 100 MB).
I need to know if Redis will be efficient for my system.
I need following properties in my system:
High Availability
Failover
Sharding
I intend to use a cluster of commodity hardware to reduce cost as much as possible. Please suggest pros and cons of building such a system using Redis. I am also concerned about the high RAM requirements of Redis.
| I would not use Redis for such a task. Other products will be a better fit IMO.
Redis is an in-memory data store. If you want to store 10-20 TB of data, you will need 10-20 TB of RAM, which is expensive. Furthermore, the memory allocator is optimized for small objects, not big ones. You would probably have to cut your files into small pieces, which would not be really convenient.
Redis does not provide an ad-hoc solution for HA and failover. A master/slave replication is provided (and works quite well), but with no support for the automation of this failover. Clients have to be smart enough to switch to the correct server. Something on server-side (but this is unspecified) has to switch the roles between master and slaves nodes in a reliable way. In other words, Redis only provides a do-it-yourself HA/failover solution.
Sharding has to be implemented on client-side (like with memcached). Some clients have support for it, but not all of them. The fastest client (hiredis) does not. Anyway, things like rebalancing has to be implemented on top of Redis. Redis Cluster which is supposed to support such sharding capabilities is not ready yet.
I would suggest to use some other solutions. MongoDB with GridFS can be a possibility. Hadoop with HDFS is another one. If you like cutting edge projects, you may want to give the Elliptics Network a try.
| Redis | 8,786,395 | 38 |
Web Dynos can handle HTTP requests,
and while Web Dynos handle them, Worker Dynos can process jobs for them.
But I don't know how to make Web Dynos and Worker Dynos communicate with each other.
For example, I want to receive an HTTP request with a Web Dyno,
send it to a Worker Dyno,
process the job and send the result back to the Web Dyno,
and show the results on the web.
Is this possible in Node.js (with RabbitMQ or Kue, etc.)?
I could not find an example in the Heroku documentation.
Or should I implement all the code in Web Dynos and scale Web Dynos only?
| As the high-level article on background jobs and queuing suggests, your web dynos will need to communicate with your worker dynos via an intermediate mechanism (often a queue).
To accomplish what it sounds like you're hoping to do follow this general approach:
Web request is received by the web dyno
Web dyno adds a job to the queue
Worker dyno receives job off the queue
Worker dyno executes job, writing incremental progress to a shared component
Browser-side polling requests status of job from the web dyno
Web dyno queries shared component for progress of background job and sends state back to browser
Worker dyno completes execution of the job and marks it as complete in shared component
Browser-side polling requests status of job from the web dyno
Web dyno queries shared component for progress of background job and sends completed state back to browser
As far as actual implementation goes I'm not too familiar with the best libraries in Node.js, but the components that glue this process together are available on Heroku as add-ons.
Queue: AMQP is a well-supported queue protocol and the CloudAMQP add-on can serve as the message queue between your web and worker dynos.
Shared state: You can use one of the Postgres add-ons to share the state of an job being processed or something more performant such as Memcache or Redis.
So, to summarize, you must use an intermediate add-on component to communicate between dynos on Heroku. While this approach involves a little more engineering, the result is a properly-decoupled and scalable architecture.
| Redis | 11,429,774 | 37 |
I am developing an application where chats have to be cached and monitored. Currently it is a local application where I have installed redis and redis-cli.
The problem I'm facing is: (node:5368) UnhandledPromiseRejectionWarning: Error: The client is closed
Attaching code snippet below
//redis setup
const redis = require('redis');
const client = redis.createClient()//kept blank so that default options are available
//runs when client connects
io.on("connect", function (socket) {
//this is client side socket
//console.log("a new user connected...");
socket.on("join", function ({ name, room }, callback) {
//console.log(name, room);
const { msg, user } = addUser({ id: socket.id, name, room });
// console.log(user);
if (msg) return callback(msg); //accessible in frontend
//emit to all users
socket.emit("message", {
user: "Admin",
text: `Welcome to the room ${user.name}`,
});
//emit to all users except current one
socket.broadcast
.to(user.room)
.emit("message", { user: "Admin", text: `${user.name} has joined` });
socket.join(user.room); //pass the room that user wants to join
//get all users in the room
io.to(user.room).emit("roomData", {
room: user.room,
users: getUsersInRoom(user.room),
});
callback();
}); //end of join
//user generated messages
socket.on("sendMessage", async(message, callback)=>{
const user = getUser(socket.id);
//this is where we can store the messages in redis
await client.set("messages",message);
io.to(user.room).emit("message", { user: user.name, text: message });
console.log(client.get('messages'));
callback();
}); //end of sendMessage
//when user disconnects
socket.on("disconnect", function () {
const user = removeUser(socket.id);
if (user) {
console.log(client)
io.to(user.room).emit("message", {
user: "Admin",
text: `${user.name} has left `,
});
}
}); //end of disconnect
I am getting the above error when a user sends a message to the room, i.e. when socket.on("sendMessage") is called.
Where am I going wrong?
Thank you in advance.
| You should await client.connect() before using the client
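A minimal sketch of the fix (for node-redis v4, where createClient() no longer connects automatically), inside an async context:

const client = redis.createClient(); // default options, as in the question
client.on('error', (err) => console.error('Redis error:', err));
await client.connect(); // v4+: must be awaited before issuing any command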
| Redis | 70,185,436 | 37 |
This will be my first time connecting Spring to Redis. The documentation for jedis connection factory: http://www.baeldung.com/spring-data-redis-tutorial
Offers the following code:
@Bean
JedisConnectionFactory jedisConnectionFactory() {
JedisConnectionFactory jedisConFactory
= new JedisConnectionFactory();
jedisConFactory.setHostName("localhost");
jedisConFactory.setPort(6379);
return jedisConFactory;
}
Looks great, but my IDE is telling me that the setHostName and setPort methods have been deprecated (even though I'm using the versions from the tutorial).
I was wondering if anyone had a simple "get spring data connected to redis" example that uses the non-deprecated API calls?
| With Spring Data Redis 2.0, those methods have been deprecated.
You now need to configure using RedisStandaloneConfiguration
Reference: https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/connection/jedis/JedisConnectionFactory.html#setHostName-java.lang.String-
Example:
JedisConnectionFactory jedisConnectionFactory() {
RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration("localhost", 6379);
redisStandaloneConfiguration.setPassword(RedisPassword.of("yourRedisPasswordIfAny"));
return new JedisConnectionFactory(redisStandaloneConfiguration);
}
| Redis | 49,021,994 | 37 |
Referred this link https://anton.logvinenko.name/en/blog/how-to-install-redis-and-redis-php-client.html
And done following steps
PhpRedis for PHP 7 (Skip it if you have different PHP version)
Install required package
apt-get install php7.0-dev
Download PhpRedis
cd /tmp
wget https://github.com/phpredis/phpredis/archive/php7.zip -O phpredis.zip
But the file https://github.com/phpredis/phpredis/archive/php7.zip is not found, so the installation fails.
| Try to use this url https://github.com/phpredis/phpredis/archive/5.2.2.zip
wget https://github.com/phpredis/phpredis/archive/5.2.2.zip -O phpredis.zip
Or use this command:
sudo apt-get install php-redis
| Redis | 46,955,555 | 37 |
node -v : 8.1.2
I use redis client node_redis with node 8 util.promisify , no blurbird.
the callback redis.get is ok, but promisify type get error message
TypeError: Cannot read property 'internal_send_command' of undefined
at get (D:\Github\redis-test\node_modules\redis\lib\commands.js:62:24)
at get (internal/util.js:229:26)
at D:\Github\redis-test\app.js:23:27
at Object. (D:\Github\redis-test\app.js:31:3)
at Module._compile (module.js:569:30)
at Object.Module._extensions..js (module.js:580:10)
at Module.load (module.js:503:32)
at tryModuleLoad (module.js:466:12)
at Function.Module._load (module.js:458:3)
at Function.Module.runMain (module.js:605:10)
my test code
const util = require('util');
var redis = require("redis"),
client = redis.createClient({
host: "192.168.99.100",
port: 32768,
});
let get = util.promisify(client.get);
(async function () {
client.set(["aaa", JSON.stringify({
A: 'a',
B: 'b',
C: "C"
})]);
client.get("aaa", (err, value) => {
console.log(`use callback: ${value}`);
});
try {
let value = await get("aaa");
console.log(`use promisify: ${value}`);
} catch (e) {
console.log(`promisify error:`);
console.log(e);
}
client.quit();
})()
| changing let get = util.promisify(client.get);
to let get = util.promisify(client.get).bind(client);
solved it for me :)
util.promisify returns the bare function detached from the client object, so its this context is lost when the command runs (hence internal_send_command of undefined); bind(client) restores the client as the call context.
| Redis | 44,815,553 | 37 |
I am used to psql which I can use by feeding it the connection string without having to break it in different arguments, that is,
psql postgres://<username>:<password>@<host>:<port>
This is useful when I have such string from Heroku, for example.
Can I do something similar with redis-cli? I want to feed it directly a connection string, such as the one that is stored on Heroku as environment variable when I install a Redis add-on. Is that possible? Example of the syntax I would like to use:
redis-cli redis://<username>:<password>@<host>:<port>
| No, at the moment (v3.2.1) redis-cli does not support the URI connection schema. If you want, you can make a feature or pull request for that in the Redis repository.
UPDATE:
The -u option was released with Redis 4.0, see Release notes. For example:
redis-cli -u redis://user:pass@host:6379/0
| Redis | 38,271,281 | 37 |
Which is better suited for the following environment:
Persistence not a compulsion.
Multiple servers (with Ehcache some cache sync must be required).
Infrequent writes and frequent reads.
Relatively small database (very low memory requirement).
I will pour out what's in my head currently. I may be wrong about these.
I know Redis requires a separate server (?), while Ehcache provides a local cache, so it should be faster; but will it replicate the cache across servers (?). Updating all caches after an update on one is possible with Ehcache.
My question is: which suits the environment I described better?
Whose performance will be better, and in what scenarios might one outperform the other?
Thanks in advance.
| You can think of Redis as a shared data structure, while Ehcache is a memory block storing serialized data objects. This is the main difference.
Redis as a shared data structure means you can put some predefined data structure (such as String, List, Set etc) in one language and retrieve it in another language. This is useful if your project is multilingual, for example Java on the backend and PHP on the frontend. You can use Redis for a shared cache, but it can only store predefined data structures; you cannot insert arbitrary Java objects.
If your project is only Java, i.e. not multilingual, Ehcache is a convenient solution.
| Redis | 33,123,633 | 37 |
Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
Does freeing them up also remove the useful memcached and/or redis cache?
--
Why am I asking this question? My Amazon EC2 server's RAM was filling up over the days, from 6% up to 95% in a matter of 7 days. I have to run a bi-weekly cronjob to drop these caches, after which memory usage falls back to 6%.
| With some oversimplification, let me try to explain in what appears to be the context of your question because there are multiple answers.
It appears you are working with memory caching of directory structures. An inode in your context is a data structure that represents a file. A dentry is a data structure that represents a directory. These structures could be used to build a memory cache that represents the file structure on a disk. To get a directory listing, the OS could go to the dentries: if the directory is there, list its contents (a series of inodes). If it is not there, go to the disk and read it into memory so that it can be used again.
The page cache could contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory mapped files, paged areas of executables--anything that the OS could hold in memory from a file.
Your commands flush these buffers.
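Two practical notes (from the kernel documentation): drop_caches only frees clean, unused cache objects, so it is usually recommended to flush dirty pages first with sync, e.g.:
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
And to the second part of the question: memcached and redis keep their data in their own process memory (anonymous pages), not in the kernel page cache, so dropping these caches does not remove their cached data.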
| Redis | 29,870,068 | 37 |
Wikipedia says that Redis is an in-memory database, but it also says that it can persist "data to the disk at least every 2 seconds". I feel like these two things are mutually exclusive. How can it be considered in-memory yet (it can) store data on disk? I assumed the definition of in-memory meant that it does not store to disk.
This is a similar question: Redis concept: In memory or DB? The difference is that he's asking about the persistence implementation. My question is about the concept of in-memory vs persistence.
|
Redis is an in-memory but persistent on disk database, so it represents a different trade off where very high write and read speed is achieved with the limitation of data sets that can't be larger than memory. Another advantage of in memory databases is that the memory representation of complex data structures is much simpler to manipulate compared to the same data structure on disk, so Redis can do a lot, with little internal complexity. At the same time the two on-disk storage formats (RDB and AOF) don't need to be suitable for random access, so they are compact and always generated in an append-only fashion (Even the AOF log rotation is an append-only operation, since the new version is generated from the copy of data in memory).
http://redis.io/topics/faq
In redis, all data has to be in memory. This is the point which is totally different from other NoSQL stores. Usually when you access and read some data in a database, you don't know if the data is in memory (cache) or not, but in the case of Redis, it's guaranteed that all data is in memory. Writing to disk is optional; you can think of it as keeping the primary copy in memory and a kind of backup on disk. You may lose data written after the last save to disk if the server suddenly shuts down.
And of course the advantage of it is a performance. Since all data is in RAM, it's incredibly fast.
| Redis | 28,710,322 | 37 |
I have read the redis-python document and searched online, I can not find anything about the db parameter for Redis(). What is it use for?
| By default, redis has 16 databases, which can be addressed by their indexes. This is what it's for.
See SELECT command.
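A minimal sketch of what that looks like in practice (index 0 is the default):
import redis

r0 = redis.Redis(db=0)   # the default logical database
r1 = redis.Redis(db=1)   # a separate keyspace on the same server

r0.set('foo', 'bar')
print(r0.get('foo'))     # b'bar'
print(r1.get('foo'))     # None -- the databases are isolated keyspaces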
| Redis | 24,392,141 | 37 |
I am on my box ubuntu 12.04 (precise32), where Redis was installed, but I can not find out the Redis version. How can I resolve this problem?
It was installed using the redisio cookbook.
|
If you want to find the version of the server:
$ redis-server -v
For example in my system I get this result:
Redis server v=2.8.4 sha=00000000:0 malloc=libc bits=64 build=92637893332b8579
If you want to get the version of the client:
$ redis-cli -v
If you want to know the version of the server, from the client:
> INFO
and the first line is the version of the Redis server.
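Or, non-interactively from the shell (the version appears as redis_version in the server section):
$ redis-cli INFO server | grep redis_version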
| Redis | 22,153,504 | 37 |
Redis is often used as a cache, although it offers a lot more than just in-memory caching (it supports persistence, for instance).
What are the reasons why one would choose to use Redis rather than the .NET MemoryCache? Persistence and data types (other than key-value pairs) come to mind, but I'm sure there must be other reasons for using an extra architectural layer (i.e. Redis).
| MemoryCache is embedded in the process, hence can only be used as a plain key-value store from that process.
A separate server counterpart of MemoryCache would be memcached.
Redis, by contrast, is a data structure server: like memcached it can be hosted on separate servers and accessed over the network, but it also supports a long list of complex data types and operations on them, enabling logical and intelligent caching.
| Redis | 28,970,362 | 36 |
Instade of move I want to copy all my keys from a particular db to another.
Is it possible in redis if yes than how ?
| If you can't use MIGRATE COPY because of your redis version (2.6) you might want to copy each key separately which takes longer but doesn't require you to login to the machines themselves and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving ttls)
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
echo "Copying $key"
redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
| head -c -1 \
| redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; in order to do that, delete those keys before copying or simply flush the whole target database before starting. Also note that KEYS is O(N) and blocks the server while it runs, so on large production databases prefer enumerating keys with SCAN (redis-cli --scan, Redis 2.8+).
| Redis | 23,222,616 | 36 |
I could be totally off, but my understanding of how cache stores used to work before they began to add persistence features is that items would get expired based on their TTL. And if the store started to fill up available RAM, each would have its own algorithm for expiring the least "important" keys in the store.
Now I read that Redis has persistence features, but you can turn them off. Assuming you turn off persistence, what happens when RAM fills up? How does Redis decide what to expire?
I expect to have lots of data without TTLs and want to make sure it's safe to let Redis figure out what to expire.
| I don't think the question is related to virtual memory management, but more about the expiration of the items in Redis, which is a totally different topic.
Contrary to memcached, Redis is not only a cache, so the user is supposed to choose an item eviction policy through various mechanisms. You can evict all your items, or only a part of them.
The general policy should be selected in the configuration file with the maxmemory and maxmemory-policy parameters, described below:
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys with an
# EXPIRE set. It will try to start freeing keys that are going to expire
# in little time and preserve keys with a longer time to live.
# Redis will also try to remove objects from free lists if possible.
#
# If all this fails, Redis will start to reply with errors to commands
# that will use more memory, like SET, LPUSH, and so on, and will continue
# to reply to most read-only commands like GET.
#
# WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
# 'state' server or cache, not as a real DB. When Redis is used as a real
# database the memory usage will grow over the weeks, it will be obvious if
# it is going to use too much memory in the long run, and you'll have the time
# to upgrade. With maxmemory after the limit is reached you'll start to get
# errors for write operations, and this may even lead to DB inconsistency.
#
maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached? You can select among the following behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with all the kind of policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
maxmemory-policy volatile-lru
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
maxmemory-samples 3
Then individual item expiration can be set using the following commands:
EXPIRE
EXPIREAT
The per item expiration property is useful with volatile-* policies.
Expiration can also be removed using PERSIST.
The expiration property adds a slight memory overhead, so it should be used only if required.
Finally, it is worth mentioning that a part of an object cannot be expired, only the whole object itself. For instance, a whole list or set corresponding to a key can be expired, but individual list or set items cannot.
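Putting this together, a minimal redis-cli session (policy choice and numbers are only illustrative):
CONFIG SET maxmemory 104857600           # 100mb cap
CONFIG SET maxmemory-policy volatile-lru
SET session:42 "some payload"
EXPIRE session:42 3600                   # makes the key a candidate for volatile-* eviction
TTL session:42                           # remaining lifetime in seconds
PERSIST session:42                       # removes the expiration again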
| Redis | 8,652,388 | 36 |
I am using Redis with Akka, so I need non-blocking calls. Lettuce has an async future API built into it, but Jedis is the client recommended by Redis. Can someone tell me if I am using both of them the right way? If so, which one is better?
JEDIS
I am using a static Jedis connection pool to get con and using Akka future callback to process the result. My concern here is when I use another thread (callable) to get the result that thread is eventually going to block for the result. While Lettuce might have some more efficient way of doing this.
private final class OnSuccessExtension extends OnSuccess<String> {
private final ActorRef senderActorRef;
private final Object message;
@Override
public void onSuccess(String valueRedis) throws Throwable {
log.info(getContext().dispatcher().toString());
senderActorRef.tell((String) message, ActorRef.noSender());
}
public OnSuccessExtension(ActorRef senderActorRef,Object message) {
this.senderActorRef = senderActorRef;
this.message=message;
}
}
ActorRef senderActorRef = getSender(); //never close over a future
if (message instanceof String) {
Future<String> f =akka.dispatch.Futures.future(new Callable<String>() {
public String call() {
String result;
try(Jedis jedis=JedisWrapper.redisPool.getResource()) {
result = jedis.get("name");
}
return result;
}
}, ex);
f.onSuccess(new OnSuccessExtension(senderActorRef,message), ex);
}
LETTUCE
ExecutorService executorService = Executors.newFixedThreadPool(10);
public void onReceive(Object message) throws Exception {
ActorRef senderActorRef = getSender(); //never close over a future
if (message instanceof String) {
final RedisFuture<String> future = lettuce.connection.get("name");
future.addListener(new Runnable() {
final ActorRef sender = senderActorRef;
final String msg =(String) message;
@Override
public void run() {
try {
String value = future.get();
log.info(value);
sender.tell(message, ActorRef.noSender());
} catch (Exception e) {
}
}
}, executorService);
If Lettuce is the better option for async calls, what type of executor should I go with in a production environment? If possible, can I use an Akka dispatcher as the execution context for the Lettuce future callbacks?
| There is no one answer to your question because it depends.
Jedis and lettuce are both mature clients. To complete the list of Java clients, there is also Redisson, which adds another layer of abstraction (Collection/Queue/Lock/... interfaces instead of raw Redis commands).
It pretty much depends on how you're working with the clients. In general, Jedis (a Java-based client for Redis) is single-threaded in terms of data access, so the only benefit you gain from concurrency is offloading the protocol and I/O work to different threads. That is not fully true for lettuce and Redisson since they use netty under the hood (netty binds one socket channel to a particular event loop thread).
With Jedis, you can use only one connection only with one thread at a time. That correlates nicely with the Akka actor model because one actor instance is occupied only by one thread at a time.
On the other hand, you need as many Jedis connections as there are threads dealing with a particular actor. If you start sharing Jedis connections across different actors, you either go for connection pooling, or you need a dedicated Jedis connection per actor instance. Please keep in mind that you need to take care of reconnection (once a Redis connection is broken) by yourself.
With Redisson and lettuce you get transparent reconnection, if you wish (that's the default for lettuce; I'm not sure about Redisson).
By using lettuce and Redisson you can share one connection amongst all actors because they are thread-safe. You cannot share one lettuce connection in two cases:
Blocking operations (since you would block all other users of the connection)
Transactions (MULTI/EXEC, since you would mix different operations with the transactions and that is certainly a thing you do not want to do so)
Jedis has no async interface, so you're required to do this by yourself. That's feasible, and I did something similar with MongoDB, offloading/decoupling the I/O part to other actors. You can use the approach from your code, but you're not required to provide an own executor service because you do non-blocking operations in the runnable listener.
With lettuce 4.0 you'll get Java 8 support (which is way better in terms of the async API because of the CompletionStage interface), and you can even use RxJava (reactive programming) to approach concurrency.
Lettuce is not opinionated on your concurrency model. It allows you to use it according to you needs, except the plain Future/ListenableFuture API of Java 6/7 and Guava is not very nice to use.
HTH, Mark
| Redis | 32,857,922 | 35 |
In my current application, we are dealing with some information which rarely changes.
For performance optimization, we want to store them in the cache.
But the problem is in invaliding these objects whenever these are updated.
We have not finalized the caching product.
As we are building this application on Azure, we will probably use Azure Redis cache.
One strategy could be to add code in Update API which will invalidate object in the cache.
I am not sure if this is a clean way, though.
We do not want to go with Cache Expiration based on time (TTL).
Could you please suggest some other strategies used for cache invalidation?
| Invalidating the cache during the update stage is a viable approach, and was widely used in the past.
You have two options here when the UPDATE happens:
You may try to set the new value during update operation, or
Just delete the old one and update during a read operation.
If you want an LRU cache, then UPDATE may just delete the old value, and the first time the object is fetched you create it again from the actual database. However, if you know that your cache is very small and you are using another main database for concerns other than data size, you may update the cache directly during UPDATE.
However, all this is not enough to be completely consistent.
When you write to your DB, the Redis cache may be unavailable for a few seconds for example, so data remains not synchronized between the two.
What do you do in that case?
There are several options you could use at the same time.
Set a TTL anyway, so that eventually broken data is refreshed.
Use lazy read repair. When you read from the cache, from time to time check with the primary database whether the value matches; if not, update the cached item (or delete it).
Use epochs or similar ways to access your data. Not always possible, however sometimes you access cached data about a given object. When possible you may change the object ID/handle every time you modify it, so that it is impossible that you access stale data in the cache: every key name refers to a specific version of your object.
So the del-cache-on-update and write-cache-on-read is the basic strategy, but you can employ other additional systems to eventually repair the inconsistencies.
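As a concrete illustration of that basic strategy, here is a minimal cache-aside sketch in Python with redis-py (the key naming and the fetch_from_db / save_to_db helpers are hypothetical):
import json
import redis

r = redis.Redis()

def get_post(post_id):
    cached = r.get(f"post:{post_id}")
    if cached is not None:
        return json.loads(cached)       # cache hit
    post = fetch_from_db(post_id)       # hypothetical primary-DB read
    # write-cache-on-read, with a TTL as a safety net against stale entries
    r.setex(f"post:{post_id}", 3600, json.dumps(post))
    return post

def update_post(post_id, data):
    save_to_db(post_id, data)           # hypothetical primary-DB write
    r.delete(f"post:{post_id}")         # del-cache-on-update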
There is another option, instead of the above: have a background process that uses Redis SCAN to verify, key by key, whether there are inconsistencies. This process can be slow and can run against a replica of your database.
As you can see here the main idea is always the same: if an update to the cache fails, don't make it a permanent issue that will remain there potentially forever, give it a chance to fix itself at a later time.
| Redis | 30,166,321 | 35 |
Can I set global TTL in redis? Instead of setting TTL every time I set a key.
I googled, but cannot found any clue. So it seems cannot be done?
Thanks.
| No, Redis doesn't have a notion of a global/default TTL and yes, you do have to set it for each key independently. However, depending on your requirements and on what you're trying to do, there may be other ways to achieve your goal. Put differently, why do you need it?
For example, if you want to use Redis as a cache and not worry about having to remove "old" items, you can get simply by setting the maxmemory_policy to allkey-lru. This will evict the least recently used keys whenever Redis' memory is exhausted.
EDIT: for more information, see the helpful links in the comments below from @arganzheng and @Kristján, as well as the inline documentation in the redis.conf configuration file.
| Redis | 25,618,045 | 35 |
I want to use redis command line (using redis-cli) to store json values. This is what I do
redis 127.0.0.1:6379> set test '{"a":"b"}'
This command fails with message :
Invalid argument(s)
I don't have problem with setting values that don't contain double quotes. What is the correct way to escape double quotes?
| Escape the inner double quotes with backslashes and wrap the whole value in double quotes:
set test "{\"a\":\"b\"}"
| Redis | 21,065,225 | 35 |
I have seen several references to people running Redis on Azure, but no implementation or any sort of 'howto' on it. Has anyone seen such an example?
|
Download Redis for Windows - see the section 'Redis Service builds for Windows' on https://github.com/ServiceStack/ServiceStack.Redis. I ended up using the win64 version from dmajkic https://github.com/dmajkic/redis/downloads
Create an Azure worker role, delete the default class (you don't need c# code at all). Add the file redis-server.exe from the downloaded redis source (the exe can be found in redis/src).
In the service definition file add the following config
<WorkerRole name="my.Worker" vmsize="Small">
<Runtime executionContext="limited">
<EntryPoint>
<ProgramEntryPoint commandLine="redis-server.exe" setReadyOnProcessStart="true" />
</EntryPoint>
</Runtime>
<Imports>
<Import moduleName="Diagnostics" />
<Import moduleName="RemoteAccess" />
<Import moduleName="RemoteForwarder" />
</Imports>
<Endpoints>
<InternalEndpoint name="Redis" protocol="tcp" port="6379" />
</Endpoints>
</WorkerRole>
You can refer to the redis server from your web role using the following
var ipEndpoint = RoleEnvironment.Roles["my.Worker"].Instances[0].InstanceEndpoints["Redis"].IPEndpoint;
host = string.Format("{0}:{1}", ipEndpoint.Address, ipEndpoint.Port);
Hope that helps.
| Redis | 10,140,669 | 35 |
I want to use redis' pubsub to transmit some messages, but I don't want to be blocked by listen, as in the code below:
import redis
rc = redis.Redis()
ps = rc.pubsub()
ps.subscribe(['foo', 'bar'])
rc.publish('foo', 'hello world')
for item in ps.listen():
if item['type'] == 'message':
print item['channel']
print item['data']
The last for section will block. I just want to check if a given channel has data; how can I accomplish this? Is there a check-like method?
| If you're thinking of non-blocking, asynchronous processing, you're probably using (or should use) an asynchronous framework/server.
if you're using Tornado, there is Tornado-Redis. It's using native Tornado generator calls. Its Websocket demo provides example on how to use it in combination with pub/sub.
if you're using Twisted, there is txRedis. There you also have pub/sub example.
it also seems that you can use Redis-py combined with Gevent with no problems using Gevent's monkey patching (gevent.monkey.patch_all()).
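Separately, if you only need to poll for pending messages without blocking, newer redis-py versions expose get_message() on the PubSub object, which returns immediately (a small sketch):
import redis

rc = redis.Redis()
ps = rc.pubsub()
ps.subscribe(['foo', 'bar'])

item = ps.get_message()  # a message dict if one is queued, otherwise None
if item and item['type'] == 'message':
    print(item['channel'], item['data'])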
UPDATE:
It's been 5 years since the original answer, in the mean time Python got native async IO support. There now is AIORedis, an async IO Redis client.
| Redis | 7,871,526 | 35 |
I'm using redis-py binding in Python 2 to connect to my Redis server. The server requires a password. I don't know how to AUTH after making the connection in Python.
The following code does not work:
import redis
r = redis.StrictRedis()
r.auth('pass')
It says:
'StrictRedis' object has no attribute 'auth'
Also,
r = redis.StrictRedis(auth='pass')
does not work either. No such keyword argument.
I've used Redis binding in other languages before, and usually the method name coincides with the Redis command. So I would guess r.auth will send AUTH, but unfortunately it does not have this method.
So what is the standard way of AUTH? Also, why call this StrictRedis? What does Strict mean here?
| Thanks to the hints from the comments, I found the answer at https://redis-py.readthedocs.org/en/latest/.
It says
class redis.StrictRedis(host='localhost', port=6379, db=0, password=None, socket_timeout=None, connection_pool=None, charset='utf-8', errors='strict', unix_socket_path=None)
So AUTH is in fact password passed by keyword argument.
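So a working connection looks like this (a minimal sketch):
import redis

r = redis.StrictRedis(host='localhost', port=6379, password='pass')
r.ping()  # raises an error if the password is wrong
As for the name: StrictRedis follows the official Redis command syntax, while the legacy Redis class kept some backwards-compatible argument orders; in redis-py 3.0+ the two are identical.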
| Redis | 30,149,493 | 34 |
I have to store some machine details in redis. As there are many different machines, I am planning to use the structure below:
server1 => {name => s1, cpu=>80}
server2 => {name => s2, cpu=>40}
I need to store more than one value against the key cpu. Also, I need to maintain only the last 10 values in the list of values against cpu.
1) How can I store a list against a key inside the hash?
2) I read about LTRIM, but it accepts a key. How can I do an LTRIM for the key cpu inside server1?
I am using Jedis.
| Redis' data structures cannot be nested inside other data structures, so storing a List inside a Hash is not possible. Instead, use different keys for your servers' CPU values (e.g. server1:cpu).
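For the "last 10 values" requirement, a capped list per server works well; a sketch in redis-cli terms (key names are just examples):
RPUSH server1:cpu 80       # append the newest reading
LTRIM server1:cpu -10 -1   # keep only the 10 most recent entries
LRANGE server1:cpu 0 -1    # read them back
In Jedis these map directly to jedis.rpush(...), jedis.ltrim(...) and jedis.lrange(...).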
| Redis | 29,203,717 | 34 |
I can't seem to find useful information about Redis commands. I want to know the data type of the value of a given key. For instance to list all the keys of my database I run the following command:
keys *
In my setup, I get the following result:
1) "username:testuser:uid"
2) "uid:1:first"
3) "uid:1:email"
4) "uid:1:hash"
5) "global:next_uid"
6) "members:email"
7) "uid:1:username"
8) "uid:1:last"
9) "uid:1:salt"
10) "uid:1:access"
11) "uid:1:company"
12) "email:[email protected]:uid"
13) "uid:1:phone_number"
How do I know what data type the key members:email contains? I tried to run get members:email but I get the error (error) ERR Operation against a key holding the wrong kind of value
Any thoughts?
| You could use the type command:
http://redis.io/commands/type
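For example, with the keys above (actual output depends on how each key was created):
redis 127.0.0.1:6379> TYPE members:email
set
redis 127.0.0.1:6379> TYPE uid:1:email
string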
| Redis | 19,077,591 | 34 |
We have the following use case: every time a certain key expires, we need to get notified and do something based on its value. But when redis fires the expired event, the key has already been removed from the db by the time we try to access it, which is expected of course.
Now is there a way to access the entry again, after it expired? I guess not.
So, second option: is there a way to tell redis to publish the whole value object instead of just the key when sending those events? I guess it could be added through Lua, but I'd be interested in an easier option, if possible. We also need this behaviour for other events; we basically need all notifications to publish the value, not the key (we could do a GET once the event was received, but we want to get around the second call, primarily to have an atomic process, since the value could have changed between publishing the event and doing the GET to retrieve the value).
Hope it's understandable. Maybe we can't see the obvious, so thanks in advance!
| The feature that Eli linked to allows you to listen when a key expires. However, it does not give you the value of the key. Furthermore, based on the filed GitHub issue, it does not look like you can expect to have this feature built in anytime soon, if ever. The solution I use is to create a special "shadow" expiration key that is linked to the key holding your actual value.
So let's say you have a key called testkey and it has an integer value of 100. Furthermore, the key will expire after 10 seconds, at which point you want to get the value of the key. (Maybe you were incrementing the key during the 10 seconds it existed.)
First you need to setup listening for keyspace events. In particular you want to listen for expired events. You can do this from your config or use the config set command in redis. (see here for more info: http://redis.io/topics/notifications)
CONFIG SET notify-keyspace-events Ex
Now you can subscribe to a special keyevent channel where you will be notified that the key expired.
SUBSCRIBE __keyevent@0__:expired
The format of the channel to subscribe to is __keyevent@<db>__:<eventName>. In our example we're assuming we're working with the default database 0 and we want to listen for the expired event.
When the testkey expires you will now get a message in the __keyevent__ channel where the message is the name of the key that expired. Of course at this point the key is gone so we can no longer access the value! The solution is to use a special expiration key.
When you create your testkey also create a special expiring "shadow" key (don't expire the actual testkey). For example:
SET testkey 100
SET shadowkey:testkey "" EX 10
Now in the __keyevent@0__:expired channel you will get a message telling you that the key shadowkey:testkey expired. Take the value of the message (which is the name of the key), split on the colon (or whatever separator you decide to use), and then manually get the value of the key and delete it.
// set your key value
SET testkey 100
//set your "shadow" key, note the value here is irrelevant
SET shadowkey:testkey "" EX 10
// Get an expiration message in the channel __keyevent@0__:expired
// Split the key on ":", take the second part to get your original key
// Then get the value and do whatever with it
GET testkey
// Then delete the key
DEL testkey
Note that the value of the shadowkey isn't used, so you want to use the smallest possible value, which according to this answer (Redis store key without a value) is an empty string "". It's a little more work to set up, but the above system does exactly what you need. The overhead is a few extra commands to actually retrieve and delete your key, plus the storage cost of an empty key.
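A minimal subscriber sketch in Python with redis-py, following the shadow-key convention above:
import redis

r = redis.Redis()
ps = r.pubsub()
ps.psubscribe('__keyevent@0__:expired')

for msg in ps.listen():
    if msg['type'] != 'pmessage':
        continue
    expired_key = msg['data']                     # e.g. b'shadowkey:testkey'
    _, original_key = expired_key.split(b':', 1)  # -> b'testkey'
    value = r.get(original_key)                   # read the value...
    r.delete(original_key)                        # ...then clean it up
    print(original_key, value)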
| Redis | 18,328,058 | 34 |
I am storing a list in Redis like this:
redis.lpush('foo', [1,2,3,4,5,6,7,8,9])
And then I get the list back like this:
redis.lrange('foo', 0, -1)
and I get something like this:
[b'[1, 2, 3, 4, 5, 6, 7, 8, 9]']
How can I convert this to actual Python list?
Also, I don't see anything defined in RESPONSE_CALLBACKS that can help? Am I missing something?
A possible solution (which in my opinion sucks) can be:
result = redis.lrange('foo',0, -1)[0].decode()
result = result.strip('[]')
result = result.split(', ')
# lastly, if you know all your items in the list are integers
result = [int(x) for x in result]
UPDATE
Ok, so I got the solution.
Actually, the lpush function expects all the list items be passed as arguments and NOT as a single list. The function signature from redis-py source makes it clear...
def lpush(self, name, *values):
"Push ``values`` onto the head of the list ``name``"
return self.execute_command('LPUSH', name, *values)
What I am doing above is send a single list as an argument, which is then sent to redis as a SINGLE item.
I should be unpacking the list instead as suggested in the answer:
redis.lpush('foo', *[1,2,3,4,5,6,7,8,9])
which returns the result I expect...
redis.lrange('foo', 0, -1)
[b'9', b'8', b'7', b'6', b'5', b'4', b'3', b'2', b'1']
| I think you're bumping into semantics which are similar to the distinction between list.append() and list.extend(). I know that this works for me:
myredis.lpush('foo', *[1,2,3,4])
... note the * (argument-unpacking, a.k.a. "splat") operator prefixing the list!
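If you also want the values back as strings instead of bytes, redis-py can decode responses for you (a small sketch):
import redis

r = redis.Redis(decode_responses=True)
r.lpush('foo', *[1, 2, 3, 4])
print(r.lrange('foo', 0, -1))                     # ['4', '3', '2', '1']
print([int(x) for x in r.lrange('foo', 0, -1)])   # [4, 3, 2, 1]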
| Redis | 15,850,112 | 34 |
I'm just starting off with Redis with Rails, so this may be a dumb question.
I am trying to save a hash to the redis server, but when I retrieve it, it's just a string, i.e.
hash = {"field" => "value", "field2" => "value2"}
$redis.set('data', hash)
#So collecting the data
@data = $redis.get('data')
This is obviously wrong as its returning as a string.
I have also tried looping some results and using the hset ie.
@data.each do |d|
$redis.hset('data', d.field, d.value)
end
# errror
# ERR Operation against a key holding the wrong kind of value
Not sure where to go. I have deleted the key $redis.del('data') to make sure that was not the issue.
Hope you can advise, Lee
| I should have read the redis docs more thoroughly.
Answer:
IN
$redis.set 'data', hash.to_json
OUT
data = JSON.parse($redis.get("data"))
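If you'd rather keep the fields individually readable and updatable on the Redis side, a native hash also works; a sketch using redis-rb (note that all values come back as strings):
IN
$redis.mapped_hmset('data', hash)
OUT
data = $redis.hgetall('data')   # => {"field"=>"value", "field2"=>"value2"}
$redis.hget('data', 'field')    # => "value"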
| Redis | 9,832,124 | 34 |
To flush redis, the FLUSHALL command is to be used.
Using Redis 2.6.16, when I tried both FLUSHALL and FLUSHDB commands while using redis-cli, I got an unknown command error. Other commands work fine.
a) What is going wrong with the FLUSH* commands?
b) Is a workaround to do a shutdown of Redis, then delete the rdb file? (I believe so)
UPDATE:
No, we never solved this.
(The only known solution is to use step 'b' above)
| It could be that your Redis configuration has renamed some commands to prevent your database from being accidentaly deleted.
Look for the following lines in your redis.conf:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
| Redis | 22,815,364 | 33 |
I have started to work with laravel. It is quite interesting to work with, and I have started to use its features. I have started to use redis by installing the redis server on my system and changing the configuration for redis in the app/config/database.php file. Redis is working fine for single variables using set, i.e.,
$redis = Redis::connection();
$redis->set('name', 'Test');
and I could get the value by using
$redis->get('name');
But I want to set an array using the set function. If I try to do that, I get the following error:
strlen() expects parameter 1 to be string, array given
I have tried the following code:
$redis->set('name', array(5, 10));
$values = $redis->lrange('names', array(5, 10));
and if I use
$values = $redis->command('lrange', array(5, 10));
I get the following error:
'command' is not a registered Redis command
Can anyone explain the problem to me, and is this possible with redis? Can we set array values using redis?
| This has been answered in the comments but to make the answer clearer for people visiting in the future.
Redis is language agnostic so it won't recognise any datatype specific to PHP or any other language. The easiest way would be to serialise / json_encode the data on set then unserialise/json_decode on get.
Example to store data using json_encode:
use Illuminate\Support\Facades\Redis;
$redis = Redis::connection();
$redis->set('user_details', json_encode([
'first_name' => 'Alex',
'last_name' => 'Richards'
])
);
Example to retrieve data using json_decode:
use Illuminate\Support\Facades\Redis;
$redis = Redis::connection();
$response = $redis->get('user_details');
$response = json_decode($response);
| Redis | 22,718,903 | 33 |
I'm a bit confused with all the available storing options of Redis.
I want to do something simple and I don't want to over engineer it.
I'm working with phpredis and Redis v2.8.6.
I have this simple associative array that I need to store. I also need to be able to retrieve an item by its key and loop over all the items.
$a = array(
'12345' => array(
'name' => 'Post A',
'val2' => 'blah blah',
'val3' => 'blah blah blah',
),
'54321' => array(
'name' => 'Post B',
'val2' => 'blah blah',
'val3' => 'blah blah blah',
),
'998877' => array(
'name' => 'Post C',
'val2' => 'blah blah',
'val3' => 'blah blah blah',
)
);
So what I was doing till now was using hash type. storing my array like this:
foreach ($a as $key => $value) {
$this->redis->hSet('posts', $key, json_encode($value));
}
Like that I could access the key easily like this:
public function getPost($postId)
{
return json_decode($this->redis->hGet('posts', $postId), true);
}
// This is returning the information of Post A
$post = getPost(12345);
But now I need to loop over all the posts, and I don't know how to do it, or whether I can with my current structure. Do I need to store all the post ids in another list to be able to loop over all the posts?
So my question is: which data type(s) should I use to store my list of posts, allowing me to fetch a single post by its id and to loop over all the posts?
Thanks,
Maxime
| You can use a Set, Hashes and SORT in combination:
redis 127.0.0.1:6379> HMSET TEST_12345 name "Post A" val2 "Blah Blah" val3 "Blah Blah Blah"
OK
redis 127.0.0.1:6379> HMSET TEST_54321 name "Post B" val2 "Blah Blah" val3 "Blah Blah Blah"
OK
redis 127.0.0.1:6379> HMSET TEST_998877 name "Post C" val2 "Blah Blah" val3 "Blah Blah Blah"
OK
redis 127.0.0.1:6379> SADD All_keys TEST_12345 TEST_54321 TEST_998877
(integer) 3
redis 127.0.0.1:6379> HGETALL TEST_12345
To GET one HASH:
redis 127.0.0.1:6379> HGETALL TEST_12345
1) "name"
2) "Post A"
3) "val2"
4) "Blah Blah"
5) "val3"
6) "Blah Blah Blah"
TO GET All HASH
redis 127.0.0.1:6379> SORT All_keys BY nosort GET *->name GET *->val2 GET *->val3
1) "Post A"
2) "Blah Blah"
3) "Blah Blah Blah"
4) "Post B"
5) "Blah Blah"
6) "Blah Blah Blah"
7) "Post C"
8) "Blah Blah"
9) "Blah Blah Blah"
If you don't want to use SORT, you can fetch all the key names from the set using SMEMBERS and then use a Redis pipeline to fetch all the hashes.
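A sketch of that pipeline approach with phpredis:
$keys = $redis->sMembers('All_keys');
$pipe = $redis->multi(Redis::PIPELINE);
foreach ($keys as $key) {
    $pipe->hGetAll($key);   // queue one HGETALL per post
}
$posts = $pipe->exec();     // array of associative arrays, one per post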
| Redis | 22,001,247 | 33 |
I'm very new to Redis, and looking to see if its possible to do. Imagine I'm receiving data like this:
{ "account": "abc", "name": "Bob", "lname": "Smith" }
{ "account": "abc", "name": "Sam", "lname": "Wilson" }
{ "account": "abc", "name": "Joe"}
And receiving this data for another account:
{ "account": "xyz", "name": "Bob", "lname": "Smith" }
{ "account": "xyz", "name": "Sam", "lname": "Smith"}
I would like to keep this data in Redis in similar format:
abc:name ["Bob", "Sam", "Joe"]
abc:lname ["Smith", "Wilson", Null]
And for xyz:
xyz:name["Bob", "Sam"]
xyz:lname["Smith", "Smith"]
So the question is what data types should I use to store this Redis?
| If your goal is to check if Bob is used as a name for the account abc the solution should be something like:
Sample Data
{ "account": "abc", "name": "Bob", "lname": "Smith" }
{ "account": "abc", "name": "Sam", "lname": "Wilson" }
{ "account": "abc", "name": "Joe"}
Do this (using a redis set):
SADD abc:name Bob Sam Joe
SADD abc:lname Wilson Smith
You'll then be able to check if Bob is used as a name for the account abc, with:
SISMEMBER abc:name Bob
> true
To retrieve all values of a field use SMEMBERS:
SMEMBERS abc:name
> ["Bob", "Sam", "Joe"]
Note:
The key name here is under the [account]:[field] format. Where [account] can be abc, xyz and so on and field can be name, lname ...
If you don't want unique value, for instance:
abc:name ["Bob", "Sam", "Joe", "Bob", "Joe"]
then you should use a list instead
| Redis | 19,791,828 | 33 |
I'm developing application using Bottle. In my registration form, I'm confirming email by mail with a unique key. I'm storing this key in REDIS with expiry of 4 days. If user does not confirm email within 4 days, key gets expired. for this, I want to permanently delete the user entry from my database(mongoDB).
Of course I don't require continuous polling of my redis server to check whether the key exists or not.
Is there any way to get a callback from Redis??
OR is there any other efficient way?
| This feature is implemented in Redis 2.8 (keyspace notifications); read about it here: http://redis.io/topics/notifications. Note that the expired-event notification carries only the key name, not the value, which is enough here since the key itself can encode the user id you need to delete from MongoDB.
| Redis | 13,174,615 | 33 |
Context
I have a live running redis-server.
I want to make a backup.
Idea:
I want to do the following:
cp dump.rdb ~/some-other-location/06-24-2012.rdb ?
Concern
I don't see anything that promises me that dump.rdb is always a consistent database store. (I.e. it appears possible to me that while I am executing cp, redis is halfway through writing some piece of data, and thus dump.rdb is not in a consistent state.)
Problem:
This is bad, because I will now have to shut down the redis db in order to make a copy of dump.rdb
Question:
What is the correct way, while a redis-server is running, to make a live backup of the database? And what part of the manual promises me that this method creates a database that is in a consistent (not half written) state.
Thanks!
| From http://redis.io/topics/persistence
Redis is very data backup friendly since you can copy RDB files while the database is running: the RDB is never modified once produced, and while it gets produced it uses a temporary name and is renamed into its final destination atomically using rename(2) only when the new snapshot is complete.
So, the correct way is to simply copy the dump.rdb to your backup location.
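If you want the snapshot to reflect the current state at the moment of backup, trigger one first; a sketch (the dump path is an assumption, check your dir and dbfilename settings):
redis-cli BGSAVE      # fork a background snapshot
redis-cli LASTSAVE    # poll until this timestamp advances
cp /var/lib/redis/dump.rdb ~/some-other-location/06-24-2012.rdb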
| Redis | 11,182,012 | 33 |
I read about HStores in Postgres, something that is offered by Redis as well.
Our application is written in NodeJS. Two questions:
Performance-wise, is Postgres HStore comparable to Redis?
for session storage, what would you recommend--Redis, or Postgres with some other kind of data type (like HStore, or maybe even the usual relational table)? And how bad is one option vs the other?
Another constraint, is that we will need to use the data that is already in PostgreSQL and combine it with the active sessions (which we aren't sure where to store at this point, if in Redis or PostgreSQL).
From what we have read, we have been pointed out to use Redis as a Session manager, but due to the PostgreSQL constraint, we are not sure how to combine both and the possible performance issues that may arise.
Thanks!
| Redis will be faster than Postgres because Pg offers reliability guarantees on your data (when the transaction is committed, it is guaranteed to be on disk), whereas Redis has a concept of writing to disk when it feels like it, so it shouldn't be used for critical data.
Redis seems like a good option for your session data, or heck even store in a cookie or in your client side Javascript. But if you need data from your database on every request then it might not be even worth involving Redis. It very much depends on your application.
| Redis | 9,153,157 | 33 |
Installing redis is really easy. I have done it on several VMs. But on one instance, I am facing the following problem.
[root@server redis-2.4.2]# make
cd src && make all
make[1]: Entering directory `/home/user/redis-2.4.2/src'
MAKE hiredis
make[2]: Entering directory `/home/user/redis-2.4.2/deps/hiredis'
cc -c -std=c99 -pedantic -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb net.c
make[2]: cc: Command not found
make[2]: *** [net.o] Error 127
make[2]: Leaving directory `/home/user/redis-2.4.2/deps/hiredis'
make[1]: *** [dependencies] Error 2
make[1]: Leaving directory `/home/user/redis-2.4.2/src'
make: *** [all] Error 2
Is there any other way to install redis?
| The error make[2]: cc: Command not found means no C compiler is installed; installing the build tools below fixes it.
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
sudo apt-get install make
sudo apt-get install gcc
sudo apt-get install tcl
sudo apt-get install build-essential
sudo apt-get update
## if there is another error like "fatal error: jemalloc/jemalloc.h: No such file or directory"
## just run "make distclean"
make
make test
| Redis | 8,131,008 | 33 |
I have Spring Redis working using spring-data-redis with all default configuration likes localhost default port and so on.
Now I am trying to make the same configuration by configuring it in application.properties file. But I cannot figure out how should I create beans exactly that my property values are read.
Redis Configuration File
@EnableRedisHttpSession
@Configuration
public class SpringSessionRedisConfiguration {
@Bean
JedisConnectionFactory connectionFactory() {
return new JedisConnectionFactory();
}
@Autowired
@Bean
RedisCacheManager redisCacheManager(final StringRedisTemplate stringRedisTemplate) {
return new RedisCacheManager(stringRedisTemplate);
}
@Autowired
@Bean
StringRedisTemplate template(final RedisConnectionFactory connectionFactory) {
return new StringRedisTemplate(connectionFactory);
}
}
Standard Parameters in application.properties
spring.redis.sentinel.master=themaster
spring.redis.sentinel.nodes=192.168.188.231:26379
spring.redis.password=12345
What I tried,
I can possibly use @PropertySource and then inject @Value and get the values. But I don't want to do that as those properties are not defined by me but are from Spring.
In this documentation Spring Redis Documentation, it only says that it can be configured using properties but doesn't show concrete example.
I also went through Spring Data Redis API classes, and found that RedisProperties should help me, but still cannot figure out how exactly to tell Spring to read from properties file.
| You can use @PropertySource to read options from application.properties or any other property file you want. Please look at the PropertySource usage example and a working example of spring-redis-cache usage. Or look at this small sample:
@Configuration
@PropertySource("application.properties")
public class SpringSessionRedisConfiguration {
@Value("${redis.hostname}")
private String redisHostName;
@Value("${redis.port}")
private int redisPort;
@Bean
public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
return new PropertySourcesPlaceholderConfigurer();
}
@Bean
JedisConnectionFactory jedisConnectionFactory() {
JedisConnectionFactory factory = new JedisConnectionFactory();
factory.setHostName(redisHostName);
factory.setPort(redisPort);
factory.setUsePool(true);
return factory;
}
@Bean
RedisTemplate<Object, Object> redisTemplate() {
RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<Object, Object>();
redisTemplate.setConnectionFactory(jedisConnectionFactory());
return redisTemplate;
}
@Bean
RedisCacheManager cacheManager() {
RedisCacheManager redisCacheManager = new RedisCacheManager(redisTemplate());
return redisCacheManager;
}
}
At present (December 2015), the spring.redis.sentinel options in application.properties have limited support through RedisSentinelConfiguration:
Please note that currently only Jedis and Lettuce support Redis Sentinel.
You may read more about this in official documentation.
| Redis | 34,201,135 | 32 |
The thing I'm trying to implement is an id table. Basically it has the structure (user_id, lecturer_id), in which user_id refers to the primary key in my User table and lecturer_id refers to the primary key of my Lecturer table.
I'm trying to implement this in redis, but if I set the key to the User's primary id, then for a query like "get all the records with lecturer_id = 5" I won't be able to reach the data in O(1) time, since lecturer is not the key but the value.
How can I form a structure like the id table I mentioned above, or does Redis not support that?
| One of the things you learn fast while working with redis is that you get to design your data structure around your accessing needs, specially when it comes to relations (it's not a relational database after all)
There is no way to search by "value" with a O(1) time complexity as you already noticed, but there are ways to approach what you describe using redis. Here's what I would recommend:
Store your user data by user id (in e.g. a hash) as you are already doing.
Have an additional set for each lecturer id containing all user ids that correspond to the lecturer id in question.
This might seem like duplicating the data of the relation, since your user data would have to store the lecturer id and your lecturer data would store user ids, but that's the (tiny) price to pay if one is to build relations in a non-relational data store like redis. In practical terms this works well; memory is rarely a bottleneck for small-ish data-sets (think thousands of ids).
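In redis-cli terms, the layout sketched above looks like this (key names are just an example):
HMSET user:42 name "Bob" lecturer_id 5   # user data keyed by user id
SADD lecturer:5:users 42 99              # all user ids for lecturer 5
SMEMBERS lecturer:5:users                # fetch them in O(set size)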
To get a better picture of how people use redis to model applications with relations, I recommend reading Design and implementation of a simple Twitter clone and the source code of Lamernews, both of which are written by redis author Salvatore Sanfilippo.
| Redis | 12,745,818 | 32 |
I am using Ubuntu to develop my website. Recently, I started to use redis.
When I start my computer, redis-server starts on its own.
How can I stop redis-server from starting by itself?
| It seems that the redis-server package uses rc.d scripts, and the preferred way to deal with them in Ubuntu is using update-rc.d:
sudo update-rc.d redis-server disable
Should do the trick. You can also disable it in a certain runlevel only:
sudo update-rc.d redis-server disable 2
| Redis | 11,857,198 | 32 |
Does anybody know a good solution for export/import in Redis?
Generally, I need to dump the DB from one server (and possibly edit the dump) and load it into another one (e.g. localhost).
Maybe some scripts?
| Redis has two binary format files supported: RDB and AOF.
RDB is a point-in-time dump, like what you asked for. You can call SAVE (or BGSAVE) to force an RDB snapshot. It will be written to the file named by your dbfilename setting, or dump.rdb in the current working directory if that setting is missing.
More Info:
http://redis.io/topics/persistence
| Redis | 8,704,805 | 32 |