question | answer | tag | question_id | score
---|---|---|---|---|
How can I SELECT the last row in a MySQL table?
I'm INSERTing data and I need to retrieve a column value from the previous row.
There's an auto_increment in the table.
|
Yes, there's an auto_increment in there
If you want the last of all the rows in the table, then this is finally the time where MAX(id) is the right answer! Kind of:
SELECT fields FROM table ORDER BY id DESC LIMIT 1;
| MySQL | 4,073,923 | 250 |
How can I drop the "Unique Key Constraint" on a column of a MySQL table using phpMyAdmin?
| A unique constraint is also an index.
First use SHOW INDEX FROM tbl_name to find out the name of the index. The name of the index is stored in the column called key_name in the results of that query.
Then you can use DROP INDEX:
DROP INDEX index_name ON tbl_name
or the ALTER TABLE syntax:
ALTER TABLE tbl_name DROP INDEX index_name
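A minimal worked example, assuming a hypothetical table called customers with a unique index named email_unique:
SHOW INDEX FROM customers;                      -- the index name is in the key_name column
ALTER TABLE customers DROP INDEX email_unique;  -- drop it by that name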
| MySQL | 3,487,691 | 250 |
I want to remove the password for user root in localhost. How can I do that? By mistake I have set the password of root user. That's why phpmyadmin is giving an error:
#1045 - Access denied for user 'root'@'localhost' (using password: NO)
| You need to set the password for root@localhost to be blank. There are two ways:
The MySQL SET PASSWORD command:
SET PASSWORD FOR root@localhost=PASSWORD(''); -- MySQL 5.x
SET PASSWORD FOR root@localhost=''; -- MySQL 8.x
Using the command-line mysqladmin tool:
mysqladmin -u root -pType_in_your_current_password_here password ''
| MySQL | 3,032,054 | 250 |
Background
I'm a first year CS student and I work part time for my dad's small business. I don't have any experience in real world application development. I have written scripts in Python, some coursework in C, but nothing like this.
My dad has a small training business and currently all classes are scheduled, recorded and followed up via an external web application. There is an export/"reports" feature but it is very generic and we need specific reports. We don't have access to the actual database to run the queries. I've been asked to set up a custom reporting system.
My idea is to create the generic CSV exports and import them (probably with Python) into a MySQL database hosted in the office every night, from where I can run the specific queries that are needed. I don't have experience with databases but understand the very basics. I've read a little about database creation and normal forms.
We may start having international clients soon, so I want the database to not explode if/when that happens. We also currently have a couple of big corporations as clients, with different divisions (e.g. ACME parent company, ACME healthcare division, ACME bodycare division).
The schema I have come up with is the following:
From the client perspective:
Clients is the main table
Clients are linked to the department they work for
Departments can be scattered around a country: HR in London, Marketing in Swansea, etc.
Departments are linked to the division of a company
Divisions are linked to the parent company
From the classes perspective:
Sessions is the main table
A teacher is linked to each session
A statusid is given to each session. E.g. 0 - Completed, 1 - Cancelled
Sessions are grouped into "packs" of an arbitrary size
Each pack is assigned to a client
I "designed" (more like scribbled) the schema on a piece of paper, trying to keep it normalised to the 3rd form. I then plugged it into MySQL Workbench and it made it all pretty for me: (Click here for full-sized graphic)
(source: maian.org)
Example queries I'll be running
Which clients with credit still left are inactive (those without a class scheduled in the future)
What is the attendance rate per client/department/division (measured by the status id in each session)
How many classes has a teacher had in a month
Flag clients who have low attendance rate
Custom reports for HR departments with attendance rates of people in their division
Question(s)
Is this overengineered or am I headed the right way?
Will the need to join multiple tables for most queries result in a big performance hit?
I have added a 'lastsession' column to clients, as it is probably going to be a common query. Is this a good idea or should I keep the database strictly normalised?
Thanks for your time
| Some more answers to your questions:
1) You're pretty much on target for someone who is approaching a problem like this for the first time. I think the pointers from others on this question thus far pretty much cover it. Good job!
2 & 3) The performance hit you will take will largely be dependent on having and optimizing the right indexes for your particular queries / procedures and more importantly the volume of records. Unless you are talking about well over a million records in your main tables you seem to be on track to having a sufficiently mainstream design that performance will not be an issue on reasonable hardware.
That said, and this relates to your question 3, with the start you have you probably shouldn't really be overly worried about performance or hyper-sensitivity to normalization orthodoxy here. This is a reporting server you are building, not a transaction based application backend, which would have a much different profile with respect to the importance of performance or normalization. A database backing a live signup and scheduling application has to be mindful of queries that take seconds to return data. Not only does a report server function have more tolerance for complex and lengthy queries, but the strategies to improve performance are much different.
For example, in a transaction based application environment your performance improvement options might include refactoring your stored procedures and table structures to the nth degree, or developing a caching strategy for small amounts of commonly requested data. In a reporting environment you can certainly do this but you can have an even greater impact on performance by introducing a snapshot mechanism where a scheduled process runs and stores pre-configured reports and your users access the snapshot data with no stress on your db tier on a per request basis.
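As a rough sketch of that snapshot idea (the table and column names here are hypothetical, loosely based on the schema described above), a nightly job rebuilds a pre-aggregated table that the reports then read directly:
-- rebuild the snapshot once per night (e.g. from cron or a MySQL scheduled event)
DROP TABLE IF EXISTS report_attendance_snapshot;
CREATE TABLE report_attendance_snapshot AS
SELECT c.id AS client_id,
       SUM(s.statusid = 0) AS completed_sessions,   -- statusid 0 = Completed
       COUNT(*)            AS total_sessions
FROM   clients  c
JOIN   packs    p ON p.client_id = c.id             -- join columns are assumptions
JOIN   sessions s ON s.pack_id   = p.id
GROUP  BY c.id;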
All of this is a long-winded rant to illustrate that what design principles and tricks you employ may differ given the role of the db you're creating. I hope that's helpful.
| MySQL | 2,320,633 | 250 |
I know how to use INDEX as in the following code. And I know how to use foreign key and primary key.
CREATE TABLE tasks (
task_id int unsigned NOT NULL AUTO_INCREMENT,
parent_id int unsigned NOT NULL DEFAULT 0,
task varchar(100) NOT NULL,
date_added timestamp NOT NULL,
date_completed timestamp NULL,
PRIMARY KEY ( task_id ),
INDEX parent ( parent_id )
)
However I found a code using KEY instead of INDEX as following.
CREATE TABLE orders (
order_id int unsigned NOT NULL AUTO_INCREMENT,
-- etc
KEY order_date ( order_date )
)
I could not find any explanation on the official MySQL page. Could anyone tell me what the difference is between KEY and INDEX?
The only difference I see is that when I use KEY ..., I need to repeat the word, e.g. KEY order_date ( order_date ).
| There's no difference. They are synonyms, though INDEX should be preferred (as INDEX is ISO SQL compliant, while KEY is a MySQL-specific, non-portable, extension).
From the CREATE TABLE manual entry:
KEY is normally a synonym for INDEX. The key attribute PRIMARY KEY can also be specified as just KEY when given in a column definition. This was implemented for compatibility with other database systems.
By "The key attribute PRIMARY KEY can also be specified as just KEY when given in a column definition.", it means that these three CREATE TABLE statements below are equivalent and generate identical TABLE objects in the database:
CREATE TABLE orders1 (
order_id int PRIMARY KEY
);
CREATE TABLE orders2 (
order_id int KEY
);
CREATE TABLE orders3 (
order_id int NOT NULL,
PRIMARY KEY ( order_id )
);
...while these 2 statements below (for orders4, orders5) are equivalent to each other, but not to the 3 statements above, as here KEY and INDEX are synonyms for INDEX, not a PRIMARY KEY:
CREATE TABLE orders4 (
order_id int NOT NULL,
KEY ( order_id )
);
CREATE TABLE orders5 (
order_id int NOT NULL,
INDEX ( order_id )
);
...as the KEY ( order_id ) and INDEX ( order_id ) members do not define a PRIMARY KEY, they only define a generic INDEX object, which is nothing like a KEY at all (as it does not uniquely identify a row).
As can be seen by running SHOW CREATE TABLE orders1...5:
Table   | SHOW CREATE TABLE...
--------+--------------------------------------------------------------------------
orders1 | CREATE TABLE orders1 ( order_id int NOT NULL, PRIMARY KEY ( order_id ))
orders2 | CREATE TABLE orders2 ( order_id int NOT NULL, PRIMARY KEY ( order_id ))
orders3 | CREATE TABLE orders3 ( order_id int NOT NULL, PRIMARY KEY ( order_id ))
orders4 | CREATE TABLE orders4 ( order_id int NOT NULL, KEY ( order_id ))
orders5 | CREATE TABLE orders5 ( order_id int NOT NULL, KEY ( order_id ))
| MySQL | 1,401,572 | 250 |
Error Code: 2013. Lost connection to MySQL server during query
I am using MySQL Workbench. Also, I am running a batch of inserts, about 1000 lines total (Ex. INSERT INTO mytable SELECT * FROM mysource1; INSERT INTO mytable SELECT * FROM mysource2;...mysource3...mysource4 multiplied 1000 times) Each batch takes a considerable amount of time, some of them, more than 600 seconds.
How can I configure workbench, to continue working overnight, without stopping and without losing the connection?
| From a page that is now available only via the Internet Archive:
Go to Edit -> Preferences -> SQL Editor and set to a higher value this parameter: DBMS connection read time out (in seconds). For instance: 86400.
Close and reopen MySQL Workbench. Kill your previous query, which is probably still running, and run the query again.
| MySQL | 15,712,512 | 248 |
I have the following sql create statement
mysql> CREATE TABLE IF NOT EXISTS `erp`.`je_menus` (
-> `id` INT(11) NOT NULL AUTO_INCREMENT ,
-> `name` VARCHAR(100) NOT NULL ,
-> `description` VARCHAR(255) NOT NULL ,
-> `live_start_date` DATETIME NULL DEFAULT NULL ,
-> `live_end_date` DATETIME NULL DEFAULT NULL ,
-> `notes` VARCHAR(255) NULL ,
-> `create_date` TIMESTAMP NOT NULL DEFAULT '0000-00-00 00:00:00',
-> `created_by` INT(11) NOT NULL ,
-> `update_date` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ,
-> `updated_by` INT(11) NOT NULL ,
-> `status` VARCHAR(45) NOT NULL ,
-> PRIMARY KEY (`id`) )
-> ENGINE = InnoDB;
giving following error
ERROR 1067 (42000): Invalid default value for 'create_date'
What is the error here?
| That is because of server SQL Mode - NO_ZERO_DATE.
From the reference: NO_ZERO_DATE - In strict mode, doesn't allow '0000-00-00' as a valid date. You can still insert zero dates with the IGNORE option. When not in strict mode, the date is accepted but a warning is generated.
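To check which modes are active, and to relax the setting for the current session only (a workaround rather than a fix for the zero default), something like this should work:
SELECT @@sql_mode;                                               -- list the active modes
SET SESSION sql_mode = REPLACE(@@sql_mode, 'NO_ZERO_DATE', '');  -- drop NO_ZERO_DATE for this session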
| MySQL | 9,192,027 | 248 |
Is it possible to set a user variable based on the result of a query in MySQL?
What I want to achieve is something like this (we can assume that both USER and GROUP are unique):
set @user = 123456;
set @group = select GROUP from USER where User = @user;
select * from USER where GROUP = @group;
Please note that I know it's possible with nested queries, but I do not wish to do it that way.
| Yes, but you need to move the variable assignment into the query:
SET @user := 123456;
SELECT @group := `group` FROM user WHERE user = @user;
SELECT * FROM user WHERE `group` = @group;
Test case:
CREATE TABLE user (`user` int, `group` int);
INSERT INTO user VALUES (123456, 5);
INSERT INTO user VALUES (111111, 5);
Result:
SET @user := 123456;
SELECT @group := `group` FROM user WHERE user = @user;
SELECT * FROM user WHERE `group` = @group;
+--------+-------+
| user | group |
+--------+-------+
| 123456 | 5 |
| 111111 | 5 |
+--------+-------+
2 rows in set (0.00 sec)
Note that for SET, either = or := can be used as the assignment operator. However inside other statements, the assignment operator must be := and not = because = is treated as a comparison operator in non-SET statements.
UPDATE:
Further to comments below, you may also do the following:
SET @user := 123456;
SELECT `group` FROM user LIMIT 1 INTO @group;
SELECT * FROM user WHERE `group` = @group;
| MySQL | 3,888,735 | 248 |
I have a mysqldump backup of my mysql database consisting of all of our tables which is about 440 megs. I want to restore the contents of just one of the tables from the mysqldump. Is this possible? Theoretically, I could just cut out the section that rebuilds the table I want but I don't even know how to effectively edit a text document that size.
| You can try to use sed in order to extract only the table you want.
Let's say the name of your table is mytable and the file mysql.dump is the file containing your huge dump:
$ sed -n -e '/CREATE TABLE.*`mytable`/,/Table structure for table/p' mysql.dump > mytable.dump
This will copy into the file mytable.dump everything located between CREATE TABLE mytable and the "Table structure for table" comment that precedes the next table.
You can then adjust the file mytable.dump which contains the structure of the table mytable, and the data (a list of INSERT).
| MySQL | 1,013,852 | 248 |
I've just started with Laravel and I get the following error:
Unknown column 'updated_at' insert into gebruikers (naam, wachtwoord,
updated_at, created_at)
I know the error is from the timestamp column when you migrate a table, but I'm not using the updated_at field. I used to use it when I followed the Laravel tutorial, but now that I am making (or attempting to make) my own stuff, I get this error even though I don't use timestamps. I can't seem to find the place where it's being used. This is the code:
Controller
public function created()
{
if (!User::isValidRegister(Input::all())) {
return Redirect::back()->withInput()->withErrors(User::$errors);
}
// Register the new user or whatever.
$user = new User;
$user->naam = Input::get('naam');
$user->wachtwoord = Hash::make(Input::get('password'));
$user->save();
return Redirect::to('/users');
}
Route
Route::get('created', 'UserController@created');
Model
public static $rules_register = [
'naam' => 'unique:gebruikers,naam'
];
public static $errors;
protected $table = 'gebruikers';
public static function isValidRegister($data)
{
$validation = Validator::make($data, static::$rules_register);
if ($validation->passes()) {
return true;
}
static::$errors = $validation->messages();
return false;
}
I must be forgetting something... What am I doing wrong here?
| In the model, write the below code;
public $timestamps = false;
This would work.
Explanation: By default, Laravel expects created_at and updated_at columns in your table. Setting $timestamps to false overrides that default.
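Alternatively, if you would rather keep using timestamps, you could add the columns Laravel expects to the table instead; a rough sketch (the exact column definitions are an assumption):
ALTER TABLE gebruikers
  ADD COLUMN created_at TIMESTAMP NULL,
  ADD COLUMN updated_at TIMESTAMP NULL;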
| MySQL | 28,277,955 | 247 |
I am trying to create a container with a MySQL database and add a schema to this database.
My current Dockerfile is:
FROM mysql
MAINTAINER (me) <email>
# Copy the database schema to the /data directory
COPY files/epcis_schema.sql /data/epcis_schema.sql
# Change the working directory
WORKDIR data
CMD mysql -u $MYSQL_USER -p $MYSQL_PASSWORD $MYSQL_DATABASE < epcis_schema.sql
In order to create the container I am following the documentation provided on Docker and executing this command:
docker run --name ${CONTAINER_NAME} -e MYSQL_ROOT_PASSWORD=${DB_ROOT_PASSWORD} -e MYSQL_USER=${DB_USER} -e MYSQL_PASSWORD=${DB_USER_PASSWORD} -e MYSQL_DATABASE=${DB_NAME} -d mvpgomes/epcisdb
But when I execute this command the Container is not created and in the Container status it is possible to see that the CMD was not executed successfully, in fact only the mysql command is executed.
Anyway, is there a way to initialize the database with the schema or do I need to perform these operations manually?
| I had this same issue where I wanted to initialize my MySQL Docker instance's schema, but I ran into difficulty getting this working after doing some Googling and following others' examples. Here's how I solved it.
1) Dump your MySQL schema to a file.
mysqldump -h <your_mysql_host> -u <user_name> -p --no-data <schema_name> > schema.sql
2) Use the ADD command to add your schema file to the /docker-entrypoint-initdb.d directory in the Docker container. The docker-entrypoint.sh file will run any files in this directory ending with ".sql" against the MySQL database.
Dockerfile:
FROM mysql:5.7.15
MAINTAINER me
ENV MYSQL_DATABASE=<schema_name> \
MYSQL_ROOT_PASSWORD=<password>
ADD schema.sql /docker-entrypoint-initdb.d
EXPOSE 3306
3) Start up the Docker MySQL instance.
docker-compose build
docker-compose up
Thanks to Setting up MySQL and importing dump within Dockerfile for clueing me in on the docker-entrypoint.sh and the fact that it runs both SQL and shell scripts!
| MySQL | 29,145,370 | 245 |
I'm using now() in MySQL query.
INSERT INTO table SET data = '$data', date = now()
But I want to add 1 day to this date (so that date should contain tomorrow).
Is it possible?
| You can use:
NOW() + INTERVAL 1 DAY
If you are only interested in the date, not the date and time then you can use CURDATE instead of NOW:
CURDATE() + INTERVAL 1 DAY
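Applied to the insert from the question, that would look roughly like this:
INSERT INTO table SET data = '$data', date = NOW() + INTERVAL 1 DAY;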
| MySQL | 3,887,509 | 245 |
I have a table like this:
TITLE | DESCRIPTION
------+---------------------------
test1 | value blah blah value
test2 | value test
test3 | test test test
test4 | valuevaluevaluevaluevalue
I am trying to figure out how to return the number of times a string occurs in each DESCRIPTION. So, if I want to count the number of times 'value' appears, the SQL statement will return this:
TITLE | DESCRIPTION               | COUNT
------+---------------------------+------
test1 | value blah blah value     | 2
test2 | value test                | 1
test3 | test test test            | 0
test4 | valuevaluevaluevaluevalue | 5
Is there any way to do this? I do not want to use php at all, just mysql.
| This should do the trick:
SELECT
title,
description,
ROUND (
(
LENGTH(description)
- LENGTH( REPLACE ( description, "value", "") )
) / LENGTH("value")
) AS count
FROM <table>
| MySQL | 12,344,795 | 244 |
What's the equivalent to show tables (from MySQL) in PostgreSQL?
| From the psql command line interface,
First, choose your database
\c database_name
Then, this shows all tables in the current schema:
\dt
Programmatically (or from the psql interface too, of course):
SELECT * FROM pg_catalog.pg_tables;
The system tables live in the pg_catalog schema.
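Another option, if you only want the user tables in the default public schema, is the standard information_schema view:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';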
| PostgreSQL | 769,683 | 2,500 |
How do you perform the equivalent of Oracle's DESCRIBE TABLE in PostgreSQL with psql command?
| Try this (in the psql command-line tool):
\d+ tablename
See the manual for more info.
| PostgreSQL | 109,325 | 2,192 |
I'd like to select the first row of each set of rows grouped with a GROUP BY.
Specifically, if I've got a purchases table that looks like this:
SELECT * FROM purchases;
My Output:
id | customer | total
---+----------+------
 1 | Joe      |     5
 2 | Sally    |     3
 3 | Joe      |     2
 4 | Sally    |     1
I'd like to query for the id of the largest purchase (total) made by each customer. Something like this:
SELECT FIRST(id), customer, FIRST(total)
FROM purchases
GROUP BY customer
ORDER BY total DESC;
Expected Output:
FIRST(id) | customer | FIRST(total)
----------+----------+-------------
        1 | Joe      |            5
        2 | Sally    |            3
| DISTINCT ON is typically simplest and fastest for this in PostgreSQL.
(For performance optimization for certain workloads see below.)
SELECT DISTINCT ON (customer)
id, customer, total
FROM purchases
ORDER BY customer, total DESC, id;
Or shorter (if not as clear) with ordinal numbers of output columns:
SELECT DISTINCT ON (2)
id, customer, total
FROM purchases
ORDER BY 2, 3 DESC, 1;
If total can be null, add NULLS LAST:
...
ORDER BY customer, total DESC NULLS LAST, id;
Works either way, but you'll want to match existing indexes
db<>fiddle here
Major points
DISTINCT ON is a PostgreSQL extension of the standard, where only DISTINCT on the whole SELECT list is defined.
List any number of expressions in the DISTINCT ON clause, the combined row value defines duplicates. The manual:
Obviously, two rows are considered distinct if they differ in at least
one column value. Null values are considered equal in this
comparison.
Bold emphasis mine.
DISTINCT ON can be combined with ORDER BY. Leading expressions in ORDER BY must be in the set of expressions in DISTINCT ON, but you can rearrange order among those freely. Example.
You can add additional expressions to ORDER BY to pick a particular row from each group of peers. Or, as the manual puts it:
The DISTINCT ON expression(s) must match the leftmost ORDER BY
expression(s). The ORDER BY clause will normally contain additional
expression(s) that determine the desired precedence of rows within
each DISTINCT ON group.
I added id as last item to break ties:
"Pick the row with the smallest id from each group sharing the highest total."
To order results in a way that disagrees with the sort order determining the first per group, you can nest above query in an outer query with another ORDER BY. Example.
If total can be null, you most probably want the row with the greatest non-null value. Add NULLS LAST like demonstrated. See:
Sort by column ASC, but NULL values first?
The SELECT list is not constrained by expressions in DISTINCT ON or ORDER BY in any way:
You don't have to include any of the expressions in DISTINCT ON or ORDER BY.
You can include any other expression in the SELECT list. This is instrumental for replacing complex subqueries and aggregate / window functions.
I tested with Postgres versions 8.3 – 16. But the feature has been there at least since version 7.1, so basically always.
Index
The perfect index for the above query would be a multi-column index spanning all three columns in matching sequence and with matching sort order:
CREATE INDEX purchases_3c_idx ON purchases (customer, total DESC, id);
May be too specialized. But use it if read performance for the particular query is crucial. If you have DESC NULLS LAST in the query, use the same in the index so that sort order matches and the index is perfectly applicable.
Effectiveness / Performance optimization
Weigh cost and benefit before creating tailored indexes for each query. The potential of above index largely depends on data distribution.
The index is used because it delivers pre-sorted data. In Postgres 9.2 or later the query can also benefit from an index only scan if the index is smaller than the underlying table. The index has to be scanned in its entirety, though. Example.
For few rows per customer (high cardinality in column customer), this is very efficient. Even more so if you need sorted output anyway. The benefit shrinks with a growing number of rows per customer.
Ideally, you have enough work_mem to process the involved sort step in RAM and not spill to disk. But generally setting work_mem too high can have adverse effects. Consider SET LOCAL for exceptionally big queries. Find how much you need with EXPLAIN ANALYZE. Mention of "Disk:" in the sort step indicates the need for more:
Configuration parameter work_mem in PostgreSQL on Linux
Optimize simple query using ORDER BY date and text
For many rows per customer (low cardinality in column customer), an "index skip scan" or "loose index scan" would be (much) more efficient. But that's not implemented up to Postgres 16. Serious work to implement it one way or another has been ongoing for years now, but so far unsuccessful. See here and here.
For now, there are faster query techniques to substitute for this. In particular if you have a separate table holding unique customers, which is the typical use case. But also if you don't:
SELECT DISTINCT is slower than expected on my table in PostgreSQL
Optimize GROUP BY query to retrieve latest row per user
Optimize groupwise maximum query
Query last N related rows per row
Benchmarks
See separate answer.
| PostgreSQL | 3,800,551 | 2,017 |
What command or short key can I use to exit the PostgreSQL command line utility psql?
| Type \q and then press ENTER to quit psql.
UPDATE: 19-OCT-2018
As of PostgreSQL 11, the keywords "quit" and "exit" in the PostgreSQL command-line interface have been included to help make it easier to leave the command-line tool.
| PostgreSQL | 9,463,318 | 1,967 |
How do I change the password for a PostgreSQL user?
| To log in without a password:
sudo -u user_name psql db_name
To reset the password if you have forgotten:
ALTER USER user_name WITH PASSWORD 'new_password';
| PostgreSQL | 12,720,967 | 1,877 |
In MySQL, I used use database_name;
What's the psql equivalent?
| In PostgreSQL, you can use the \connect meta-command of the client tool psql:
\connect DBNAME
or in short:
\c DBNAME
| PostgreSQL | 3,949,876 | 1,536 |
How can I find out which version of PostgreSQL the server is running? I'm in a corporate environment (running Debian Linux) and didn't install it myself. I access the databases using Navicat or phpPgAdmin (if that helps). I also don't have shell access to the server running the database.
| Run this query from PostgreSQL:
SELECT version();
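If you only need the bare version number, without the compiler and build details, this also works from any SQL client:
SHOW server_version;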
| PostgreSQL | 13,733,719 | 1,358 |
I'm getting the error:
FATAL: Peer authentication failed for user "postgres"
when I try to make postgres work with Rails.
Here's my pg_hba.conf, my database.yml, and a dump of the full trace.
I changed authentication to md5 in pg_hba and tried different things, but none seem to work.
I also tried creating a new user and database as per Rails 3.2, FATAL: Peer authentication failed for user (PG::Error)
But they don't show up on pgadmin or even when I run sudo -u postgres psql -l.
Any idea where I'm going wrong?
| The problem is still your pg_hba.conf file*.
This line:
local all postgres peer
Should be:
local all postgres md5
After altering this file, don't forget to restart your PostgreSQL server. If you're on Linux, that would be sudo systemctl restart postgresql (on older systems: sudo service postgresql restart).
Locating hba.conf
Note that the location of this file isn't very consistent.
You can use locate pg_hba.conf or ask PostgreSQL SHOW hba_file; to discover the file location.
Usual locations are /etc/postgresql/[version]/main/pg_hba.conf and /var/lib/pgsql/data/pg_hba.conf.
These are brief descriptions of the peer vs md5 options according to the official PostgreSQL docs on authentication methods.
Peer authentication
The peer authentication method works by obtaining the client's
operating system user name from the kernel and using it as the allowed
database user name (with optional user name mapping). This method is
only supported on local connections.
Password authentication
The password-based authentication methods are md5 and password. These
methods operate similarly except for the way that the password is sent
across the connection, namely MD5-hashed and clear-text respectively.
If you are at all concerned about password "sniffing" attacks then md5
is preferred. Plain password should always be avoided if possible.
However, md5 cannot be used with the db_user_namespace feature. If the
connection is protected by SSL encryption then password can be used
safely (though SSL certificate authentication might be a better choice
if one is depending on using SSL).
| PostgreSQL | 18,664,074 | 1,213 |
Final update:
I had forgotten to run the initdb command.
By running this command
ps auxwww | grep postgres
I see that postgres is not running
> ps auxwww | grep postgres
remcat 1789 0.0 0.0 2434892 480 s000 R+ 11:28PM 0:00.00 grep postgres
This raises the question:
How do I start the PostgreSQL server?
Update:
> pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
server starting
sh: /usr/local/var/postgres/server.log: No such file or directory
Update 2:
The touch was not successful, so I did this instead:
> mkdir /usr/local/var/postgres
> vi /usr/local/var/postgres/server.log
> ls /usr/local/var/postgres/
server.log
But when I try to start the Ruby on Rails server, I still see this:
Is the server running on host "localhost" and accepting
TCP/IP connections on port 5432?
Update 3:
> pg_ctl -D /usr/local/var/postgres status
pg_ctl: no server running
Update 4:
I found that there wasn't any pg_hba.conf file (only the file pg_hba.conf.sample), so I modified the sample and renamed it (to remove the .sample). Here are the contents:
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
But I don't understand this:
> pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
server starting
> pg_ctl -D /usr/local/var/postgres status
pg_ctl: no server running
Also:
sudo find / -name postgresql.conf
find: /dev/fd/3: Not a directory
find: /dev/fd/4: Not a directory
Update 5:
sudo pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
Password:
pg_ctl: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will own the server process.
Update 6:
This seems odd:
> egrep 'listen|port' /usr/local/var/postgres/postgresql.conf
egrep: /usr/local/var/postgres/postgresql.conf: No such file or directory
Though, I did do this:
>sudo find / -name "*postgresql.conf*"
find: /dev/fd/3: Not a directory
find: /dev/fd/4: Not a directory
/usr/local/Cellar/postgresql/9.0.4/share/postgresql/postgresql.conf.sample
/usr/share/postgresql/postgresql.conf.sample
So I did this:
egrep 'listen|port' /usr/local/Cellar/postgresql/9.0.4/share/postgresql/postgresql.conf.sample
#listen_addresses = 'localhost' # what IP address(es) to listen on;
#port = 5432 # (change requires restart)
# supported by the operating system:
# %r = remote host and port
So I tried this:
> cp /usr/local/Cellar/postgresql/9.0.4/share/postgresql/postgresql.conf.sample /usr/local/Cellar/postgresql/9.0.4/share/postgresql/postgresql.conf
> cp /usr/share/postgresql/postgresql.conf.sample /usr/share/postgresql/postgresql.conf
I am still getting the same "Is the server running?" message.
| The Homebrew package manager includes launchctl plists to start automatically. For more information, run brew info postgres.
Start manually
pg_ctl -D /usr/local/var/postgres start
Stop manually
pg_ctl -D /usr/local/var/postgres stop
Start automatically
"To have launchd start postgresql now and restart at login:"
brew services start postgresql
What is the result of pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start?
What is the result of pg_ctl -D /usr/local/var/postgres status?
Are there any error messages in the server.log?
Make sure tcp localhost connections are enabled in pg_hba.conf:
# IPv4 local connections:
host all all 127.0.0.1/32 trust
Check the listen_addresses and port in postgresql.conf:
egrep 'listen|port' /usr/local/var/postgres/postgresql.conf
#listen_addresses = 'localhost' # What IP address(es) to listen on;
#port = 5432 # (change requires restart)
Cleaning up
PostgreSQL was most likely installed via Homebrew, Fink, MacPorts or the EnterpriseDB installer.
Check the output of the following commands to determine which package manager it was installed with:
brew && brew list|grep postgres
fink && fink list|grep postgres
port && port installed|grep postgres
| PostgreSQL | 7,975,556 | 1,186 |
What is the easiest way to save PL/pgSQL output from a PostgreSQL database to a CSV file?
I'm using PostgreSQL 8.4 with pgAdmin III and PSQL plugin where I run queries from.
| Do you want the resulting file on the server, or on the client?
Server side
If you want something easy to re-use or automate, you can use Postgresql's built in COPY command. e.g.
Copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER;
This approach runs entirely on the remote server - it can't write to your local PC. It also needs to be run as a Postgres "superuser" (normally called "postgres") because Postgres can't stop it doing nasty things with that machine's local filesystem.
That doesn't actually mean you have to be connected as a superuser (automating that would be a security risk of a different kind), because you can use the SECURITY DEFINER option to CREATE FUNCTION to make a function which runs as though you were a superuser.
The crucial part is that your function is there to perform additional checks, not just by-pass the security - so you could write a function which exports the exact data you need, or you could write something which can accept various options as long as they meet a strict whitelist. You need to check two things:
Which files should the user be allowed to read/write on disk? This might be a particular directory, for instance, and the filename might have to have a suitable prefix or extension.
Which tables should the user be able to read/write in the database? This would normally be defined by GRANTs in the database, but the function is now running as a superuser, so tables which would normally be "out of bounds" will be fully accessible. You probably don’t want to let someone invoke your function and add rows on the end of your “users” table…
I've written a blog post expanding on this approach, including some examples of functions that export (or import) files and tables meeting strict conditions.
Client side
The other approach is to do the file handling on the client side, i.e. in your application or script. The Postgres server doesn't need to know what file you're copying to, it just spits out the data and the client puts it somewhere.
The underlying syntax for this is the COPY TO STDOUT command, and graphical tools like pgAdmin will wrap it for you in a nice dialog.
The psql command-line client has a special "meta-command" called \copy, which takes all the same options as the "real" COPY, but is run inside the client:
\copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ',' HEADER
Note that there is no terminating ;, because meta-commands are terminated by newline, unlike SQL commands.
From the docs:
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
Your application programming language may also have support for pushing or fetching the data, but you cannot generally use COPY FROM STDIN/TO STDOUT within a standard SQL statement, because there is no way of connecting the input/output stream. PHP's PostgreSQL handler (not PDO) includes very basic pg_copy_from and pg_copy_to functions which copy to/from a PHP array, which may not be efficient for large data sets.
| PostgreSQL | 1,517,635 | 1,127 |
I'm setting up my PostgreSQL 9.1. I can't do anything with PostgreSQL: can't createdb, can't createuser; all operations return the error message
Fatal: role h9uest does not exist
h9uest is my account name, and I installed PostgreSQL 9.1 with sudo apt-get install under this account.
Similar error persists for the root account.
| Use the operating system user postgres to create your database - as long as you haven't set up a database role with the necessary privileges that corresponds to your operating system user of the same name (h9uest in your case):
sudo -u postgres -i
As recommended here or here.
Then try again. Type exit when done with operating as system user postgres.
Or execute the single createuser command as postgres with sudo, as demonstrated by dsrees in another answer.
The point is to use the operating system user matching the database role of the same name to be granted access via ident authentication. postgres is the default operating system user to have initialized the database cluster. The manual:
In order to bootstrap the database system, a freshly initialized
system always contains one predefined role. This role is always a
“superuser”, and by default (unless altered when running initdb) it
will have the same name as the operating system user that initialized
the database cluster. Customarily, this role will be named postgres.
In order to create more roles you first have to connect as this
initial role.
I have heard of odd setups with non-standard user names or where the operating system user does not exist. You'd need to adapt your strategy there.
Read about database roles and client authentication in the manual.
| PostgreSQL | 11,919,391 | 1,075 |
What's the difference between the text data type and the character varying (varchar) data types?
According to the documentation
If character varying is used without length specifier, the type accepts strings of any size. The latter is a PostgreSQL extension.
and
In addition, PostgreSQL provides the text type, which stores strings of any length. Although the type text is not in the SQL standard, several other SQL database management systems have it as well.
So what's the difference?
| There is no difference; under the hood it's all varlena (variable-length array).
Check this article from Depesz: http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/
A couple of highlights:
To sum it all up:
char(n) – takes too much space when dealing with values shorter than n (pads them to n), and can lead to subtle errors because of adding trailing
spaces, plus it is problematic to change the limit
varchar(n) – it's problematic to change the limit in live environment (requires exclusive lock while altering table)
varchar – just like text
text – for me a winner – over (n) data types because it lacks their problems, and over varchar – because it has distinct name
The article does detailed testing to show that the performance of inserts and selects for all 4 data types are similar. It also takes a detailed look at alternate ways on constraining the length when needed. Function based constraints or domains provide the advantage of instant increase of the length constraint, and on the basis that decreasing a string length constraint is rare, depesz concludes that one of them is usually the best choice for a length limit.
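A minimal sketch of those two alternatives for constraining length on a text column (the names here are illustrative, not from the article):
-- table-level CHECK constraint: the limit can be raised later without rewriting the column type
CREATE TABLE example (note text CHECK (length(note) <= 255));
-- or a reusable domain carrying the same check
CREATE DOMAIN text_255 AS text CHECK (length(VALUE) <= 255);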
| PostgreSQL | 4,848,964 | 1,029 |
I'm using Postgres.app for Mac. I've used it in the past on other machines but it's giving me some trouble when installing on my MacBook. I've installed the application and I ran:
psql -h localhost
It returns:
psql: FATAL: database "<user>" does not exist
It seems I can't even run the console to create the database that it's attempting to find. The same thing happens when I just run:
psql
or if I launch psql from the application drop down menu:
Machine stats:
OSX 10.8.4
psql (PostgreSQL) 9.2.4
Any help is appreciated.
I've also attempted to install PostgreSQL via Homebrew and I'm getting the same issue. I've also read the application's documentation that states:
When Postgres.app first starts up, it creates the $USER database,
which is the default database for psql when none is specified. The
default user is $USER, with no password.
So it would seem the application is not creating $USER. However, I've installed->uninstalled-reinstalled several times now so it must be something with my machine.
I found the answer but I'm not sure exactly how it works as the user who answered on this thread -> Getting Postgresql Running In Mac: Database "postgres" does not exist didn't follow up. I used the following command to get psql to open:
psql -d template1
| It appears that your package manager failed to create the database named $user for you. The reason that
psql -d template1
works for you is that template1 is a database created by postgres itself, and is present on all installations.
You are apparently able to log in to template1, so you must have some rights assigned to you by the database. Try this at a shell prompt:
createdb
and then see if you can log in again with
psql -h localhost
This will simply create a database for your login user, which I think is what you are looking for. If createdb fails, then you don't have enough rights to make your own database, and you will have to figure out how to fix the homebrew package.
| PostgreSQL | 17,633,422 | 1,018 |
I am using the Ruby on Rails 3.1 pre version. I like to use PostgreSQL, but the problem is installing the pg gem. It gives me the following error:
$ gem install pg
Building native extensions. This could take a while...
ERROR: Error installing pg:
ERROR: Failed to build gem native extension.
/home/u/.rvm/rubies/ruby-1.9.2-p0/bin/ruby extconf.rb
checking for pg_config... no
No pg_config... trying anyway. If building fails, please try again with
--with-pg-config=/path/to/pg_config
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers. Check the mkmf.log file for more
details. You may need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/home/u/.rvm/rubies/ruby-1.9.2-p0/bin/ruby
--with-pg
--without-pg
--with-pg-dir
--without-pg-dir
--with-pg-include
--without-pg-include=${pg-dir}/include
--with-pg-lib
--without-pg-lib=${pg-dir}/lib
--with-pg-config
--without-pg-config
--with-pg_config
--without-pg_config
Gem files will remain installed in /home/u/.rvm/gems/ruby-1.9.2-p0/gems/pg-0.11.0 for inspection.
Results logged to /home/u/.rvm/gems/ruby-1.9.2-p0/gems/pg-0.11.0/ext/gem_make.out
How do I solve this problem?
| It looks like in Ubuntu that header is part of the libpq-dev package (at least in the following Ubuntu versions:
11.04 (Natty Narwhal), 10.04 (Lucid Lynx), 11.10 (Oneiric Ocelot), 12.04 (Precise Pangolin), 14.04 (Trusty Tahr) and 18.04 (Bionic Beaver)):
...
/usr/include/postgresql/libpq-fe.h
...
So try installing libpq-dev or its equivalent for your OS:
For Ubuntu/Debian systems: sudo apt-get install libpq-dev
On Red Hat Linux (RHEL) systems: yum install postgresql-devel
For Mac Homebrew: brew install postgresql
For Mac MacPorts PostgreSQL: gem install pg -- --with-pg-config=/opt/local/lib/postgresql[version number]/bin/pg_config
For OpenSuse: zypper in postgresql-devel
For ArchLinux: pacman -S postgresql-libs
| PostgreSQL | 6,040,583 | 992 |
What's the correct way to copy entire database (its structure and data) to a new one in pgAdmin?
| Postgres allows the use of any existing database on the server as a template when creating a new database. I'm not sure whether pgAdmin gives you the option on the create database dialog but you should be able to execute the following in a query window if it doesn't:
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
Still, you may get:
ERROR: source database "originaldb" is being accessed by other users
To disconnect all other users from the database, you can use this query:
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'originaldb' AND pid <> pg_backend_pid();
| PostgreSQL | 876,522 | 915 |
I need to write a script that will drop a PostgreSQL database. There may be a lot of connections to it, but the script should ignore that.
The standard DROP DATABASE db_name query doesn't work when there are open connections.
How can I solve the problem?
| This will drop existing connections except for yours:
Query pg_stat_activity and get the pid values you want to kill, then issue SELECT pg_terminate_backend(pid int) to them.
PostgreSQL 9.2 and above:
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND pid <> pg_backend_pid();
PostgreSQL 9.1 and below:
SELECT pg_terminate_backend(pg_stat_activity.procpid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'TARGET_DB' -- ← change this to your DB
AND procpid <> pg_backend_pid();
Once you disconnect everyone, you will have to disconnect and issue the DROP DATABASE command from a connection to another database, i.e. not the one you're trying to drop.
Note the renaming of the procpid column to pid. See this mailing list thread.
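Since PostgreSQL 13, there is also a built-in option that terminates any remaining connections for you as part of the drop:
DROP DATABASE target_db WITH (FORCE);  -- substitute your database name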
| PostgreSQL | 5,408,156 | 853 |
Basically, I want to do this:
update vehicles_vehicle v
join shipments_shipment s on v.shipment_id=s.id
set v.price=s.price_per_vehicle;
I'm pretty sure that would work in MySQL (my background), but it doesn't seem to work in postgres. The error I get is:
ERROR: syntax error at or near "join"
LINE 1: update vehicles_vehicle v join shipments_shipment s on v.shi...
^
Surely there's an easy way to do this, but I can't find the proper syntax. So, how would I write this in PostgreSQL?
| The UPDATE syntax is:
[ WITH [ RECURSIVE ] with_query [, ...] ]
UPDATE [ ONLY ] table [ [ AS ] alias ]
SET { column = { expression | DEFAULT } |
( column [, ...] ) = ( { expression | DEFAULT } [, ...] ) } [, ...]
[ FROM from_list ]
[ WHERE condition | WHERE CURRENT OF cursor_name ]
[ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
In your case I think you want this:
UPDATE vehicles_vehicle AS v
SET price = s.price_per_vehicle
FROM shipments_shipment AS s
WHERE v.shipment_id = s.id
Or if you need to join on two or more tables:
UPDATE table_1 t1
SET foo = 'new_value'
FROM table_2 t2
JOIN table_3 t3 ON t3.id = t2.t3_id
WHERE
t2.id = t1.t2_id
AND t3.bar = True;
| PostgreSQL | 7,869,592 | 817 |
Several months ago I learned from an answer on Stack Overflow how to perform multiple updates at once in MySQL using the following syntax:
INSERT INTO table (id, field, field2) VALUES (1, A, X), (2, B, Y), (3, C, Z)
ON DUPLICATE KEY UPDATE field=VALUES(Col1), field2=VALUES(Col2);
I've now switched over to PostgreSQL and apparently this is not correct. It's referring to all the correct tables so I assume it's a matter of different keywords being used but I'm not sure where in the PostgreSQL documentation this is covered.
To clarify, I want to insert several things and if they already exist to update them.
| PostgreSQL since version 9.5 has UPSERT syntax, with an ON CONFLICT clause, using the following syntax (similar to MySQL):
INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'Z')
ON CONFLICT (id) DO UPDATE
SET column_1 = excluded.column_1,
column_2 = excluded.column_2;
Searching postgresql's email group archives for "upsert" leads to finding an example of doing what you possibly want to do, in the manual:
Example 38-2. Exceptions with UPDATE/INSERT
This example uses exception handling to perform either UPDATE or INSERT, as appropriate:
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
LOOP
-- first try to update the key
-- note that "a" must be unique
UPDATE db SET b = data WHERE a = key;
IF found THEN
RETURN;
END IF;
-- not there, so try to insert the key
-- if someone else inserts the same key concurrently,
-- we could get a unique-key failure
BEGIN
INSERT INTO db(a,b) VALUES (key, data);
RETURN;
EXCEPTION WHEN unique_violation THEN
-- do nothing, and loop to try the UPDATE again
END;
END LOOP;
END;
$$
LANGUAGE plpgsql;
SELECT merge_db(1, 'david');
SELECT merge_db(1, 'dennis');
There's possibly an example of how to do this in bulk, using CTEs in 9.1 and above, in the hackers mailing list:
WITH foos AS (SELECT (UNNEST(%foo[])).*)
updated as (UPDATE foo SET foo.a = foos.a ... RETURNING foo.id)
INSERT INTO foo SELECT foos.* FROM foos LEFT JOIN updated USING(id)
WHERE updated.id IS NULL;
See a_horse_with_no_name's answer for a clearer example.
| PostgreSQL | 1,109,061 | 808 |
I have some .sql files with thousands of INSERT statements in them and need to run these inserts on my PostgreSQL database in order to add them to a table. The files are that large that it is impossible to open them and copy the INSERT statements into an editor window and run them there. I found on the Internet that you can use the following by navigating to the bin folder of your PostgreSQL install:
psql -d myDataBase -a -f myInsertFile
In my case:
psql -d HIGHWAYS -a -f CLUSTER_1000M.sql
I am then asked for a password for my user, but I cannot enter anything and when I hit enter I get this error:
psql: FATAL: password authentication failed for user "myUsername"
Why won't it let me enter a password? Is there a way around this, as it is critical that I can run these scripts?
I got around this issue by adding a new entry in my pg_hba.conf file with the following structure:
# IPv6 local connections:
host myDbName myUserName ::1/128 trust
The pg_hba.conf file can usually be found in the 'data' folder of your PostgreSQL install.
| Of course you will get a fatal authentication error, because you did not include a user name...
Try this one, it is OK for me :)
psql -U username -d myDataBase -a -f myInsertFile
If the database is remote, use the same command with host
psql -h host -U username -d myDataBase -a -f myInsertFile
| PostgreSQL | 9,736,085 | 785 |
I have a table test(id,name).
I need to insert values like: user's log, 'my user', customer's.
insert into test values (1,'user's log');
insert into test values (2,''my users'');
insert into test values (3,'customer's');
I am getting an error if I run any of the above statements.
If there is any method to do this correctly please share. I don't want any prepared statements.
Is it possible using sql escaping mechanism?
|
String literals
Escaping single quotes ' by doubling them up → '' is the standard way and works of course:
'user's log' -- incorrect syntax (unbalanced quote)
'user''s log'
Plain single quotes (ASCII / UTF-8 code 39), mind you, not backticks `, which have no special purpose in Postgres (unlike certain other RDBMS) and not double-quotes ", used for identifiers.
In old versions or if you still run with standard_conforming_strings = off or, generally, if you prepend your string with E to declare Posix escape string syntax, you can also escape with the backslash \:
E'user\'s log'
Backslash itself is escaped with another backslash. But that's generally not preferable.
If you have to deal with many single quotes or multiple layers of escaping, you can avoid quoting hell in PostgreSQL with dollar-quoted strings:
'escape '' with '''''
$$escape ' with ''$$
To further avoid confusion among dollar-quotes, add a unique token to each pair:
$token$escape ' with ''$token$
Which can be nested any number of levels:
$token2$Inner string: $token1$escape ' with ''$token1$ is nested$token2$
Pay attention if the $ character should have special meaning in your client software. You may have to escape it in addition. This is not the case with standard PostgreSQL clients like psql or pgAdmin.
That is all very useful for writing PL/pgSQL functions or ad-hoc SQL commands. It cannot alleviate the need to use prepared statements or some other method to safeguard against SQL injection in your application when user input is possible, though. @Craig's answer has more on that. More details:
SQL injection in Postgres functions vs prepared queries
Values inside Postgres
When dealing with values inside the database, there are a couple of useful functions to quote strings properly:
quote_literal() or quote_nullable() - the latter outputs the unquoted string NULL for null input.
There is also quote_ident() to double-quote strings where needed to get valid SQL identifiers.
format() with the format specifier %L is equivalent to quote_nullable().
Like: format('%L', string_var)
concat() or concat_ws() are typically no good for this purpose as those do not escape nested single quotes and backslashes.
| PostgreSQL | 12,316,953 | 768 |
I'm a postgres novice.
I installed the postgres.app for mac. I was playing around with the psql commands and I accidentally dropped the postgres database. I don't know what was in it.
I'm currently working on a tutorial: http://www.rosslaird.com/blog/building-a-project-with-mezzanine/
And I'm stuck at sudo -u postgres psql postgres
ERROR MESSAGE: psql: FATAL: role "postgres" does not exist
$ which psql
/Applications/Postgres.app/Contents/MacOS/bin/psql
This is what prints out of psql -l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+------------+----------+---------+-------+---------------------------
user | user | UTF8 | en_US | en_US |
template0 | user | UTF8 | en_US | en_US | =c/user +
| | | | | user =CTc/user
template1 | user | UTF8 | en_US | en_US | =c/user +
| | | | | user =CTc/user
(3 rows)
So what are the steps I should take? Delete an everything related to psql and reinstall everything?
Thanks for the help guys!
| NOTE: If you installed postgres using homebrew, see the comments from @user3402754 and @originalhat below.
Note that the error message does NOT talk about a missing database, it talks about a missing role. Later in the login process it might also stumble over the missing database.
But the first step is to check the missing role: What is the output within psql of the command \du ? On my Ubuntu system the relevant line looks like this:
List of roles
Role name | Attributes | Member of
-----------+-----------------------------------+-----------
postgres | Superuser, Create role, Create DB | {}
If there is not at least one role with superuser, then you have a problem :-)
If there is one, you can use that to log in. And looking at the output of your \l command: the permissions for user on the template0 and template1 databases are the same as on my Ubuntu system for the superuser postgres. So I think your setup simply uses user as the superuser. So you could try this command to log in:
sudo -u user psql user
If user is really the DB superuser you can create another DB superuser and a private, empty database for him:
CREATE USER postgres SUPERUSER;
CREATE DATABASE postgres WITH OWNER postgres;
But since your postgres.app setup does not seem to do this, you also should not. Simply adapt the tutorial.
| PostgreSQL | 15,301,826 | 751 |
In postgres, how do I change an existing user to be a superuser? I don't want to delete the existing user, for various reasons.
# alter user myuser ...?
| ALTER USER myuser WITH SUPERUSER;
You can read more at the Documentation for ALTER USER
| PostgreSQL | 10,757,431 | 739 |
I ran into the problem that my primary key sequence is not in sync with my table rows.
That is, when I insert a new row I get a duplicate key error because the sequence implied in the serial datatype returns a number that already exists.
It seems to be caused by import/restores not maintaining the sequence properly.
| -- Login to psql and run the following
-- What is the result?
SELECT MAX(id) FROM your_table;
-- Then run...
-- This should be higher than the last result.
SELECT nextval('your_table_id_seq');
-- If it's not higher... run this set the sequence last to your highest id.
-- (wise to run a quick pg_dump first...)
BEGIN;
-- protect against concurrent inserts while you update the counter
LOCK TABLE your_table IN EXCLUSIVE MODE;
-- Update the sequence
SELECT setval('your_table_id_seq', COALESCE((SELECT MAX(id)+1 FROM your_table), 1), false);
COMMIT;
Source - Ruby Forum
| PostgreSQL | 244,243 | 737 |
We are switching hosts and the old one provided a SQL dump of the PostgreSQL database of our site.
Now, I'm trying to set this up on a local WAMP server to test this.
The only problem is that I have no idea how to import this database into the PostgreSQL 9 instance that I have set up.
I tried pgAdmin III but I can't seem to find an 'import' function. So I just opened the SQL editor, pasted the contents of the dump there and executed it; it creates the tables but keeps giving me errors when it tries to put the data in them.
ERROR: syntax error at or near "t"
LINE 474: t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view...
The lines:
COPY tb_abilities (active, creation, modtime, id, lang, title, description) FROM stdin;
t 2011-05-24 16:45:01.768633 2011-05-24 16:45:01.768633 view nl ...
I've also tried to do this with the command prompt but I can't find the command that I need.
If I do
psql mydatabase < C:/database/db-backup.sql;
I get the error
ERROR: syntax error at or near "psql"
LINE 1: psql mydatabase < C:/database/db-backu...
^
What's the best way to import the database?
| psql --username=<db_user_name> databasename < data_base_dump
That's the command you are looking for.
Beware: databasename must be created before importing.
Have a look at the PostgreSQL Docs Chapter 23. Backup and Restore.
| PostgreSQL | 6,842,393 | 710 |
I am trying to automate the database creation process with a shell script, and one thing I've hit a road block with is passing a password to psql.
Here is a bit of code from the shell script:
psql -U $DB_USER -h localhost -c"$DB_RECREATE_SQL"
How do I pass a password to psql in a non-interactive way?
| Set the PGPASSWORD environment variable inside the script before calling psql
PGPASSWORD=pass1234 psql -U MyUsername myDatabaseName
For reference, see http://www.postgresql.org/docs/current/static/libpq-envars.html
Edit
Since Postgres 9.2 there is also the option to specify a connection string or URI that can contain the username and password. Syntax is:
$ psql postgresql://[user[:password]@][host][:port][,...][/dbname][?param1=value1&...]
Using that is a security risk because the password is visible in plain text to other users who look at the command line of the running process, e.g. using ps (Linux), Process Explorer (Windows) or similar tools.
See also this question on Database Administrators
| PostgreSQL | 6,405,127 | 709 |
I'm switching from MySQL to PostgreSQL and I was wondering how can I have an INT column with AUTO INCREMENT. I saw in the PostgreSQL docs a datatype called SERIAL, but I get syntax errors when using it.
| Yes, SERIAL is the equivalent function.
CREATE TABLE foo (
id SERIAL,
bar varchar
);
INSERT INTO foo (bar) VALUES ('blah');
INSERT INTO foo (bar) VALUES ('blah');
SELECT * FROM foo;
 id | bar
----+------
  1 | blah
  2 | blah
SERIAL is just a CREATE TABLE-time macro around sequences. You cannot alter SERIAL onto an existing column.
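If you do need auto-increment behaviour on an existing column, a rough sketch of the usual workaround is the following (foo and foo_id_seq are just placeholder names):
CREATE SEQUENCE foo_id_seq OWNED BY foo.id;
ALTER TABLE foo ALTER COLUMN id SET DEFAULT nextval('foo_id_seq');
-- start the sequence after the largest existing id
SELECT setval('foo_id_seq', COALESCE(max(id), 0) + 1, false) FROM foo;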
| PostgreSQL | 787,722 | 696 |
My question is rather simple. I'm aware of the concept of a UUID and I want to generate one to refer to each 'item' from a 'store' in my DB with. Seems reasonable right?
The problem is the following line returns an error:
honeydb=# insert into items values(
uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
ERROR: function uuid_generate_v4() does not exist
LINE 2: uuid_generate_v4(), 54.321, 31, 'desc 1', 31.94);
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
I've read the page at: http://www.postgresql.org/docs/current/static/uuid-ossp.html
I'm running Postgres 8.4 on Ubuntu 10.04 x64.
| uuid-ossp is a contrib module, so it isn't loaded into the server by default. You must load it into your database to use it.
For modern PostgreSQL versions (9.1 and newer) that's easy:
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
but for 9.0 and below you must instead run the SQL script to load the extension. See the documentation for contrib modules in 8.4.
For Pg 9.1 and newer instead read the current contrib docs and CREATE EXTENSION. These features do not exist in 9.0 or older versions, like your 8.4.
If you're using a packaged version of PostgreSQL you might need to install a separate package containing the contrib modules and extensions. Search your package manager database for 'postgres' and 'contrib'.
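As a side note: on much newer PostgreSQL versions (13 and later) gen_random_uuid() is built in, so no extension is needed at all:
SELECT gen_random_uuid();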
| PostgreSQL | 12,505,158 | 663 |
After I did brew update and brew upgrade, my postgres got some problem. I tried to uninstall postgres and install it again, but it didn't work as well.
This is the error message. (I also got this error message when I try to do rake db:migrate)
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
How can I solve it?
Mac version: Mountain lion.
homebrew version: 0.9.3
postgres version: psql (PostgreSQL) 9.2.1
And this is what I did:
$ brew uninstall postgresql
Uninstalling /usr/local/Cellar/postgresql/9.2.1...
$ brew uninstall postgresql
Uninstalling /usr/local/Cellar/postgresql/9.1.4...
$ psql --version
bash: /usr/local/bin/psql: No such file or directory
$ brew install postgresql
==> Downloading http://ftp.postgresql.org/pub/source/v9.2.1/postgresql-9.2.1.tar.bz2
Already downloaded: /Library/Caches/Homebrew/postgresql-9.2.1.tar.bz2
......
......
==> Summary
/usr/local/Cellar/postgresql/9.2.1: 2814 files, 38M, built in 2.7 minutes
$ initdb /usr/local/var/postgres -E utf8
The files belonging to this database system will be owned by user "laigary".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.UTF-8".
The default text search configuration will be set to "english".
initdb: directory "/usr/local/var/postgres" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/usr/local/var/postgres" or run initdb
with an argument other than "/usr/local/var/postgres".
$ mkdir -p ~/Library/LaunchAgents
$ cp /usr/local/Cellar/postgresql/9.2.1/homebrew.mxcl.postgresql.plist ~/Library/LaunchAgents/
$ launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
homebrew.mxcl.postgresql: Already loaded
$ pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start
server starting
$ env ARCHFLAGS="-arch x86_64" gem install pg
Building native extensions. This could take a while...
Successfully installed pg-0.14.1
1 gem installed
$ psql --version
psql (PostgreSQL) 9.2.1
$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
Now, after I reinstalled homebrew, when I use $ psql, it doesn't show any error message.
But I run rake db:migrate in my Rails app, it shows:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `initialize'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `new'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `connect'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:329:in `initialize'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `new'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `postgresql_connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:309:in `new_connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:319:in `checkout_new_connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:241:in `block (2 levels) in checkout'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `loop'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `block in checkout'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:233:in `checkout'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in `block in connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:95:in `connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:404:in `retrieve_connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:170:in `retrieve_connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:144:in `connection'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:107:in `rescue in create_database'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:51:in `create_database'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (3 levels) in <top (required)>'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `each'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/gems/1.9.1/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (2 levels) in <top (required)>'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:205:in `call'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:205:in `block in execute'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:200:in `each'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:200:in `execute'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:158:in `block in invoke_with_call_chain'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:151:in `invoke_with_call_chain'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/task.rb:144:in `invoke'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:116:in `invoke_task'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:94:in `block (2 levels) in top_level'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:94:in `each'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:94:in `block in top_level'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:133:in `standard_exception_handling'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:88:in `top_level'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:66:in `block in run'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:133:in `standard_exception_handling'
/usr/local/Cellar/ruby/1.9.3-p327/lib/ruby/1.9.1/rake/application.rb:63:in `run'
/usr/local/bin/rake:32:in `<main>'
Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"riy_development", "pool"=>5, "username"=>nil, "password"=>nil}
Finally I've found a solution.
sudo mkdir /var/pgsql_socket/
sudo ln -s /private/tmp/.s.PGSQL.5432 /var/pgsql_socket/
This solution is a little tricky, but it works. I hope someone has a better solution.
Update
This works for me as well.
rm /usr/local/var/postgres/postmaster.pid
| Had a similar problem; a pid file was blocking postgres from starting up. To fix it:
$ rm /usr/local/var/postgres/postmaster.pid
$ brew services restart postgresql
and then all is well.
UPDATE:
For Apple M1 (Big Sur) users, do this instead:
$ rm /opt/homebrew/var/postgres/postmaster.pid
$ brew services restart postgresql
UPDATE (Nov 15 2022):
For Apple M1 / M2 (Ventura) users, just do this:
$ bundle install (just to make sure pg is installed)
$ brew services restart postgresql
| PostgreSQL | 13,410,686 | 646 |
I'm looking to copy a production PostgreSQL database to a development server. What's the quickest, easiest way to go about doing this?
| You don't need to create an intermediate file. You can do
pg_dump -C -h localhost -U localuser dbname | psql -h remotehost -U remoteuser dbname
or
pg_dump -C -h remotehost -U remoteuser dbname | psql -h localhost -U localuser dbname
using psql or pg_dump to connect to a remote host.
With a big database or a slow connection, dumping a file and transferring the file compressed may be faster.
As Kornel said there is no need to dump to a intermediate file, if you want to work compressed you can use a compressed tunnel
pg_dump -C dbname | bzip2 | ssh remoteuser@remotehost "bunzip2 | psql dbname"
or
pg_dump -C dbname | ssh -C remoteuser@remotehost "psql dbname"
but this solution also requires to get a session in both ends.
Note: pg_dump is for backing up and psql is for restoring. So, the first command in this answer is to copy from local to remote and the second one is from remote to local. More -> https://www.postgresql.org/docs/9.6/app-pgdump.html
| PostgreSQL | 1,237,725 | 645 |
I'd like to create a user in PostgreSQL that can only do SELECTs from a particular database. In MySQL the command would be:
GRANT SELECT ON mydb.* TO 'xxx'@'%' IDENTIFIED BY 'yyy';
What is the equivalent command or series of commands in PostgreSQL?
I tried...
postgres=# CREATE ROLE xxx LOGIN PASSWORD 'yyy';
postgres=# GRANT SELECT ON DATABASE mydb TO xxx;
But it appears that the only things you can grant on a database are CREATE, CONNECT, TEMPORARY, and TEMP.
| Grant usage/select to a single table
If you only grant CONNECT to a database, the user can connect but has no other privileges. You have to grant USAGE on namespaces (schemas) and SELECT on tables and views individually like so:
GRANT CONNECT ON DATABASE mydb TO xxx;
-- This assumes you're actually connected to mydb..
GRANT USAGE ON SCHEMA public TO xxx;
GRANT SELECT ON mytable TO xxx;
Multiple tables/views (PostgreSQL 9.0+)
In the latest versions of PostgreSQL, you can grant permissions on all tables/views/etc in the schema using a single command rather than having to type them one by one:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO xxx;
This only affects tables that have already been created. More powerfully, you can automatically have default roles assigned to new objects in future:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO xxx;
Note that by default this will only affect objects (tables) created by the user that issued this command: although it can also be set on any role that the issuing user is a member of. However, you don't pick up default privileges for all roles you're a member of when creating new objects... so there's still some faffing around. If you adopt the approach that a database has an owning role, and schema changes are performed as that owning role, then you should assign default privileges to that owning role. IMHO this is all a bit confusing and you may need to experiment to come up with a functional workflow.
Multiple tables/views (PostgreSQL versions before 9.0)
To avoid errors in lengthy, multi-table changes, it is recommended to use the following 'automatic' process to generate the required GRANT SELECT to each table/view:
SELECT 'GRANT SELECT ON ' || relname || ' TO xxx;'
FROM pg_class JOIN pg_namespace ON pg_namespace.oid = pg_class.relnamespace
WHERE nspname = 'public' AND relkind IN ('r', 'v', 'S');
This should output the relevant GRANT commands to GRANT SELECT on all tables, views, and sequences in public, for copy-n-paste love. Naturally, this will only be applied to tables that have already been created.
| PostgreSQL | 760,210 | 629 |
I am beginner to PostgreSQL.
I want to connect to another database from the query editor of Postgres - like the USE command of MySQL or MS SQL Server.
I found \c databasename by searching the Internet, but it runs only on psql. When I try it from the PostgreSQL query editor I get a syntax error.
I have to change the database by pgscripting. Does anyone know how to do it?
| When you get a connection to PostgreSQL it is always to a particular database. To access a different database, you must get a new connection.
Using \c in psql closes the old connection and acquires a new one, using the specified database and/or credentials. You get a whole new back-end process and everything.
Example:
yourUser=# \c newDatabaseName
You are now connected to database "newDatabaseName" as user "yourUser".
| PostgreSQL | 10,335,561 | 612 |
I have recently installed PostgreSQL on Ubuntu with the EnterpriseDB package. I can connect to the database locally, but I can't configure it because I can't find config files. I searched through entire hard drive and found only samples like pg_hba.conf.sample
Where are the PostgreSQL .conf files?
| Or ask your database:
$ psql -U postgres -c 'SHOW config_file'
or, if logged in as the ubuntu user:
$ sudo -u postgres psql -c 'SHOW config_file'
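The same trick works for the other configuration files, for example:
$ sudo -u postgres psql -c 'SHOW hba_file'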
| PostgreSQL | 3,602,450 | 609 |
I have installed PostgreSQL 8.4, the Postgres client and Pgadmin 3. Authentication failed for user "postgres" for both the console client and Pgadmin. I typed user "postgres" and password "postgres", because it worked before. But now authentication fails. I did it a couple of times before without this problem. What should I do? And what is happening?
psql -U postgres -h localhost -W
Password for user postgres:
psql: FATAL: password authentication failed for user "postgres"
FATAL: password authentication failed for user "postgres"
| If I remember correctly the user postgres has no DB password set on Ubuntu by default. That means, that you can login to that account only by using the postgres OS user account.
Assuming, that you have root access on the box you can do:
sudo -u postgres psql
If that fails with a database "postgres" does not exists error, then you are most likely not on a Ubuntu or Debian server :-) In this case simply add template1 to the command:
sudo -u postgres psql template1
If any of those commands fail with an error psql: FATAL: password authentication failed for user "postgres" then check the file /etc/postgresql/8.4/main/pg_hba.conf: There must be a line like this as the first non-comment line:
local all postgres ident
For newer versions of PostgreSQL ident actually might be peer. That's OK also.
Inside the psql shell you can give the DB user postgres a password:
ALTER USER postgres PASSWORD 'newPassword';
You can leave the psql shell by typing Ctrl+D or with the command \q.
Now you should be able to give pgAdmin a valid password for the DB superuser and it will be happy too. :-)
| PostgreSQL | 7,695,962 | 596 |
Is there any way to write case-insensitive queries in PostgreSQL, E.g. I want that following 3 queries return same result.
SELECT id FROM groups where name='administrator'
SELECT id FROM groups where name='ADMINISTRATOR'
SELECT id FROM groups where name='Administrator'
| Use LOWER function to convert the strings to lower case before comparing.
Try this:
SELECT id
FROM groups
WHERE LOWER(name)=LOWER('Administrator')
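Two other options worth knowing, both standard PostgreSQL features, shown here against the same table: the case-insensitive pattern operator ILIKE, and the citext extension, which gives you a permanently case-insensitive column type:
SELECT id FROM groups WHERE name ILIKE 'administrator';
-- or, once per database, install citext and declare the column as citext:
CREATE EXTENSION citext;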
| PostgreSQL | 7,005,302 | 596 |
I'm looking for a way to find the row count for all my tables in Postgres. I know I can do this one table at a time with:
SELECT count(*) FROM table_name;
but I'd like to see the row count for all the tables and then order by that to get an idea of how big all my tables are.
| There's three ways to get this sort of count, each with their own tradeoffs.
If you want a true count, you have to execute the SELECT statement like the one you used against each table. This is because PostgreSQL keeps row visibility information in the row itself, not anywhere else, so any accurate count can only be relative to some transaction. You're getting a count of what that transaction sees at the point in time when it executes. You could automate this to run against every table in the database, but you probably don't need that level of accuracy or want to wait that long.
WITH tbl AS
(SELECT table_schema,
TABLE_NAME
FROM information_schema.tables
WHERE TABLE_NAME not like 'pg_%'
AND table_schema in ('public'))
SELECT table_schema,
TABLE_NAME,
(xpath('/row/c/text()', query_to_xml(format('select count(*) as c from %I.%I', table_schema, TABLE_NAME), FALSE, TRUE, '')))[1]::text::int AS rows_n
FROM tbl
ORDER BY rows_n DESC;
The second approach notes that the statistics collector tracks roughly how many rows are "live" (not deleted or obsoleted by later updates) at any time. This value can be off by a bit under heavy activity, but is generally a good estimate:
SELECT schemaname,relname,n_live_tup
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC;
That can also show you how many rows are dead, which is itself an interesting number to monitor.
The third way is to note that the system ANALYZE command, which is executed by the autovacuum process regularly as of PostgreSQL 8.3 to update table statistics, also computes a row estimate. You can grab that one like this:
SELECT
nspname AS schemaname,relname,reltuples
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE
nspname NOT IN ('pg_catalog', 'information_schema') AND
relkind='r'
ORDER BY reltuples DESC;
Which of these queries is better to use is hard to say. Normally I make that decision based on whether there's more useful information I also want to use inside of pg_class or inside of pg_stat_user_tables. For basic counting purposes just to see how big things are in general, either should be accurate enough.
| PostgreSQL | 2,596,670 | 593 |
I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status.
So I'm thinking:
UPDATE sales
SET status = 'ACTIVE'
WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id)
FROM sales
HAVING count = 1)
But my brain hurts going any farther than that.
| SELECT DISTINCT a,b,c FROM t
is roughly equivalent to:
SELECT a,b,c FROM t GROUP BY a,b,c
It's a good idea to get used to the GROUP BY syntax, as it's more powerful.
For your query, I'd do it like this:
UPDATE sales
SET status='ACTIVE'
WHERE id IN
(
SELECT id
FROM sales S
INNER JOIN
(
SELECT saleprice, saledate
FROM sales
GROUP BY saleprice, saledate
HAVING COUNT(*) = 1
) T
ON S.saleprice=T.saleprice AND s.saledate=T.saledate
)
| PostgreSQL | 54,418 | 591 |
How can I kill all my postgresql connections?
I'm trying a rake db:drop but I get:
ERROR: database "database_name" is being accessed by other users
DETAIL: There are 1 other session(s) using the database.
I've tried shutting down the processes I see from a ps -ef | grep postgres but this doesn't work either:
kill: kill 2358 failed: operation not permitted
You can use pg_terminate_backend() to kill a connection. You have to be superuser to use this function. This works the same on all operating systems.
SELECT
pg_terminate_backend(pid)
FROM
pg_stat_activity
WHERE
-- don't kill my own connection!
pid <> pg_backend_pid()
-- don't kill the connections to other databases
AND datname = 'database_name'
;
Before executing this query, you have to REVOKE the CONNECT privileges to avoid new connections:
REVOKE CONNECT ON DATABASE dbname FROM PUBLIC, username;
If you're using Postgres 8.4-9.1 use procpid instead of pid
SELECT
pg_terminate_backend(procpid)
FROM
pg_stat_activity
WHERE
-- don't kill my own connection!
procpid <> pg_backend_pid()
-- don't kill the connections to other databases
AND datname = 'database_name'
;
| PostgreSQL | 5,108,876 | 569 |
Locally, I use pgadmin3. On the remote server, however, I have no such luxury.
I've already created the backup of the database and copied it over, but is there a way to restore a backup from the command line? I only see things related to GUI or to pg_dumps.
| There are two tools to look at, depending on how you created the dump file.
Your first source of reference should be the man page pg_dump as that is what creates the dump itself. It says:
Dumps can be output in script or
archive file formats. Script dumps are
plain-text files containing the SQL
commands required to reconstruct
the database to the state it was
in at the time it was saved. To
restore from such a script, feed it to
psql(1). Script files can be used
to reconstruct the database even
on other machines and other
architectures; with some modifications
even on other SQL database products.
The alternative archive file formats
must be used with pg_restore(1) to
rebuild the database. They allow
pg_restore to be selective about what
is restored, or even to reorder the
items prior to being restored. The
archive file formats are designed to
be portable across architectures.
So it depends on the way it was dumped out. If using Linux/Unix, you can probably figure it out using the excellent file(1) command: if it mentions ASCII text and/or SQL, it should be restored with psql; otherwise you should probably use pg_restore.
Restoring is pretty easy:
psql -U username -d dbname < filename.sql
-- For Postgres versions 9.0 or earlier
psql -U username -d dbname -1 -f filename.sql
or
pg_restore -U username -d dbname -1 filename.dump
Check out their respective manpages - there's quite a few options that affect how the restore works. You may have to clean out your "live" databases or recreate them from template0 (as pointed out in a comment) before restoring, depending on how the dumps were generated.
| PostgreSQL | 2,732,474 | 560 |
I have this table in a postgres 8.4 database:
CREATE TABLE public.dummy
(
address_id SERIAL,
addr1 character(40),
addr2 character(40),
city character(25),
state character(2),
zip character(5),
customer boolean,
supplier boolean,
partner boolean
)
WITH (
OIDS=FALSE
);
I want to update the table. Initially I tested my query using this insert statement:
insert into address customer,supplier,partner
SELECT
case when cust.addr1 is not null then TRUE else FALSE end customer,
case when suppl.addr1 is not null then TRUE else FALSE end supplier,
case when partn.addr1 is not null then TRUE else FALSE end partner
from (
SELECT *
from address) pa
left outer join cust_original cust
on (pa.addr1=cust.addr1 and pa.addr2=cust.addr2 and pa.city=cust.city
and pa.state=cust.state and substring(cust.zip,1,5) = pa.zip )
left outer join supp_original suppl
on (pa.addr1=suppl.addr1 and pa.addr2=suppl.addr2 and pa.city=suppl.city
and pa.state=suppl.state and pa.zip = substring(suppl.zip,1,5))
left outer join partner_original partn
on (pa.addr1=partn.addr1 and pa.addr2=partn.addr2 and pa.city=partn.city
and pa.state=partn.state and pa.zip = substring(partn.zip,1,5) )
where pa.address_id = address_id
How can I convert this to an update statement, i.e., update existing rows using values returned from a select statement?
| Postgres allows:
UPDATE dummy
SET customer=subquery.customer,
address=subquery.address,
partn=subquery.partn
FROM (SELECT address_id, customer, address, partn
FROM /* big hairy SQL */ ...) AS subquery
WHERE dummy.address_id=subquery.address_id;
This syntax is not standard SQL, but it is much more convenient for this type of query than standard SQL. I believe Oracle (at least) accepts something similar.
| PostgreSQL | 6,256,610 | 555 |
When I do a \dt in psql I only get a listing of tables in the current schema (public by default).
How can I get a list of all tables in all schemas or a particular schema?
| In all schemas:
=> \dt *.*
In a particular schema:
=> \dt public.*
It is possible to use regular expressions, with some restrictions:
\dt (public|s).(s|t)
List of relations
Schema | Name | Type | Owner
--------+------+-------+-------
public | s | table | cpn
public | t | table | cpn
s | t | table | cpn
Advanced users can use regular-expression notations such as character classes, for example [0-9] to match any digit. All regular expression special characters work as specified in Section 9.7.3, except for . which is taken as a separator as mentioned above, * which is translated to the regular-expression notation .*, ? which is translated to ., and $ which is matched literally. You can emulate these pattern characters at need by writing ? for ., (R+|) for R*, or (R|) for R?. $ is not needed as a regular-expression character since the pattern must match the whole name, unlike the usual interpretation of regular expressions (in other words, $ is automatically appended to your pattern). Write * at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name patterns (i.e., the argument of \do).
| PostgreSQL | 15,644,152 | 549 |
How do I modify the owner of all tables in a PostgreSQL database?
I tried ALTER TABLE * OWNER TO new_owner but it doesn't support the asterisk syntax.
| You can use the REASSIGN OWNED command.
Synopsis:
REASSIGN OWNED BY old_role [, ...] TO new_role
This changes all objects owned by old_role to the new role. You don't have to think about what kind of objects that the user has, they will all be changed. Note that it only applies to objects inside a single database. It does not alter the owner of the database itself either.
It is available back to at least 8.2. Their online documentation only goes that far back.
| PostgreSQL | 1,348,126 | 522 |
I want a random selection of rows in PostgreSQL, I tried this:
select * from table where random() < 0.01;
But some other recommend this:
select * from table order by random() limit 1000;
I have a very large table with 500 Million rows, I want it to be fast.
Which approach is better? What are the differences? What is the best way to select random rows?
| Fast ways
Given your specifications (plus additional info in the comments):
You have a numeric ID column (integer numbers) with only a few (or moderately few) gaps.
Obviously no or few write operations.
Your ID column has to be indexed! A primary key serves nicely.
The query below does not need a sequential scan of the big table, only an index scan.
First, get estimates for the main query:
SELECT count(*) AS ct -- optional
, min(id) AS min_id
, max(id) AS max_id
, max(id) - min(id) AS id_span
FROM big;
The only possibly expensive part is the count(*) (for huge tables). Given the above specifications, you don't need it. An estimate to replace the full count will do just fine, available at almost no cost:
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint AS ct
FROM pg_class
WHERE oid = 'big'::regclass; -- your table name
Detailed explanation:
Fast way to discover the row count of a table in PostgreSQL
As long as ct isn't much smaller than id_span, the query will outperform other approaches.
WITH params AS (
SELECT 1 AS min_id -- minimum id <= current min id
, 5100000 AS id_span -- rounded up. (max_id - min_id + buffer)
)
SELECT *
FROM (
SELECT p.min_id + trunc(random() * p.id_span)::integer AS id
FROM params p
, generate_series(1, 1100) g -- 1000 + buffer
GROUP BY 1 -- trim duplicates
) r
JOIN big USING (id)
LIMIT 1000; -- trim surplus
Generate random numbers in the id space. You have "few gaps", so add 10 % (enough to easily cover the blanks) to the number of rows to retrieve.
Each id can be picked multiple times by chance (though very unlikely with a big ID space), so group the generated numbers (or use DISTINCT).
Join the ids to the big table. This should be very fast with the index in place.
Finally trim surplus ids that have not been eaten by dupes and gaps. Every row has a completely equal chance to be picked.
Short version
You can simplify this query. The CTE in the query above is just for educational purposes:
SELECT *
FROM (
SELECT DISTINCT 1 + trunc(random() * 5100000)::integer AS id
FROM generate_series(1, 1100) g
) r
JOIN big USING (id)
LIMIT 1000;
Refine with rCTE
Especially if you are not so sure about gaps and estimates.
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM generate_series(1, 1030) -- 1000 + few percent - adapt to your needs
LIMIT 1030 -- hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
UNION -- eliminate dupe
SELECT b.*
FROM (
SELECT 1 + trunc(random() * 5100000)::int AS id
FROM random_pick r -- plus 3 percent - adapt to your needs
LIMIT 999 -- less than 1000, hint for query planner
) r
JOIN big b USING (id) -- eliminate miss
)
TABLE random_pick
LIMIT 1000; -- actual limit
We can work with a smaller surplus in the base query. If there are too many gaps so we don't find enough rows in the first iteration, the rCTE continues to iterate with the recursive term. We still need relatively few gaps in the ID space or the recursion may run dry before the limit is reached - or we have to start with a large enough buffer which defies the purpose of optimizing performance.
Duplicates are eliminated by the UNION in the rCTE.
The outer LIMIT makes the CTE stop as soon as we have enough rows.
This query is carefully drafted to use the available index, generate actually random rows and not stop until we fulfill the limit (unless the recursion runs dry). There are a number of pitfalls here if you are going to rewrite it.
Wrap into function
For repeated use with the same table with varying parameters:
CREATE OR REPLACE FUNCTION f_random_sample(_limit int = 1000, _gaps real = 1.03)
RETURNS SETOF big
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
_surplus int := _limit * _gaps;
_estimate int := ( -- get current estimate from system
SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM pg_class
WHERE oid = 'big'::regclass);
BEGIN
RETURN QUERY
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM generate_series(1, _surplus) g
LIMIT _surplus -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * _estimate)::int
FROM random_pick -- just to make it recursive
LIMIT _limit -- hint for query planner
) r (id)
JOIN big USING (id) -- eliminate misses
)
TABLE random_pick
LIMIT _limit;
END
$func$;
Call:
SELECT * FROM f_random_sample();
SELECT * FROM f_random_sample(500, 1.05);
Generic function
We can make this generic to work for any table with a unique integer column (typically the PK): Pass the table as polymorphic type and (optionally) the name of the PK column and use EXECUTE:
CREATE OR REPLACE FUNCTION f_random_sample(_tbl_type anyelement
, _id text = 'id'
, _limit int = 1000
, _gaps real = 1.03)
RETURNS SETOF anyelement
LANGUAGE plpgsql VOLATILE ROWS 1000 AS
$func$
DECLARE
-- safe syntax with schema & quotes where needed
_tbl text := pg_typeof(_tbl_type)::text;
_estimate int := (SELECT (reltuples / relpages
* (pg_relation_size(oid) / 8192))::bigint
FROM pg_class -- get current estimate from system
WHERE oid = _tbl::regclass);
BEGIN
RETURN QUERY EXECUTE format(
$$
WITH RECURSIVE random_pick AS (
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM generate_series(1, $2) g
LIMIT $2 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
UNION -- eliminate dupes
SELECT *
FROM (
SELECT 1 + trunc(random() * $1)::int
FROM random_pick -- just to make it recursive
LIMIT $3 -- hint for query planner
) r(%2$I)
JOIN %1$s USING (%2$I) -- eliminate misses
)
TABLE random_pick
LIMIT $3;
$$
, _tbl, _id
)
USING _estimate -- $1
, (_limit * _gaps)::int -- $2 ("surplus")
, _limit -- $3
;
END
$func$;
Call with defaults (important!):
SELECT * FROM f_random_sample(null::big); --!
Or more specifically:
SELECT * FROM f_random_sample(null::"my_TABLE", 'oDD ID', 666, 1.15);
About the same performance as the static version.
Related:
Refactor a PL/pgSQL function to return the output of various SELECT queries - chapter "Various complete table types"
Return SETOF rows from PostgreSQL function
Format specifier for integer variables in format() for EXECUTE?
INSERT with dynamic table name in trigger function
This is safe against SQL injection. See:
Table name as a PostgreSQL function parameter
SQL injection in Postgres functions vs prepared queries
Possible alternative
If your requirements allow identical sets for repeated calls (and we are talking about repeated calls) consider a MATERIALIZED VIEW. Execute above query once and write the result to a table. Users get a quasi random selection at lightening speed. Refresh your random pick at intervals or events of your choosing.
Postgres 9.5 introduces TABLESAMPLE SYSTEM (n)
Where n is a percentage. The manual:
The BERNOULLI and SYSTEM sampling methods each accept a single
argument which is the fraction of the table to sample, expressed as a
percentage between 0 and 100. This argument can be any real-valued expression.
Bold emphasis mine. It's very fast, but the result is not exactly random. The manual again:
The SYSTEM method is significantly faster than the BERNOULLI method
when small sampling percentages are specified, but it may return a
less-random sample of the table as a result of clustering effects.
The number of rows returned can vary wildly. For our example, to get roughly 1000 rows:
SELECT * FROM big TABLESAMPLE SYSTEM ((1000 * 100) / 5100000.0);
Related:
Fast way to discover the row count of a table in PostgreSQL
Or install the additional module tsm_system_rows to get the number of requested rows exactly (if there are enough) and allow for the more convenient syntax:
SELECT * FROM big TABLESAMPLE SYSTEM_ROWS(1000);
See Evan's answer for details.
But that's still not exactly random.
| PostgreSQL | 8,674,718 | 510 |
I have a timezone aware timestamptz field in PostgreSQL. When I pull data from the table, I then want to subtract the time right now so I can get it's age.
The problem I'm having is that both datetime.datetime.now() and datetime.datetime.utcnow() seem to return timezone unaware timestamps, which results in me getting this error:
TypeError: can't subtract offset-naive and offset-aware datetimes
Is there a way to avoid this (preferably without a third-party module being used).
EDIT: Thanks for the suggestions, however trying to adjust the timezone seems to give me errors.. so I'm just going to use timezone unaware timestamps in PG and always insert using:
NOW() AT TIME ZONE 'UTC'
That way all my timestamps are UTC by default (even though it's more annoying to do this).
| Have you tried to remove the timezone awareness?
From http://pytz.sourceforge.net/
naive = dt.replace(tzinfo=None)
You may have to add a time zone conversion as well.
edit: Please be aware the age of this answer. An answer involving ADDing the timezone info instead of removing it in python 3 is below. https://stackoverflow.com/a/25662061/93380
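A minimal Python 3 sketch of that 'add the timezone' approach (the sample value below merely stands in for the timestamptz you fetched from PostgreSQL):
from datetime import datetime, timezone

# pretend this aware datetime came from a timestamptz column
row_ts = datetime(2024, 1, 1, tzinfo=timezone.utc)

# both operands are offset-aware, so the subtraction works
age = datetime.now(timezone.utc) - row_ts
print(age)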
| PostgreSQL | 796,008 | 503 |
I'm trying to export a PostgreSQL table with headings to a CSV file via command line, however I get it to export to CSV file, but without headings.
My code looks as follows:
COPY products_273 to '/tmp/products_199.csv' delimiters',';
| COPY products_273 TO '/tmp/products_199.csv' WITH (FORMAT CSV, HEADER);
as described in the manual.
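If you don't have write access to the server's filesystem, psql's client-side \copy meta-command takes the same options and writes the file on the client instead:
\copy products_273 TO '/tmp/products_199.csv' WITH (FORMAT CSV, HEADER)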
| PostgreSQL | 1,120,109 | 499 |
How to enable logging of all SQL executed by PostgreSQL 8.3?
Edited (more info)
I changed these lines :
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_statement = 'all'
And restart PostgreSQL service... but no log was created...
I'm using Windows Server 2003.
Any ideas?
| In your data/postgresql.conf file, change the log_statement setting to 'all'.
Edit
Looking at your new information, I'd say there are a few other settings to verify (a combined config fragment follows this list):
make sure you have turned on the log_destination variable
make sure you turn on the logging_collector
also make sure that the log_directory directory already exists inside of the data directory, and that the postgres user can write to it.
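Putting those together, a minimal postgresql.conf fragment (reusing the directory and filename pattern from the question) could look like this:
logging_collector = on
log_destination = 'stderr'
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_statement = 'all'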
| PostgreSQL | 722,221 | 497 |
I'm interested in learning some (ideally) database agnostic ways of selecting the nth row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases:
SQL Server
MySQL
PostgreSQL
SQLite
Oracle
I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing other's more agnostic approaches:
WITH Ordered AS (
SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate
FROM Orders)
SELECT *
FROM Ordered
WHERE RowNumber = 1000000
Credit for the above SQL: Firoz Ansari's Weblog
Update: See Troels Arvin's answer regarding the SQL standard. Troels, have you got any links we can cite?
| There are ways of doing this in optional parts of the standard, but a lot of databases support their own way of doing it.
A really good site that talks about this and other things is http://troels.arvin.dk/db/rdbms/#select-limit.
Basically, PostgreSQL and MySQL supports the non-standard:
SELECT...
LIMIT y OFFSET x
Oracle, DB2 and MSSQL supports the standard windowing functions:
SELECT * FROM (
SELECT
ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber,
columns
FROM tablename
) AS foo
WHERE rownumber <= n
(which I just copied from the site linked above since I never use those DBs)
Update: As of PostgreSQL 8.4 the standard windowing functions are supported, so expect the second example to work for PostgreSQL as well.
Update: SQLite added window functions support in version 3.25.0 on 2018-09-15 so both forms also work in SQLite.
| PostgreSQL | 16,568 | 495 |
I'm trying to create a cronjob to back up my database every night before something catastrophic happens. It looks like this command should meet my needs:
0 3 * * * pg_dump dbname | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz
Except after running that, it expects me to type in a password. I can't do that if I run it from cron. How can I pass one in automatically?
| Create a .pgpass file in the home directory of the account that pg_dump will run as.
The format is:
hostname:port:database:username:password
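For example, an entry matching the cron job above could look like this (host, user and password are placeholders):
localhost:5432:dbname:backup_user:s3cretpassword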
Then, set the file's mode to 0600. Otherwise, it will be ignored.
chmod 600 ~/.pgpass
See the Postgresql documentation libpq-pgpass for more details.
| PostgreSQL | 2,893,954 | 492 |
Does Postgres automatically put indexes on foreign keys and primary keys? How can I tell? Is there a command that will return all indexes on a table?
| PostgreSQL automatically creates indexes on primary keys and unique constraints, but not on the referencing side of foreign key relationships.
When Pg creates an implicit index it will emit a NOTICE-level message that you can see in psql and/or the system logs, so you can see when it happens. Automatically created indexes are visible in \d output for a table, too.
The documentation on unique indexes says:
PostgreSQL automatically creates an index for each unique constraint and primary key constraint to enforce uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns.
and the documentation on constraints says:
Since a DELETE of a row from the referenced table or an UPDATE of a
referenced column will require a scan of the referencing table for
rows matching the old value, it is often a good idea to index the
referencing columns. Because this is not always needed, and there are
many choices available on how to index, declaration of a foreign key
constraint does not automatically create an index on the referencing
columns.
Therefore you have to create indexes on foreign-keys yourself if you want them.
Note that if you use primary foreign keys, e.g. 2 FKs as a PK in an M-to-N table, you will have an index on the PK and probably don't need to create any extra indexes.
While it's usually a good idea to create an index on (or including) your referencing-side foreign key columns, it isn't required. Each index you add slows DML operations down slightly, so you pay a performance cost on every INSERT, UPDATE or DELETE. If the index is rarely used it may not be worth having.
| PostgreSQL | 970,562 | 489 |
I am looking for a way to concatenate the strings of a field within a group by query. So for example, I have a table:
 ID | COMPANY_ID | EMPLOYEE
----+------------+----------
  1 |          1 | Anna
  2 |          1 | Bill
  3 |          2 | Carol
  4 |          2 | Dave
and I wanted to group by company_id to get something like:
 COMPANY_ID | EMPLOYEE
------------+-------------
          1 | Anna, Bill
          2 | Carol, Dave
There is a built-in function in MySQL to do this: group_concat.
| PostgreSQL 9.0 or later:
Modern Postgres (since 2010) has the string_agg(expression, delimiter) function which will do exactly what the asker was looking for:
SELECT company_id, string_agg(employee, ', ')
FROM mytable
GROUP BY company_id;
Postgres 9 also added the ability to specify an ORDER BY clause in any aggregate expression; otherwise you have to order all your results or deal with an undefined order. So you can now write:
SELECT company_id, string_agg(employee, ', ' ORDER BY employee)
FROM mytable
GROUP BY company_id;
PostgreSQL 8.4.x:
Please note that support for Postgres 8.4 ended in 2014, so you should probably upgrade for more important reasons than string aggregation.
PostgreSQL 8.4 (in 2009) introduced the aggregate function array_agg(expression) which collects the values in an array. Then array_to_string() can be used to give the desired result:
SELECT company_id, array_to_string(array_agg(employee), ', ')
FROM mytable
GROUP BY company_id;
PostgreSQL 8.3.x and older:
When this question was originally posed, there was no built-in aggregate function to concatenate strings. The simplest custom implementation (suggested by Vajda Gabo in this mailing list post, among many others) is to use the built-in textcat function:
CREATE AGGREGATE textcat_all(
basetype = text,
sfunc = textcat,
stype = text,
initcond = ''
);
Here is the CREATE AGGREGATE documentation.
This simply glues all the strings together, with no separator. In order to get a ", " inserted in between them without having it at the end, you might want to make your own concatenation function and substitute it for the "textcat" above. Here is one I put together and tested on 8.3.12:
CREATE FUNCTION commacat(acc text, instr text) RETURNS text AS $$
BEGIN
IF acc IS NULL OR acc = '' THEN
RETURN instr;
ELSE
RETURN acc || ', ' || instr;
END IF;
END;
$$ LANGUAGE plpgsql;
This version will output a comma even if the value in the row is null or empty, so you get output like this:
a, b, c, , e, , g
If you would prefer to remove extra commas to output this:
a, b, c, e, g
Then add an ELSIF check to the function like this:
CREATE FUNCTION commacat_ignore_nulls(acc text, instr text) RETURNS text AS $$
BEGIN
IF acc IS NULL OR acc = '' THEN
RETURN instr;
ELSIF instr IS NULL OR instr = '' THEN
RETURN acc;
ELSE
RETURN acc || ', ' || instr;
END IF;
END;
$$ LANGUAGE plpgsql;
| PostgreSQL | 43,870 | 487 |
How do I declare a variable for use in a PostgreSQL 8.3 query?
In MS SQL Server I can do this:
DECLARE @myvar INT;
SET @myvar = 5;
SELECT * FROM somewhere WHERE something = @myvar;
How do I do the same in PostgreSQL? According to the documentation variables are declared simply as "name type;", but this gives me a syntax error:
myvar INTEGER;
Could someone give me an example of the correct syntax?
I accomplished the same goal by using a WITH clause; it's nowhere near as elegant, but it can do the same thing. Though for this example it's really overkill. I also don't particularly recommend this.
WITH myconstants (var1, var2) as (
values (5, 'foo')
)
SELECT *
FROM somewhere, myconstants
WHERE something = var1
OR something_else = var2;
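If you run the query through psql specifically, a different (client-side) option is psql's own variables, which are interpolated before the statement is sent to the server:
\set myvar 5
SELECT * FROM somewhere WHERE something = :myvar;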
| PostgreSQL | 1,490,942 | 482 |
I am using PostgreSQL 8.4 on Ubuntu. I have a table with columns c1 through cN. The columns are wide enough that selecting all columns causes a row of query results to wrap multiple times. Consequently, the output is hard to read.
When the query results constitute just a few rows, it would be convenient if I could view the query results such that each column of each row is on a separate line, e.g.
c1: <value of row 1's c1>
c2: <value of row 1's c1>
...
cN: <value of row 1's cN>
---- some kind of delimiter ----
c1: <value of row 2's c1>
etc.
I am running these queries on a server where I would prefer not to install any additional software. Is there a psql setting that will let me do something like that?
| I just needed to spend more time staring at the documentation. This command:
\x on
will do exactly what I wanted. Here is some sample output:
select * from dda where u_id=24 and dda_is_deleted='f';
-[ RECORD 1 ]------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
dda_id | 1121
u_id | 24
ab_id | 10304
dda_type | CHECKING
dda_status | PENDING_VERIFICATION
dda_is_deleted | f
dda_verify_op_id | 44938
version | 2
created | 2012-03-06 21:37:50.585845
modified | 2012-03-06 21:37:50.593425
c_id |
dda_nickname |
dda_account_name |
cu_id | 1
abd_id |
See also:
man psql => \x
man psql => --expanded
man psql => \pset => expanded
| PostgreSQL | 9,604,723 | 481 |
In PostgreSQL, how do I get the last id inserted into a table?
In MS SQL there is SCOPE_IDENTITY().
Please do not advise me to use something like this:
select max(id) from table
| ( tl;dr : goto option 3: INSERT with RETURNING )
Recall that in postgresql there is no "id" concept for tables, just sequences (which are typically but not necessarily used as default values for surrogate primary keys, with the SERIAL pseudo-type).
If you are interested in getting the id of a newly inserted row, there are several ways:
Option 1: CURRVAL(<sequence name>);.
For example:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval('persons_id_seq');
The name of the sequence must be known (it's really arbitrary); in this example we assume that the table persons has an id column created with the SERIAL pseudo-type. To avoid relying on this and to feel cleaner, you can instead use pg_get_serial_sequence:
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John');
SELECT currval(pg_get_serial_sequence('persons','id'));
Caveat: currval() only works after an INSERT (which has executed nextval() ), in the same session.
Option 2: LASTVAL();
This is similar to the previous, only that you don't need to specify the sequence name: it looks for the most recent modified sequence (always inside your session, same caveat as above).
Both CURRVAL and LASTVAL are totally concurrent safe. The behaviour of sequence in PG is designed so that different session will not interfere, so there is no risk of race conditions (if another session inserts another row between my INSERT and my SELECT, I still get my correct value).
However, they do have a subtle potential problem. If the database has some TRIGGER (or RULE) that, on insertion into the persons table, makes some extra insertions in other tables... then LASTVAL will probably give us the wrong value. The problem can even happen with CURRVAL, if the extra insertions are done into the same persons table (this is much less usual, but the risk still exists).
Option 3: INSERT with RETURNING
INSERT INTO persons (lastname,firstname) VALUES ('Smith', 'John') RETURNING id;
This is the most clean, efficient and safe way to get the id. It doesn't have any of the risks of the previous.
Drawbacks? Almost none: you might need to modify the way you call your INSERT statement (in the worst case, perhaps your API or DB layer does not expect an INSERT to return a value); it's not standard SQL (who cares); it's available since Postgresql 8.2 (Dec 2006...)
Conclusion: If you can, go for option 3. Elsewhere, prefer 1.
Note: all these methods are useless if you intend to get the last inserted id globally (not necessarily by your session). For this, you must resort to SELECT max(id) FROM table (of course, this will not read uncommitted inserts from other transactions).
Conversely, you should never use SELECT max(id) FROM table instead one of the 3 options above, to get the id just generated by your INSERT statement, because (apart from performance) this is not concurrent safe: between your INSERT and your SELECT another session might have inserted another record.
| PostgreSQL | 2,944,297 | 474 |
I use postgres from homebrew in my OS X, but when I reboot my system, sometimes the postgres doesn't start after the reboot, and so I manually tried to start it with postgres -D /usr/local/var/postgres, but then the error occurred with the following message: FATAL: could not open directory "pg_tblspc": No such file or directory.
The last time it occurred, I couldn't get it to the original state, so I decided to uninstall the whole postgres system and then re-installed it and created users, tables, datasets, etc... It was so disgusting, but it frequently occurs on my system, say once in a few months.
So why does it lose the pg_tblspc file frequently? And is there anything that I can do to avoid the loss of the file?
I haven't upgraded my homebrew and postgres to the latest version (i.e. I've been using the same version). Also, all the things that I did on the postgres database is delete the table and populate the new data every day. I haven't changed the user, password, etc...
EDIT (mbannert):
I felt the need to add this, since the thread is the top hit on Google for this issue and for many the symptom is different. Homebrewers will likely encounter this error message:
No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
So, if you just experienced this after the Yosemite upgrade, you're now covered by reading this thread.
| Solved... in part.
Apparently, Installing the latest versions of OS X (e.g. Yosemite or El Capitan) removes some directories in /usr/local/var/postgres.
To fix this you simply recreate the missing directories:
mkdir -p /usr/local/var/postgres/pg_commit_ts
mkdir -p /usr/local/var/postgres/pg_dynshmem
mkdir -p /usr/local/var/postgres/pg_logical/mappings
mkdir -p /usr/local/var/postgres/pg_logical/snapshots
mkdir -p /usr/local/var/postgres/pg_replslot
mkdir -p /usr/local/var/postgres/pg_serial
mkdir -p /usr/local/var/postgres/pg_snapshots
mkdir -p /usr/local/var/postgres/pg_stat
mkdir -p /usr/local/var/postgres/pg_stat_tmp
mkdir -p /usr/local/var/postgres/pg_tblspc
mkdir -p /usr/local/var/postgres/pg_twophase
Or, more concisely (thanks to Nate):
mkdir -p /usr/local/var/postgres/{{pg_commit_ts,pg_dynshmem,pg_replslot,pg_serial,pg_snapshots,pg_stat,pg_stat_tmp,pg_tblspc,pg_twophase},pg_logical/{mappings,snapshots}}
Rerunning pg_ctl start -D /usr/local/var/postgres now starts the server normally and, at least for me, without any data loss.
UPDATE
On my system, some of those directories are empty even when Postgres is running. Maybe, as part of some "cleaning" operation, Yosemite removes any empty directories? In any case, I went ahead and created a '.keep' file in each directory to prevent future deletion.
touch /usr/local/var/postgres/{{pg_commit_ts,pg_dynshmem,pg_replslot,pg_serial,pg_snapshots,pg_stat,pg_stat_tmp,pg_tblspc,pg_twophase},pg_logical/{mappings,snapshots}}/.keep
Note: Creating the .keep file in those directories will create some noise in your logfile, but doesn't appear to negatively affect anything else.
| PostgreSQL | 25,970,132 | 470 |
How can I accomplish this using Postgres? I've tried the code below but it doesn't work:
ALTER TABLE mytable ALTER COLUMN mycolumn BIGINT NULL;
| From the fine manual:
ALTER TABLE mytable ALTER COLUMN mycolumn DROP NOT NULL;
There's no need to specify the type when you're just changing the nullability.
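The opposite direction, for reference, is:
ALTER TABLE mytable ALTER COLUMN mycolumn SET NOT NULL;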
| PostgreSQL | 4,812,933 | 468 |
I have installed PostgreSQL and pgAdminIII on my Ubuntu Karmic box.
I am able to use pgAdminIII successfully (i.e. connect/log on), however when I try to login to the server using the same username/pwd on the command line (using psql), I get the error:
psql: FATAL: Ident authentication failed for user "postgres"
Does anyone now how to resolve this issue?
| The following steps work for a fresh install of postgres 9.1 on Ubuntu 12.04. (Worked for postgres 9.3.9 on Ubuntu 14.04 too.)
By default, postgres creates a user named 'postgres'. We log in as her, and give her a password.
$ sudo -u postgres psql
\password
Enter password: ...
...
Logout of psql by typing \q or ctrl+d. Then we connect as 'postgres'. The -h localhost part is important: it tells the psql client that we wish to connect using a TCP connection (which is configured to use password authentication), and not by a PEER connection (which does not care about the password).
$ psql -U postgres -h localhost
| PostgreSQL | 2,942,485 | 460 |
I tried to run simple SQL command:
SELECT * FROM site_adzone;
And I got this error:
ERROR: permission denied for relation site_adzone
What could be the problem here?
I tried also to do select for other tables and got same issue. I also tried to do this:
GRANT ALL PRIVILEGES ON DATABASE jerry TO tom;
But I got this response from console:
WARNING: no privileges were granted for "jerry"
Does anyone have any idea what can be wrong?
| GRANT on the database is not what you need. Grant on the tables directly.
Granting privileges on the database is mostly used to grant or revoke connect privileges. This allows you to specify who may do stuff in the database if they have sufficient other permissions.
You want instead:
GRANT ALL PRIVILEGES ON TABLE site_adzone TO tom;
This will take care of this issue.
| PostgreSQL | 15,520,361 | 454 |
I'm using psql's \dt to list all tables in a database and I need to save the results.
What is the syntax to export the results of a psql command to a file?
| From psql's help (\?):
\o [FILE] send all query results to file or |pipe
The sequence of commands will look like this:
[wist@scifres ~]$ psql db
Welcome to psql 8.3.6, the PostgreSQL interactive terminal
db=>\o out.txt
db=>\dt
Then any db operation output will be written to out.txt.
Enter '\o' to revert the output back to console.
db=>\o
| PostgreSQL | 5,331,320 | 442 |
The official page does not mention such a case. But many users need only psql, without a local database (mine is on AWS). Brew does not have psql as a standalone formula.
| You could also use homebrew to install libpq.
brew install libpq
This would give you psql, pg_dump and a whole bunch of other client utilities without installing Postgres.
Unfortunately since it provides some of the same utilities as are included in the full postgresql package, brew installs it "keg-only" which means it isn't in the PATH by default. Homebrew will spit out some information on how to add it to your PATH after installation. In my case it was this:
echo 'export PATH="/usr/local/opt/libpq/bin:$PATH"' >> ~/.zshrc
Alternatively, you can create symlinks for the utilities you need. E.g.:
ln -s /usr/local/Cellar/libpq/10.3/bin/psql /usr/local/bin/psql
Note: use your installed version instead of 10.3.
Alternatively, you could instruct homebrew to "link all of its binaries to the PATH anyway"
brew link --force libpq
but then you'd be unable to install the postgresql package later.
| PostgreSQL | 44,654,216 | 441 |
Is there a command in PostgreSQL to select active connections to a given database?
psql states that I can't drop one of my databases because there are active connections to it, so I would like to see what the connections are (and from which machines)
| Oh, I just found that command on PostgreSQL forum:
SELECT * FROM pg_stat_activity;
To limit to just one database:
SELECT * FROM pg_stat_activity WHERE datname = 'dbname';
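To see just who is connected and from where, selecting a few columns is usually enough (a sketch; these column names exist in PostgreSQL 9.2 and later, older versions use procpid and current_query instead):
SELECT pid, usename, client_addr, state, query
FROM pg_stat_activity
WHERE datname = 'dbname';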
| PostgreSQL | 27,435,839 | 438 |
How do I list all extensions that are already installed in a database or schema from psql?
See also
Finding a list of available extensions that PostgreSQL ships with
| In psql that would be
\dx
See the manual of psql for details.
Doing it in plain SQL it would be a select on pg_extension:
SELECT *
FROM pg_extension;
| PostgreSQL | 21,799,956 | 425 |
I would like to force the auto increment field of a table to some value, I tried with this:
ALTER TABLE product AUTO_INCREMENT = 1453
AND
ALTER SEQUENCE product RESTART WITH 1453;
ERROR: relation "your_sequence_name" does not exist
I have a table product with Id and name field
| If you created the table product with an id column, then the sequence is not simply called product, but rather product_id_seq (that is, ${table}_${column}_seq).
This is the ALTER SEQUENCE command you need:
ALTER SEQUENCE product_id_seq RESTART WITH 1453
You can see the sequences in your database using the \ds command in psql. If you do \d product and look at the default constraint for your column, the nextval(...) call will specify the sequence name too.
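If you'd rather not guess the name, pg_get_serial_sequence() looks it up for you, and setval() sets the next value directly (a sketch; with false as the third argument, the next nextval() call returns exactly 1453):
SELECT pg_get_serial_sequence('product', 'id');
SELECT setval(pg_get_serial_sequence('product', 'id'), 1453, false);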
| PostgreSQL | 5,342,440 | 424 |
I have a table column that uses an enum type. I wish to update that enum type to have an additional possible value. I don't want to delete any existing values, just add the new value. What is the simplest way to do this?
| PostgreSQL 9.1 introduces ability to ALTER Enum types:
ALTER TYPE enum_type ADD VALUE 'new_value'; -- appends to list
ALTER TYPE enum_type ADD VALUE 'new_value' BEFORE 'old_value';
ALTER TYPE enum_type ADD VALUE 'new_value' AFTER 'old_value';
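If the statement may run more than once (for example in a migration script), later PostgreSQL versions also accept a guarded form (a sketch; check the ALTER TYPE documentation for your version):
ALTER TYPE enum_type ADD VALUE IF NOT EXISTS 'new_value';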
| PostgreSQL | 1,771,543 | 423 |
I am trying to copy an entire table from one database to another in Postgres. Any suggestions?
| Extract the table and pipe it directly to the target database:
pg_dump -t table_to_copy source_db | psql target_db
Note: If the other database already has the table set up, you should use the -a flag to import data only, else you may see weird errors like "Out of memory":
pg_dump -a -t table_to_copy source_db | psql target_db
| PostgreSQL | 3,195,125 | 422 |
I have a table with this layout:
CREATE TABLE Favorites (
FavoriteId uuid NOT NULL PRIMARY KEY,
UserId uuid NOT NULL,
RecipeId uuid NOT NULL,
MenuId uuid
);
I want to create a unique constraint similar to this:
ALTER TABLE Favorites
ADD CONSTRAINT Favorites_UniqueFavorite UNIQUE(UserId, MenuId, RecipeId);
However, this will allow multiple rows with the same (UserId, RecipeId), if MenuId IS NULL. I want to allow NULL in MenuId to store a favorite that has no associated menu, but I only want at most one of these rows per user/recipe pair.
The ideas I have so far are:
Use some hard-coded UUID (such as all zeros) instead of null.
However, MenuId has a FK constraint on each user's menus, so I'd then have to create a special "null" menu for every user which is a hassle.
Check for existence of a null entry using a trigger instead.
I think this is a hassle and I like avoiding triggers wherever possible. Plus, I don't trust them to guarantee my data is never in a bad state.
Just forget about it and check for the previous existence of a null entry in the middle-ware or in a insert function, and don't have this constraint.
I'm using Postgres 9.0. Is there any method I'm overlooking?
| Postgres 15 or newer
Postgres 15 adds the clause NULLS NOT DISTINCT. The release notes:
Allow unique constraints and indexes to treat NULL values as not distinct (Peter Eisentraut)
Previously NULL values were always indexed as distinct values, but
this can now be changed by creating constraints and indexes using
UNIQUE NULLS NOT DISTINCT.
With this clause null is treated like just another value, and a UNIQUE constraint does not allow more than one row with the same null value. The task is simple now:
ALTER TABLE favorites
ADD CONSTRAINT favo_uni UNIQUE NULLS NOT DISTINCT (user_id, menu_id, recipe_id);
There are examples in the manual chapter "Unique Constraints".
The clause switches behavior for all keys of the same index. You can't treat null as equal for one key, but not for another.
NULLS DISTINCT remains the default (in line with standard SQL) and does not have to be spelled out.
The same clause works for a UNIQUE index, too:
CREATE UNIQUE INDEX favo_uni_idx
ON favorites (user_id, menu_id, recipe_id) NULLS NOT DISTINCT;
Note the position of the new clause after the key fields.
Postgres 14 or older
Create two partial indexes:
CREATE UNIQUE INDEX favo_3col_uni_idx ON favorites (user_id, menu_id, recipe_id)
WHERE menu_id IS NOT NULL;
CREATE UNIQUE INDEX favo_2col_uni_idx ON favorites (user_id, recipe_id)
WHERE menu_id IS NULL;
This way, there can only be one combination of (user_id, recipe_id) where menu_id IS NULL, effectively implementing the desired constraint.
Possible drawbacks:
You cannot have a foreign key referencing (user_id, menu_id, recipe_id). (It seems unlikely you'd want a FK reference three columns wide - use the PK column instead!)
You cannot base CLUSTER on a partial index.
Queries without a matching WHERE condition cannot use the partial index.
If you need a complete index, you can alternatively drop the WHERE condition from favo_3col_uni_idx and your requirements are still enforced.
The index, now comprising the whole table, overlaps with the other one and gets bigger. Depending on typical queries and the percentage of null values, this may or may not be useful. In extreme situations it may even help to maintain all three indexes (the two partial ones and a total on top).
This is a good solution for a single nullable column, maybe for two. But it gets out of hands quickly for more as you need a separate partial index for every combination of nullable columns, so the number grows binomially. For multiple nullable columns, see instead:
Why doesn't my UNIQUE constraint trigger?
Aside: I advise not to use mixed case identifiers in PostgreSQL.
| PostgreSQL | 8,289,100 | 419 |
I'm trying to drop my database and create a new one through the command line. I log in using psql -U username and then do a \connect template1, followed by a DROP DATABASE databasename;.
I get the error
database databasename is being accessed by other users
I shut down Apache and tried this again, but I'm still getting this error. Am I doing something wrong?
| You can run the dropdb command from the command line:
dropdb 'database name'
Note that you have to be a superuser or the database owner to be able to drop it.
You can also check the pg_stat_activity view to see what type of activity is currently taking place against your database, including all idle processes.
SELECT * FROM pg_stat_activity WHERE datname='database name';
Note that from PostgreSQL v13 on, you can disconnect the users automatically with
DROP DATABASE dbname WITH (FORCE);
or
dropdb -f dbname
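On versions before 13, you can terminate the blocking sessions yourself before dropping (a sketch; the pg_backend_pid() check keeps your own connection alive):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'database name'
  AND pid <> pg_backend_pid();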
| PostgreSQL | 7,073,773 | 414 |
How can I call psql so that it doesn't prompt for a password?
This is what I have:
psql -Umyuser < myscript.sql
However, I couldn't find the argument that passes the password, and so psql always prompts for it.
| You may wish to read a summary of the ways to authenticate to PostgreSQL.
To answer your question, there are several ways provide a password for password-based authentication:
Via the password prompt. Example:
psql -h uta.biocommons.org -U foo
Password for user foo:
In a pgpass file. See libpq-pgpass. Format:
<host>:<port>:<database>:<user>:<password>
With the PGPASSWORD environment variable. See libpq-envars. Example:
export PGPASSWORD=yourpass
psql ...
# Or in one line for this invocation only:
PGPASSWORD=yourpass psql ...
In the connection string The password and other options may be specified in the connection string/URI. See app-psql. Example:
psql postgresql://username:password@dbmaster:5433/mydb?sslmode=require
| PostgreSQL | 6,523,019 | 413 |
My docker compose file has three containers, web, nginx, and postgres. Postgres looks like this:
postgres:
container_name: postgres
restart: always
image: postgres:latest
volumes:
- ./database:/var/lib/postgresql
ports:
- 5432:5432
My goal is to mount a volume which corresponds to a local folder called ./database inside the postgres container as /var/lib/postgresql. When I start these containers and insert data into postgres, I verify that /var/lib/postgresql/data/base/ is full of the data I'm adding (in the postgres container), but on my local system, ./database only gets a data folder in it, i.e. ./database/data is created, but it's empty. Why?
Notes:
This suggests my above file should work.
This person is using docker services which is interesting
UPDATE 1
Per Nick's suggestion, I did a docker inspect and found:
"Mounts": [
{
"Source": "/Users/alex/Documents/MyApp/database",
"Destination": "/var/lib/postgresql",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Name": "e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35",
"Source": "/var/lib/docker/volumes/e5bf22471215db058127109053e72e0a423d97b05a2afb4824b411322efd2c35/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
Which makes it seem like the data is being stolen by another volume I didn't code myself. Not sure why that is. Is the postgres image creating that volume for me? If so, is there some way to use that volume instead of the volume I'm mounting when I restart? Otherwise, is there a good way of disabling that other volume and using my own, ./database?
| Strangely enough, the solution ended up being to change
volumes:
- ./postgres-data:/var/lib/postgresql
to
volumes:
- ./postgres-data:/var/lib/postgresql/data
| PostgreSQL | 41,637,505 | 412 |
I want to extract just the date part from a timestamp in PostgreSQL.
I need it to be a postgresql DATE type so I can insert it into another table that expects a DATE value.
For example, if I have 2011/05/26 09:00:00, I want 2011/05/26
I tried casting, but I only get 2011:
timestamp:date
cast(timestamp as date)
I tried to_char() with to_date():
SELECT to_date(to_char(timestamp, 'YYYY/MM/DD'), 'YYYY/MM/DD')
FROM val3 WHERE id=1;
I tried to make it a function:
CREATE OR REPLACE FUNCTION testing() RETURNS void AS '
DECLARE i_date DATE;
BEGIN
SELECT to_date(to_char(val1, "YYYY/MM/DD"),"YYYY/MM/DD")
INTO i_date FROM exampTable WHERE id=1;
INSERT INTO foo(testd) VALUES (i);
END
What is the best way to extract date (yyyy/mm/dd) from a timestamp in PostgreSQL?
| You can cast your timestamp to a date by suffixing it with ::date. Here, in psql, is a timestamp:
# select '2010-01-01 12:00:00'::timestamp;
timestamp
---------------------
2010-01-01 12:00:00
Now we'll cast it to a date:
wconrad=# select '2010-01-01 12:00:00'::timestamp::date;
date
------------
2010-01-01
On the other hand you can use date_trunc function. The difference between them is that the latter returns the same data type like timestamptz keeping your time zone intact (if you need it).
=> select date_trunc('day', now());
date_trunc
------------------------
2015-12-15 00:00:00+02
(1 row)
| PostgreSQL | 6,133,107 | 404 |
I have created a table in postgreSQL. I want to look at the SQL statement used to create the table but cannot figure it out.
How do I get the create table SQL statement for an existing table in Postgres via commandline or SQL statement?
| pg_dump -t 'schema-name.table-name' --schema-only database-name
More info - in the manual.
| PostgreSQL | 2,593,803 | 399 |
A very frequently asked question here is how to do an upsert, which is what MySQL calls INSERT ... ON DUPLICATE UPDATE and the standard supports as part of the MERGE operation.
Given that PostgreSQL doesn't support it directly (before pg 9.5), how do you do this? Consider the following:
CREATE TABLE testtable (
id integer PRIMARY KEY,
somedata text NOT NULL
);
INSERT INTO testtable (id, somedata) VALUES
(1, 'fred'),
(2, 'bob');
Now imagine that you want to "upsert" the tuples (2, 'Joe'), (3, 'Alan'), so the new table contents would be:
(1, 'fred'),
(2, 'Joe'), -- Changed value of existing tuple
(3, 'Alan') -- Added new tuple
That's what people are talking about when discussing an upsert. Crucially, any approach must be safe in the presence of multiple transactions working on the same table - either by using explicit locking, or otherwise defending against the resulting race conditions.
This topic is discussed extensively at Insert, on duplicate update in PostgreSQL?, but that's about alternatives to the MySQL syntax, and it's grown a fair bit of unrelated detail over time. I'm working on definitive answers.
These techniques are also useful for "insert if not exists, otherwise do nothing", i.e. "insert ... on duplicate key ignore".
| 9.5 and newer:
PostgreSQL 9.5 and newer support INSERT ... ON CONFLICT (key) DO UPDATE (and ON CONFLICT (key) DO NOTHING), i.e. upsert.
Comparison with ON DUPLICATE KEY UPDATE.
Quick explanation.
For usage see the manual - specifically the conflict_action clause in the syntax diagram, and the explanatory text.
Unlike the solutions for 9.4 and older that are given below, this feature works with multiple conflicting rows and it doesn't require exclusive locking or a retry loop.
The commit adding the feature is here and the discussion around its development is here.
If you're on 9.5 and don't need to be backward-compatible you can stop reading now.
9.4 and older:
PostgreSQL doesn't have any built-in UPSERT (or MERGE) facility, and doing it efficiently in the face of concurrent use is very difficult.
This article discusses the problem in useful detail.
In general you must choose between two options:
Individual insert/update operations in a retry loop; or
Locking the table and doing batch merge
Individual row retry loop
Using individual row upserts in a retry loop is the reasonable option if you want many connections concurrently trying to perform inserts.
The PostgreSQL documentation contains a useful procedure that'll let you do this in a loop inside the database. It guards against lost updates and insert races, unlike most naive solutions. It will only work in READ COMMITTED mode and is only safe if it's the only thing you do in the transaction, though. The function won't work correctly if triggers or secondary unique keys cause unique violations.
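Adapted to the example table in the question, that documented retry loop looks roughly like this (a sketch following the pattern in the PostgreSQL documentation; the function name is made up and all the caveats above and below still apply):
CREATE FUNCTION upsert_testtable(_id int, _somedata text) RETURNS void AS
$$
BEGIN
    LOOP
        -- first try to update the row
        UPDATE testtable SET somedata = _somedata WHERE id = _id;
        IF FOUND THEN
            RETURN;
        END IF;
        -- not there, so try to insert;
        -- if another transaction inserts the same id concurrently,
        -- we get a unique_violation and loop back to the UPDATE
        BEGIN
            INSERT INTO testtable (id, somedata) VALUES (_id, _somedata);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;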
This strategy is very inefficient. Whenever practical you should queue up work and do a bulk upsert as described below instead.
Many attempted solutions to this problem fail to consider rollbacks, so they result in incomplete updates. Two transactions race with each other; one of them successfully INSERTs; the other gets a duplicate key error and does an UPDATE instead. The UPDATE blocks waiting for the INSERT to rollback or commit. When it rolls back, the UPDATE condition re-check matches zero rows, so even though the UPDATE commits it hasn't actually done the upsert you expected. You have to check the result row counts and re-try where necessary.
Some attempted solutions also fail to consider SELECT races. If you try the obvious and simple:
-- THIS IS WRONG. DO NOT COPY IT. It's an EXAMPLE.
BEGIN;
UPDATE testtable
SET somedata = 'blah'
WHERE id = 2;
-- Remember, this is WRONG. Do NOT COPY IT.
INSERT INTO testtable (id, somedata)
SELECT 2, 'blah'
WHERE NOT EXISTS (SELECT 1 FROM testtable WHERE testtable.id = 2);
COMMIT;
then when two run at once there are several failure modes. One is the already discussed issue with an update re-check. Another is where both UPDATE at the same time, matching zero rows and continuing. Then they both do the EXISTS test, which happens before the INSERT. Both get zero rows, so both do the INSERT. One fails with a duplicate key error.
This is why you need a re-try loop. You might think that you can prevent duplicate key errors or lost updates with clever SQL, but you can't. You need to check row counts or handle duplicate key errors (depending on the chosen approach) and re-try.
Please don't roll your own solution for this. Like with message queuing, it's probably wrong.
Bulk upsert with lock
Sometimes you want to do a bulk upsert, where you have a new data set that you want to merge into an older existing data set. This is vastly more efficient than individual row upserts and should be preferred whenever practical.
In this case, you typically follow the following process:
CREATE a TEMPORARY table
COPY or bulk-insert the new data into the temp table
LOCK the target table IN EXCLUSIVE MODE. This permits other transactions to SELECT, but not make any changes to the table.
Do an UPDATE ... FROM of existing records using the values in the temp table;
Do an INSERT of rows that don't already exist in the target table;
COMMIT, releasing the lock.
For example, for the example given in the question, using multi-valued INSERT to populate the temp table:
BEGIN;
CREATE TEMPORARY TABLE newvals(id integer, somedata text);
INSERT INTO newvals(id, somedata) VALUES (2, 'Joe'), (3, 'Alan');
LOCK TABLE testtable IN EXCLUSIVE MODE;
UPDATE testtable
SET somedata = newvals.somedata
FROM newvals
WHERE newvals.id = testtable.id;
INSERT INTO testtable
SELECT newvals.id, newvals.somedata
FROM newvals
LEFT OUTER JOIN testtable ON (testtable.id = newvals.id)
WHERE testtable.id IS NULL;
COMMIT;
Related reading
UPSERT wiki page
UPSERTisms in Postgres
Insert, on duplicate update in PostgreSQL?
http://petereisentraut.blogspot.com/2010/05/merge-syntax.html
Upsert with a transaction
Is SELECT or INSERT in a function prone to race conditions?
SQL MERGE on the PostgreSQL wiki
Most idiomatic way to implement UPSERT in Postgresql nowadays
What about MERGE?
SQL-standard MERGE actually has poorly defined concurrency semantics and is not suitable for upserting without locking a table first.
It's a really useful OLAP statement for data merging, but it's not actually a useful solution for concurrency-safe upsert. There's lots of advice to people using other DBMSes to use MERGE for upserts, but it's actually wrong.
Other DBs:
INSERT ... ON DUPLICATE KEY UPDATE in MySQL
MERGE from MS SQL Server (but see above about MERGE problems)
MERGE from Oracle (but see above about MERGE problems)
| PostgreSQL | 17,267,417 | 396 |
With postgresql 9.3 I can SELECT specific fields of a JSON data type, but how do you modify them using UPDATE? I can't find any examples of this in the postgresql documentation, or anywhere online. I have tried the obvious:
postgres=# create table test (data json);
CREATE TABLE
postgres=# insert into test (data) values ('{"a":1,"b":2}');
INSERT 0 1
postgres=# select data->'a' from test where data->>'b' = '2';
?column?
----------
1
(1 row)
postgres=# update test set data->'a' = to_json(5) where data->>'b' = '2';
ERROR: syntax error at or near "->"
LINE 1: update test set data->'a' = to_json(5) where data->>'b' = '2...
| Update: With PostgreSQL 9.5, there are some jsonb manipulation functionality within PostgreSQL itself (but none for json; casts are required to manipulate json values).
Merging 2 (or more) JSON objects (or concatenating arrays):
SELECT jsonb '{"a":1}' || jsonb '{"b":2}', -- will yield jsonb '{"a":1,"b":2}'
jsonb '["a",1]' || jsonb '["b",2]' -- will yield jsonb '["a",1,"b",2]'
So, setting a simple key can be done using:
SELECT jsonb '{"a":1}' || jsonb_build_object('<key>', '<value>')
Where <key> should be string, and <value> can be whatever type to_jsonb() accepts.
For setting a value deep in a JSON hierarchy, the jsonb_set() function can be used:
SELECT jsonb_set('{"a":[null,{"b":[]}]}', '{a,1,b,0}', jsonb '{"c":3}')
-- will yield jsonb '{"a":[null,{"b":[{"c":3}]}]}'
Full parameter list of jsonb_set():
jsonb_set(target jsonb,
path text[],
new_value jsonb,
create_missing boolean default true)
path can contain JSON array indexes too & negative integers that appear there count from the end of JSON arrays. However, a non-existing, but positive JSON array index will append the element to the end of the array:
SELECT jsonb_set('{"a":[null,{"b":[1,2]}]}', '{a,1,b,1000}', jsonb '3', true)
-- will yield jsonb '{"a":[null,{"b":[1,2,3]}]}'
For inserting into JSON array (while preserving all of the original values), the jsonb_insert() function can be used (in 9.6+; this function only, in this section):
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b,0}', jsonb '2')
-- will yield jsonb '{"a":[null,{"b":[2,1]}]}', and
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b,0}', jsonb '2', true)
-- will yield jsonb '{"a":[null,{"b":[1,2]}]}'
Full parameter list of jsonb_insert():
jsonb_insert(target jsonb,
path text[],
new_value jsonb,
insert_after boolean default false)
Again, negative integers that appear in path count from the end of JSON arrays.
So, f.ex. appending to an end of a JSON array can be done with:
SELECT jsonb_insert('{"a":[null,{"b":[1,2]}]}', '{a,1,b,-1}', jsonb '3', true)
-- will yield jsonb '{"a":[null,{"b":[1,2,3]}]}', and
However, this function works slightly differently (from jsonb_set()) when the path in target is a JSON object's key. In that case, it will only add a new key-value pair to the JSON object when the key is not already used. If it is used, it will raise an error:
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,c}', jsonb '[2]')
-- will yield jsonb '{"a":[null,{"b":[1],"c":[2]}]}', but
SELECT jsonb_insert('{"a":[null,{"b":[1]}]}', '{a,1,b}', jsonb '[2]')
-- will raise SQLSTATE 22023 (invalid_parameter_value): cannot replace existing key
Deleting a key (or an index) from a JSON object (or, from an array) can be done with the - operator:
SELECT jsonb '{"a":1,"b":2}' - 'a', -- will yield jsonb '{"b":2}'
jsonb '["a",1,"b",2]' - 1 -- will yield jsonb '["a","b",2]'
Deleting, from deep in a JSON hierarchy can be done with the #- operator:
SELECT '{"a":[null,{"b":[3.14]}]}' #- '{a,1,b,0}'
-- will yield jsonb '{"a":[null,{"b":[]}]}'
For 9.4, you can use a modified version of the original answer (below), but instead of aggregating a JSON string, you can aggregate into a json object directly with json_object_agg().
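For example, setting key "a" to 5 in the sample object with that approach might look like this (a sketch of the 9.4 variant):
SELECT json_object_agg(key, value)
FROM (SELECT key, value
      FROM json_each('{"a":1,"b":2}'::json)
      WHERE key <> 'a'
      UNION ALL
      SELECT 'a', to_json(5)) AS fields;
-- result: a json object with b = 2 and a = 5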
Original answer: It is possible (without plpython or plv8) in pure SQL too (but needs 9.3+, will not work with 9.2)
CREATE OR REPLACE FUNCTION "json_object_set_key"(
"json" json,
"key_to_set" TEXT,
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')::json
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> "key_to_set"
UNION ALL
SELECT "key_to_set", to_json("value_to_set")) AS "fields"
$function$;
SQLFiddle
Edit:
A version, which sets multiple keys & values:
CREATE OR REPLACE FUNCTION "json_object_set_keys"(
"json" json,
"keys_to_set" TEXT[],
"values_to_set" anyarray
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')::json
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> ALL ("keys_to_set")
UNION ALL
SELECT DISTINCT ON ("keys_to_set"["index"])
"keys_to_set"["index"],
CASE
WHEN "values_to_set"["index"] IS NULL THEN 'null'::json
ELSE to_json("values_to_set"["index"])
END
FROM generate_subscripts("keys_to_set", 1) AS "keys"("index")
JOIN generate_subscripts("values_to_set", 1) AS "values"("index")
USING ("index")) AS "fields"
$function$;
Edit 2: as @ErwinBrandstetter noted these functions above works like a so-called UPSERT (updates a field if it exists, inserts if it does not exist). Here is a variant, which only UPDATE:
CREATE OR REPLACE FUNCTION "json_object_update_key"(
"json" json,
"key_to_set" TEXT,
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT CASE
WHEN ("json" -> "key_to_set") IS NULL THEN "json"
ELSE (SELECT concat('{', string_agg(to_json("key") || ':' || "value", ','), '}')
FROM (SELECT *
FROM json_each("json")
WHERE "key" <> "key_to_set"
UNION ALL
SELECT "key_to_set", to_json("value_to_set")) AS "fields")::json
END
$function$;
Edit 3: Here is recursive variant, which can set (UPSERT) a leaf value (and uses the first function from this answer), located at a key-path (where keys can only refer to inner objects, inner arrays not supported):
CREATE OR REPLACE FUNCTION "json_object_set_path"(
"json" json,
"key_path" TEXT[],
"value_to_set" anyelement
)
RETURNS json
LANGUAGE sql
IMMUTABLE
STRICT
AS $function$
SELECT CASE COALESCE(array_length("key_path", 1), 0)
WHEN 0 THEN to_json("value_to_set")
WHEN 1 THEN "json_object_set_key"("json", "key_path"[l], "value_to_set")
ELSE "json_object_set_key"(
"json",
"key_path"[l],
"json_object_set_path"(
COALESCE(NULLIF(("json" -> "key_path"[l])::text, 'null'), '{}')::json,
"key_path"[l+1:u],
"value_to_set"
)
)
END
FROM array_lower("key_path", 1) l,
array_upper("key_path", 1) u
$function$;
Updated: Added a function for replacing an existing JSON field's key with another given key. It can come in handy for updating data types in migrations or other scenarios where the data structure needs amending.
CREATE OR REPLACE FUNCTION json_object_replace_key(
json_value json,
existing_key text,
desired_key text)
RETURNS json AS
$BODY$
SELECT COALESCE(
(
SELECT ('{' || string_agg(to_json(key) || ':' || value, ',') || '}')
FROM (
SELECT *
FROM json_each(json_value)
WHERE key <> existing_key
UNION ALL
SELECT desired_key, json_value -> existing_key
) AS "fields"
-- WHERE value IS NOT NULL (Actually not required as the string_agg with value's being null will "discard" that entry)
),
'{}'
)::json
$BODY$
LANGUAGE sql IMMUTABLE STRICT
COST 100;
Update: functions are compacted now.
| PostgreSQL | 18,209,625 | 390 |
I would like to set up a table in PostgreSQL such that two columns together must be unique. There can be multiple values of either value, so long as there are not two that share both.
For instance:
CREATE TABLE someTable (
id int PRIMARY KEY AUTOINCREMENT,
col1 int NOT NULL,
col2 int NOT NULL
)
So, col1 and col2 can repeat, but not at the same time. So, this would be allowed (Not including the id)
1 1
1 2
2 1
2 2
but not this:
1 1
1 2
1 1 -- would reject this insert for violating constraints
| CREATE TABLE someTable (
id serial PRIMARY KEY,
col1 int NOT NULL,
col2 int NOT NULL,
UNIQUE (col1, col2)
)
AUTOINCREMENT is not PostgreSQL syntax. You want an integer primary key generated always as identity (or serial if you use PostgreSQL 9.x or older; serial was soft-deprecated in PostgreSQL 10); a variant using identity is sketched below.
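A minimal sketch of that identity variant (PostgreSQL 10 or newer), keeping the same unique constraint:
CREATE TABLE someTable (
  id   int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  col1 int NOT NULL,
  col2 int NOT NULL,
  UNIQUE (col1, col2)
)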
If col1 and col2 together are unique and can't be null, then they make a good primary key:
CREATE TABLE someTable (
col1 int NOT NULL,
col2 int NOT NULL,
PRIMARY KEY (col1, col2)
)
| PostgreSQL | 14,221,775 | 390 |
I'm trying to backup/restore a PostgreSQL database as is explained on the Docker website, but the data is not restored.
The volumes used by the database image are:
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
and the CMD is:
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
I create the DB container with this command:
docker run -it --name "$DB_CONTAINER_NAME" -d "$DB_IMAGE_NAME"
Then I connect another container to insert some data manually:
docker run -it --rm --link "$DB_CONTAINER_NAME":db "$DB_IMAGE_NAME" sh -c 'exec bash'
psql -d test -h $DB_PORT_5432_TCP_ADDR
# insert some data in the db
<CTRL-D>
<CTRL-D>
The tar archive is then created:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /etc/postgresql /var/log/postgresql /var/lib/postgresql
Now I remove the container used for the db and create another one, with the same name, and try to restore the data inserted before:
$ sudo docker run --volumes-from "$DB_CONTAINER_NAME" --rm -v $(pwd):/backup ubuntu tar xvf /backup/backup.tar
But the tables are empty, why is the data not properly restored ?
| Backup your databases
docker exec -t your-db-container pg_dumpall -c -U postgres > dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql
Creates filename like dump_2023-12-25_09_15_26.sql
If you want a smaller file size, use gzip:
docker exec -t your-db-container pg_dumpall -c -U postgres | gzip > dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql.gz
If you want even smaller file sizes use brotli or bzip2:
docker exec -t your-db-container pg_dumpall -c -U postgres | brotli --best > dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql.br
or
docker exec -t your-db-container pg_dumpall -c -U postgres | bzip2 --best > dump_`date +%Y-%m-%d"_"%H_%M_%S`.sql.bz2
Restore your databases
cat your_dump.sql | docker exec -i your-db-container psql -U postgres
| PostgreSQL | 24,718,706 | 385 |
I have been trying to set up a container for a development postgres instance by creating a custom user & database. I am using the official postgres docker image. In the documentation it instructs you to insert a bash script inside of the /docker-entrypoint-initdb.d/ folder to set up the database with any custom parameters.
My bash script: make_db.sh
su postgres -c "createuser -w -d -r -s docker"
su postgres -c "createdb -O docker docker"
Dockerfile
FROM library/postgres
RUN ["mkdir", "/docker-entrypoint-initdb.d"]
ADD make_db.sh /docker-entrypoint-initdb.d/
The error I get from the docker logs -f db (db is my container name) is:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
It seems that the commands inside of the /docker-entrypoint-initdb.d/ folder are being executed before postgres is started. My question is, how do I set up a user/database programmatically using the official postgres container? Is there any way to do this with a script?
| EDIT - since Jul 23, 2015
The official postgres docker image will run .sql scripts found in the /docker-entrypoint-initdb.d/ folder.
So all you need is to create the following sql script:
init.sql
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
and add it in your Dockerfile:
Dockerfile
FROM library/postgres
COPY init.sql /docker-entrypoint-initdb.d/
But since July 8th, 2015, if all you need is to create a user and database, it is easier to just make use to the POSTGRES_USER, POSTGRES_PASSWORD and POSTGRES_DB environment variables:
docker run -e POSTGRES_USER=docker -e POSTGRES_PASSWORD=docker -e POSTGRES_DB=docker library/postgres
or with a Dockerfile:
FROM library/postgres
ENV POSTGRES_USER docker
ENV POSTGRES_PASSWORD docker
ENV POSTGRES_DB docker
for images older than Jul 23, 2015
The documentation of the postgres Docker image says that
[...] it will source any *.sh script found in that directory [/docker-entrypoint-initdb.d] to do further initialization before starting the service
What's important here is "before starting the service". This means your script make_db.sh will be executed before the postgres service would be started, hence the error message "could not connect to database postgres".
After that there is another useful piece of information:
If you need to execute SQL commands as part of your initialization, the use of Postgres single user mode is highly recommended.
Agreed, this can be a bit mysterious at first glance. What it says is that your initialization script should start the postgres service in single-user mode before doing its actions. So you could change your make_db.sh script as follows and it should get you closer to what you want:
NOTE, this has changed recently in the following commit. This will work with the latest change:
export PGUSER=postgres
psql <<- EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
Previously, the use of --single mode was required:
gosu postgres postgres --single <<- EOSQL
CREATE USER docker;
CREATE DATABASE docker;
GRANT ALL PRIVILEGES ON DATABASE docker TO docker;
EOSQL
| PostgreSQL | 26,598,738 | 382 |
I'm looking to update multiple rows in PostgreSQL in one statement. Is there a way to do something like the following?
UPDATE table
SET
column_a = 1 where column_b = '123',
column_a = 2 where column_b = '345'
| You can also use update ... from syntax and use a mapping table. If you want to update more than one column, it's much more generalizable:
update test as t set
column_a = c.column_a
from (values
('123', 1),
('345', 2)
) as c(column_b, column_a)
where c.column_b = t.column_b;
You can add as many columns as you like:
update test as t set
column_a = c.column_a,
column_c = c.column_c
from (values
('123', 1, '---'),
('345', 2, '+++')
) as c(column_b, column_a, column_c)
where c.column_b = t.column_b;
sql fiddle demo
| PostgreSQL | 18,797,608 | 380 |
I need to programmatically insert tens of millions of records into a Postgres database. Presently, I'm executing thousands of insert statements in a single query.
Is there a better way to do this, some bulk insert statement I do not know about?
| PostgreSQL has a guide on how to best populate a database initially, and they suggest using the COPY command for bulk loading rows. The guide has some other good tips on how to speed up the process, like removing indexes and foreign keys before loading the data (and adding them back afterwards).
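A minimal sketch (the table and column names are placeholders; COPY ... FROM reads a file that lives on the database server, while psql's \copy reads the same format from the client machine):
COPY my_table (col1, col2, col3)
FROM '/path/to/data.csv'
WITH (FORMAT csv, HEADER true);
-- client-side equivalent, run from psql:
\copy my_table (col1, col2, col3) FROM 'data.csv' CSV HEADER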
| PostgreSQL | 758,945 | 368 |
I have a PostgreSQL database table called "user_links" which currently allows the following duplicate fields:
year, user_id, sid, cid
The unique constraint is currently the first field called "id", however I am now looking to add a constraint to make sure the year, user_id, sid and cid are all unique but I cannot apply the constraint because duplicate values already exist which violate this constraint.
Is there a way to find all duplicates?
| The basic idea will be using a nested query with count aggregation:
select * from yourTable ou
where (select count(*) from yourTable inr
where inr.sid = ou.sid) > 1
You can adjust the where clause in the inner query to narrow the search.
There is another good solution for that mentioned in the comments, (but not everyone reads them):
select Column1, Column2, count(*)
from yourTable
group by Column1, Column2
HAVING count(*) > 1
Or shorter:
SELECT (yourTable.*)::text, count(*)
FROM yourTable
GROUP BY yourTable.*
HAVING count(*) > 1
| PostgreSQL | 28,156,795 | 367 |
After restarting my MacBook Pro I am unable to start the database server:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
I checked the logs and the following line appears over and over again:
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 9.2, which is not compatible with this version 9.0.4.
9.0.4 was the version that came preinstalled on the mac, 9.2[.4] is the version I installed via Homebrew.
As mentioned, this used to work before the restart, so it can't actually be a compiling issue. I also re-ran initdb /usr/local/var/postgres -E utf8 and the file still exists.
| If you recently upgraded postgres to latest version, you can run the below command to upgrade your postgres data directory retaining all data:
brew postgresql-upgrade-database
The above command is taken from the output of brew info postgres
Note: this won't work for upgrading from 14 to 15 as of recent testing. For postgres version > 14, you should use pg_upgrade directly to upgrade your clusters.
| PostgreSQL | 17,822,974 | 367 |
I got a lot of errors with the message :
"DatabaseError: current transaction is aborted, commands ignored until end of transaction block"
after changed from python-psycopg to python-psycopg2 as Django project's database engine.
The code remains the same, just don't know where those errors are from.
| This is what postgres does when a query produces an error and you try to run another query without first rolling back the transaction. (You might think of it as a safety feature, to keep you from corrupting your data.)
To fix this, you'll want to figure out where in the code that bad query is being executed. It might be helpful to use the log_statement and log_min_error_statement options in your postgresql server.
| PostgreSQL | 2,979,369 | 364 |
I'm trying to set a sequence to a specific value.
SELECT setval('payments_id_seq'), 21, true;
This gives an error:
ERROR: function setval(unknown) does not exist
Using ALTER SEQUENCE doesn't seem to work either?
ALTER SEQUENCE payments_id_seq LASTVALUE 22;
How can this be done?
Ref: https://www.postgresql.org/docs/current/functions-sequence.html
| The parentheses are misplaced:
SELECT setval('payments_id_seq', 21, true); -- next value will be 22
Otherwise you're calling setval with a single argument, while it requires two or three.
This is the same as SELECT setval('payments_id_seq', 21)
| PostgreSQL | 8,745,051 | 360 |
Every time I set up a new SQL table or add a new varchar column to an existing table, I wonder one thing: what is the best value for the length?
So, let's say you have a column called name of type varchar, and you have to choose the length. I cannot think of a name > 20 chars, but you never know. Instead of using 20, I always round up to the next 2^n number. In this case, I would choose 32 as the length. I do that because, from a computer scientist's point of view, a 2^n number looks more even to me than other numbers, and I'm just assuming that the architecture underneath can handle those numbers slightly better than others.
On the other hand, MSSQL Server, for example, sets the default length value to 50 when you choose to create a varchar column. That makes me think about it. Why 50? Is it just a random number, based on average column length, or what?
It could also be - or probably is - that different SQL server implementations (like MySQL, MSSQL, Postgres, ...) have different best column length values.
| No DBMS I know of has any "optimization" that will make a VARCHAR with a 2^n length perform better than one with a max length that is not a power of 2.
I think early SQL Server versions actually treated a VARCHAR with length 255 differently than one with a higher maximum length. I don't know if this is still the case.
For almost all DBMS, the actual storage that is required is only determined by the number of characters you put into it, not the max length you define. So from a storage point of view (and most probably a performance one as well), it does not make any difference whether you declare a column as VARCHAR(100) or VARCHAR(500).
You should see the max length provided for a VARCHAR column as a kind of constraint (or business rule) rather than a technical/physical thing.
For PostgreSQL the best setup is to use text without a length restriction and a CHECK CONSTRAINT that limits the number of characters to whatever your business requires.
If that requirement changes, altering the check constraint is much faster than altering the table (because the table does not need to be re-written)
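A sketch of that setup (the table, column, and limits are made up; the auto-generated constraint name usually follows the table_column_check pattern, but check \d to be sure):
CREATE TABLE people (
    name text CHECK (char_length(name) <= 50)
);
-- later, when the business rule changes, only the constraint is touched:
ALTER TABLE people DROP CONSTRAINT people_name_check;
ALTER TABLE people ADD CONSTRAINT people_name_check CHECK (char_length(name) <= 100);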
The same can be applied for Oracle and others - in Oracle it would be VARCHAR(4000) instead of text though.
I don't know if there is a physical storage difference between VARCHAR(max) and e.g. VARCHAR(500) in SQL Server. But apparently there is a performance impact when using varchar(max) as compared to varchar(8000).
See this link (posted by Erwin Brandstetter as a comment)
Edit 2013-09-22
Regarding bigown's comment:
In Postgres versions before 9.2 (which was not available when I wrote the initial answer) a change to the column definition did rewrite the whole table, see e.g. here. Since 9.2 this is no longer the case and a quick test confirmed that increasing the column size for a table with 1.2 million rows indeed only took 0.5 seconds.
For Oracle this seems to be true as well, judging by the time it takes to alter a big table's varchar column. But I could not find any reference for that.
For MySQL the manual says "In most cases, ALTER TABLE makes a temporary copy of the original table". And my own tests confirm that: running an ALTER TABLE on a table with 1.2 million rows (the same as in my test with Postgres) to increase the size of a column took 1.5 minutes. In MySQL however you can not use the "workaround" to use a check constraint to limit the number of characters in a column.
For SQL Server I could not find a clear statement on this but the execution time to increase the size of a varchar column (again the 1.2 million rows table from above) indicates that no rewrite takes place.
Edit 2017-01-24
Seems I was (at least partially) wrong about SQL Server. See this answer from Aaron Bertrand that shows that the declared length of a nvarchar or varchar columns makes a huge difference for the performance.
| PostgreSQL | 8,295,131 | 359 |
I have a table and I'd like to pull one row per id with field values concatenated.
In my table, for example, I have this:
TM67 | 4 | 32556
TM67 | 9 | 98200
TM67 | 72 | 22300
TM99 | 2 | 23009
TM99 | 3 | 11200
And I'd like to output:
TM67 | 4,9,72 | 32556,98200,22300
TM99 | 2,3 | 23009,11200
In MySQL I was able to use the aggregate function GROUP_CONCAT, but that doesn't seem to work here... Is there an equivalent for PostgreSQL, or another way to accomplish this?
| Since 9.0 this is even easier:
SELECT id,
string_agg(some_column, ',')
FROM the_table
GROUP BY id
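Applied to the example in the question it might look like this (the column names num and code are assumptions, since the table in the question doesn't name them; the casts are needed because string_agg expects text, and the ORDER BY keeps both lists in the same order):
SELECT id,
       string_agg(num::text, ',' ORDER BY num),
       string_agg(code::text, ',' ORDER BY num)
FROM the_table
GROUP BY id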
| PostgreSQL | 2,560,946 | 357 |
Are timestamp values stored differently in PostgreSQL when the data type is WITH TIME ZONE versus WITHOUT TIME ZONE? Can the differences be illustrated with simple test cases?
| The differences are covered at the PostgreSQL documentation for date/time types. Yes, the treatment of TIME or TIMESTAMP differs between one WITH TIME ZONE or WITHOUT TIME ZONE. It doesn't affect how the values are stored; it affects how they are interpreted.
The effects of time zones on these data types is covered specifically in the docs. The difference arises from what the system can reasonably know about the value:
With a time zone as part of the value, the value can be rendered as a local time in the client.
Without a time zone as part of the value, the obvious default time zone is UTC, so it is rendered for that time zone.
The behaviour differs depending on at least three factors:
The timezone setting in the client.
The data type (i.e. WITH TIME ZONE or WITHOUT TIME ZONE) of the value.
Whether the value is specified with a particular time zone.
Here are examples covering the combinations of those factors:
foo=> SET TIMEZONE TO 'Japan';
SET
foo=> SELECT '2011-01-01 00:00:00'::TIMESTAMP;
timestamp
---------------------
2011-01-01 00:00:00
(1 row)
foo=> SELECT '2011-01-01 00:00:00'::TIMESTAMP WITH TIME ZONE;
timestamptz
------------------------
2011-01-01 00:00:00+09
(1 row)
foo=> SELECT '2011-01-01 00:00:00+03'::TIMESTAMP;
timestamp
---------------------
2011-01-01 00:00:00
(1 row)
foo=> SELECT '2011-01-01 00:00:00+03'::TIMESTAMP WITH TIME ZONE;
timestamptz
------------------------
2011-01-01 06:00:00+09
(1 row)
foo=> SET TIMEZONE TO 'Australia/Melbourne';
SET
foo=> SELECT '2011-01-01 00:00:00'::TIMESTAMP;
timestamp
---------------------
2011-01-01 00:00:00
(1 row)
foo=> SELECT '2011-01-01 00:00:00'::TIMESTAMP WITH TIME ZONE;
timestamptz
------------------------
2011-01-01 00:00:00+11
(1 row)
foo=> SELECT '2011-01-01 00:00:00+03'::TIMESTAMP;
timestamp
---------------------
2011-01-01 00:00:00
(1 row)
foo=> SELECT '2011-01-01 00:00:00+03'::TIMESTAMP WITH TIME ZONE;
timestamptz
------------------------
2011-01-01 08:00:00+11
(1 row)
| PostgreSQL | 5,876,218 | 355 |
I'm trying to run the following PHP script to do a simple database query:
$db_host = "localhost";
$db_name = "showfinder";
$username = "user";
$password = "password";
$dbconn = pg_connect("host=$db_host dbname=$db_name user=$username password=$password")
or die('Could not connect: ' . pg_last_error());
$query = 'SELECT * FROM sf_bands LIMIT 10';
$result = pg_query($query) or die('Query failed: ' . pg_last_error());
This produces the following error:
Query failed: ERROR: relation "sf_bands" does not exist
In all the examples I can find where someone gets an error stating the relation does not exist, it's because they use uppercase letters in their table name. My table name does not have uppercase letters. Is there a way to query my table without including the database name, i.e. showfinder.sf_bands?
| This error means that you're not referencing the table name correctly. One common reason is that the table is defined with a mixed-case spelling, and you're trying to query it with all lower-case.
In other words, the following fails:
CREATE TABLE "SF_Bands" ( ... );
SELECT * FROM sf_bands; -- ERROR!
Use double-quotes to delimit identifiers so you can use the specific mixed-case spelling as the table is defined.
SELECT * FROM "SF_Bands";
Re your comment, you can add a schema to the "search_path" so that when you reference a table name without qualifying its schema, the query will match that table name by checked each schema in order. Just like PATH in the shell or include_path in PHP, etc. You can check your current schema search path:
SHOW search_path
"$user",public
You can change your schema search path:
SET search_path TO showfinder,public;
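SET only affects the current session. To make the change stick for a database or a role, something like this works (a sketch; myuser is a placeholder, and it takes effect on new connections):
ALTER DATABASE showfinder SET search_path TO showfinder, public;
ALTER ROLE myuser SET search_path TO showfinder, public;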
Read more about the search_path here: https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH
| PostgreSQL | 695,289 | 353 |
I have the following UPSERT in PostgreSQL 9.5:
INSERT INTO chats ("user", "contact", "name")
VALUES ($1, $2, $3),
($2, $1, NULL)
ON CONFLICT("user", "contact") DO NOTHING
RETURNING id;
If there are no conflicts it returns something like this:
----------
| id |
----------
1 | 50 |
----------
2 | 51 |
----------
But if there are conflicts it doesn't return any rows:
----------
| id |
----------
I want to return the new id columns if there are no conflicts or return the existing id columns of the conflicting columns.
Can this be done? If so, how?
| The currently accepted answer seems ok for a single conflict target, few conflicts, small tuples and no triggers. It avoids concurrency issue 1 (see below) with brute force. The simple solution has its appeal; the side effects may be less important.
For all other cases, though, do not update identical rows without need. Even if you see no difference on the surface, there are various side effects:
It might fire triggers that should not be fired.
It write-locks "innocent" rows, possibly incurring costs for concurrent transactions.
It might make the row seem new, though it's old (transaction timestamp).
Most importantly, with PostgreSQL's MVCC model UPDATE writes a new row version for every target row, no matter whether the row data changed. This incurs a performance penalty for the UPSERT itself, table bloat, index bloat, performance penalty for subsequent operations on the table, VACUUM cost. A minor effect for few duplicates, but massive for mostly dupes.
Plus, sometimes it is not practical or even possible to use ON CONFLICT DO UPDATE. The manual:
For ON CONFLICT DO UPDATE, a conflict_target must be provided.
A single "conflict target" is not possible if multiple indexes / constraints are involved.
Related solution for multiple, mutually exclusive partial indexes:
UPSERT based on UNIQUE constraint with NULL values
Or a way to deal with multiple unique constraints:
Use multiple conflict_target in ON CONFLICT clause
Back on the topic, you can achieve (almost) the same without empty updates and side effects. Some of the following solutions also work with ON CONFLICT DO NOTHING (no "conflict target"), to catch all possible conflicts that might arise - which may or may not be desirable.
Without concurrent write load
WITH input_rows(usr, contact, name) AS (
VALUES
(text 'foo1', text 'bar1', text 'bob1') -- type casts in first row
, ('foo2', 'bar2', 'bob2')
-- more?
)
, ins AS (
INSERT INTO chats (usr, contact, name)
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING id --, usr, contact -- return more columns?
)
SELECT 'i' AS source -- 'i' for 'inserted'
, id --, usr, contact -- return more columns?
FROM ins
UNION ALL
SELECT 's' AS source -- 's' for 'selected'
, c.id --, usr, contact -- return more columns?
FROM input_rows
JOIN chats c USING (usr, contact); -- columns of unique index
The source column is an optional addition to demonstrate how this works. You may actually need it to tell the difference between both cases (another advantage over empty writes).
The final JOIN chats works because newly inserted rows from an attached data-modifying CTE are not yet visible in the underlying table. (All parts of the same SQL statement see the same snapshots of underlying tables.)
Since the VALUES expression is free-standing (not directly attached to an INSERT) Postgres cannot derive data types from the target columns and you may have to add explicit type casts. The manual:
When VALUES is used in INSERT, the values are all automatically
coerced to the data type of the corresponding destination column. When
it's used in other contexts, it might be necessary to specify the
correct data type. If the entries are all quoted literal constants,
coercing the first is sufficient to determine the assumed type for all.
The query itself (not counting the side effects) may be a bit more expensive for few dupes, due to the overhead of the CTE and the additional SELECT (which should be cheap since the perfect index is there by definition - a unique constraint is implemented with an index).
May be (much) faster for many duplicates. The effective cost of additional writes depends on many factors.
But there are fewer side effects and hidden costs in any case. It's most probably cheaper overall.
Attached sequences are still advanced, since default values are filled in before testing for conflicts.
About CTEs:
Are SELECT type queries the only type that can be nested?
Deduplicate SELECT statements in relational division
With concurrent write load
Assuming default READ COMMITTED transaction isolation. Related:
Concurrent transactions result in race condition with unique constraint on insert
The best strategy to defend against race conditions depends on exact requirements, the number and size of rows in the table and in the UPSERTs, the number of concurrent transactions, the likelihood of conflicts, available resources and other factors ...
Concurrency issue 1
If a concurrent transaction has written to a row which your transaction now tries to UPSERT, your transaction has to wait for the other one to finish.
If the other transaction ends with ROLLBACK (or any error, i.e. automatic ROLLBACK), your transaction can proceed normally. Minor possible side effect: gaps in sequential numbers. But no missing rows.
If the other transaction ends normally (implicit or explicit COMMIT), your INSERT will detect a conflict (the UNIQUE index / constraint is absolute) and DO NOTHING, hence also not return the row. (Also cannot lock the row as demonstrated in concurrency issue 2 below, since it's not visible.) The SELECT sees the same snapshot from the start of the query and also cannot return the yet invisible row.
Any such rows are missing from the result set (even though they exist in the underlying table)!
This may be ok as is. Especially if you are not returning rows like in the example and are satisfied knowing the row is there. If that's not good enough, there are various ways around it.
You can check the row count of the output and repeat the statement if it does not match the row count of the input. May be good enough for the rare case. The point is to start a new query (can be in the same transaction), which will then see the newly committed rows.
Or check for missing result rows within the same query and overwrite those with the brute force trick demonstrated in Alextoni's answer.
WITH input_rows(usr, contact, name) AS ( ... ) -- see above
, ins AS (
INSERT INTO chats AS c (usr, contact, name)
SELECT * FROM input_rows
ON CONFLICT (usr, contact) DO NOTHING
RETURNING id, usr, contact -- we need unique columns for later join
)
, sel AS (
SELECT 'i'::"char" AS source -- 'i' for 'inserted'
, id, usr, contact
FROM ins
UNION ALL
SELECT 's'::"char" AS source -- 's' for 'selected'
, c.id, usr, contact
FROM input_rows
JOIN chats c USING (usr, contact)
)
, ups AS ( -- RARE corner case
INSERT INTO chats AS c (usr, contact, name) -- another UPSERT, not just UPDATE
SELECT i.*
FROM input_rows i
LEFT JOIN sel s USING (usr, contact) -- columns of unique index
WHERE s.usr IS NULL -- missing!
ON CONFLICT (usr, contact) DO UPDATE -- we've asked nicely the 1st time ...
SET name = c.name -- ... this time we overwrite with old value
-- SET name = EXCLUDED.name -- alternatively overwrite with *new* value
RETURNING 'u'::"char" AS source -- 'u' for updated
, id --, usr, contact -- return more columns?
)
SELECT source, id FROM sel
UNION ALL
TABLE ups;
It's like the query above, but we add one more step with the CTE ups, before we return the complete result set. That last CTE will do nothing most of the time. Only if rows go missing from the returned result, we use brute force.
More overhead, yet. The more conflicts with pre-existing rows, the more likely this will outperform the simple approach.
One side effect: the 2nd UPSERT writes rows out of order, so it re-introduces the possibility of deadlocks (see below) if three or more transactions writing to the same rows overlap. If that's a problem, you need a different solution - like repeating the whole statement as mentioned above.
Concurrency issue 2
If concurrent transactions can write to involved columns of affected rows, and you have to make sure the rows you found are still there at a later stage in the same transaction, you can lock existing rows cheaply in the CTE ins (which would otherwise go unlocked) with:
...
ON CONFLICT (usr, contact) DO UPDATE
SET name = name WHERE FALSE -- never executed, but still locks the row
...
And add a locking clause to the SELECT as well, like FOR UPDATE.
This makes competing write operations wait till the end of the transaction, when all locks are released. So be brief.
More details and explanation:
How to include excluded rows in RETURNING from INSERT ... ON CONFLICT
Is SELECT or INSERT in a function prone to race conditions?
Deadlocks?
Defend against deadlocks by inserting rows in consistent order. See:
Deadlock with multi-row INSERTs despite ON CONFLICT DO NOTHING
Data types and casts
Existing table as template for data types ...
Explicit type casts for the first row of data in the free-standing VALUES expression may be inconvenient. There are ways around it. You can use any existing relation (table, view, ...) as row template. The target table is the obvious choice for the use case. Input data is coerced to appropriate types automatically, like in the VALUES clause of an INSERT:
WITH input_rows AS (
(SELECT usr, contact, name FROM chats LIMIT 0) -- only copies column names and types
UNION ALL
VALUES
('foo1', 'bar1', 'bob1') -- no type casts here
, ('foo2', 'bar2', 'bob2')
)
...
This does not work for some data types. See:
Casting NULL type when updating multiple rows
... and names
This also works for all data types.
While inserting into all (leading) columns of the table, you can omit column names. Assuming table chats in the example only consists of the 3 columns used in the UPSERT:
WITH input_rows AS (
SELECT * FROM (
VALUES
((NULL::chats).*) -- copies whole row definition
, ('foo1', 'bar1', 'bob1') -- no type casts needed
, ('foo2', 'bar2', 'bob2')
) sub
OFFSET 1
)
...
Aside: don't use reserved words like "user" as identifier. That's a loaded footgun. Use legal, lower-case, unquoted identifiers. I replaced it with usr.
| PostgreSQL | 34,708,509 | 352 |
I'm adding a new, "NOT NULL" column to my Postgresql database using the following query (sanitized for the Internet):
ALTER TABLE mytable ADD COLUMN mycolumn character varying(50) NOT NULL;
Each time I run this query, I receive the following error message:
ERROR: column "mycolumn" contains null values
I'm stumped. Where am I going wrong?
NOTE: I'm using pgAdmin III (1.8.4) primarily, but I received the same error when I ran the SQL from within Terminal.
| You have to set a default value.
ALTER TABLE mytable ADD COLUMN mycolumn character varying(50) NOT NULL DEFAULT 'foo';
... some work (set real values as you want)...
ALTER TABLE mytable ALTER COLUMN mycolumn DROP DEFAULT;
| PostgreSQL | 512,451 | 350 |
I have a table in PostgreSQL with many columns, and I want to add an auto increment primary key.
I tried to create a column called id of type BIGSERIAL but pgadmin responded with an error:
ERROR: sequence must have same owner as table it is linked to.
Does anyone know how to fix this issue? How do I add or create an auto-incrementing primary key in PostgreSQL without recreating the table?
| Try this command:
ALTER TABLE your_table ADD COLUMN key_column BIGSERIAL PRIMARY KEY;
Try it as the same DB user that created the table.
| PostgreSQL | 7,718,585 | 348 |