Each record below has five fields: question (string), answer (string), tag (one of 130 classes), question_id (int64) and score (int64).
I'm trying to delete orphan entries in a MySQL table. I have 2 tables like this:

Table files:

    id
    --
    1
    2
    7
    9

Table blob:

    fileid
    ------
    1
    2
    3
    4
    4
    4
    9

The fileid and id columns can be used to join the tables together. I want to delete all rows in table blob where fileid cannot be found in the table files.id. So using the example above, that would delete rows 3 and the 4s in the blob table.
Using LEFT JOIN / IS NULL:

    DELETE b
      FROM BLOB b
      LEFT JOIN FILES f ON f.id = b.fileid
     WHERE f.id IS NULL

Using NOT EXISTS:

    DELETE FROM BLOB
     WHERE NOT EXISTS (SELECT NULL
                         FROM FILES f
                        WHERE f.id = fileid)

Using NOT IN:

    DELETE FROM BLOB
     WHERE fileid NOT IN (SELECT f.id
                            FROM FILES f)

Warning: whenever possible, perform DELETEs within a transaction (assuming the storage engine supports them, i.e., not MyISAM) so you can use ROLLBACK to revert changes in case of problems.
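Since the warning above mentions transactions without showing one, here is a minimal sketch of wrapping the delete so it can be reverted, assuming the blob table uses a transactional engine such as InnoDB:

    START TRANSACTION;

    DELETE b
      FROM BLOB b
      LEFT JOIN FILES f ON f.id = b.fileid
     WHERE f.id IS NULL;

    -- inspect the affected-rows count; if it looks wrong:
    ROLLBACK;
    -- otherwise make the change permanent:
    COMMIT;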
MySQL
3,384,127
219
I'm trying to finish this query; my tag field is set to UNIQUE and I simply want the database to ignore any duplicate tag:

    INSERT INTO table_tags (tag) VALUES ('tag_a'),('tag_b'),('tag_c')
    ON DUPLICATE KEY IGNORE '*the offending tag and carry on*'

or even this would be acceptable:

    INSERT INTO table_tags (tag) VALUES ('tag_a'),('tag_b'),('tag_c')
    ON DUPLICATE KEY UPDATE '*the offending tag and carry on*'
I would suggest NOT using INSERT IGNORE, as it ignores ALL errors (i.e., it's a sloppy global ignore). Instead, since in your example tag is the unique key, use:

    INSERT INTO table_tags (tag) VALUES ('tag_a'),('tag_b'),('tag_c')
    ON DUPLICATE KEY UPDATE tag = tag;

On a duplicate key this produces:

    Query OK, 0 rows affected (0.07 sec)
MySQL
2,366,813
219
Is it possible to somehow get the structure of a MySQL database, or just of some table, with a simple query? Or is there another way I can do it?
I think what you're after is DESCRIBE:

    DESCRIBE table;

You can also use SHOW TABLES:

    SHOW TABLES;

to get a list of the tables in your database.
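If you need more than DESCRIBE offers, two other standard options are SHOW CREATE TABLE, which prints the full DDL including indexes and constraints, and the information_schema views. A sketch, where mydb and mytable are placeholder names:

    SHOW CREATE TABLE mytable;

    SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE, COLUMN_KEY
      FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';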
MySQL
898,688
219
I need to delete all rows from a table, but when I add a new row, I want the primary key ID, which has an auto increment, to start again from 1.
Do not delete; use TRUNCATE:

    TRUNCATE TABLE xxx;

The table handler does not remember the last used AUTO_INCREMENT value, but starts counting from the beginning. This is true even for MyISAM and InnoDB, which normally do not reuse sequence values. (Source: the MySQL Reference Manual on TRUNCATE TABLE.)
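If TRUNCATE is not an option (for example, the table is referenced by a foreign key), a sketch of the alternative is to delete the rows and reset the counter manually; note that for InnoDB the counter cannot be set lower than the current maximum id plus one:

    DELETE FROM xxx;
    ALTER TABLE xxx AUTO_INCREMENT = 1;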
MySQL
12,651,867
218
How can I copy values from one column to another? I have:

Database name: list

    number | test
    -------+-----------
    123456 | somedata
    123486 | somedata1
    232344 | 34

I want to have:

    number | test
    -------+--------
    123456 | 123456
    123486 | 123486
    232344 | 232344

What MySQL query should I have?
The short answer for the code in question is:

    UPDATE `table` SET test = number

Here table is the table name and it's surrounded by grave accents (aka back-ticks, `), as this is MySQL's convention for escaping keywords (and TABLE is a keyword in this case).

BEWARE! This is a pretty dangerous query which will wipe everything in column test in every row of your table, replacing it with number (regardless of its value). It is more common to use a WHERE clause to limit your query to a specific set of rows:

    UPDATE `products` SET `in_stock` = true WHERE `supplier_id` = 10
MySQL
9,001,939
218
How can I calculate the difference between two dates in the format YYYY-MM-DD hh:mm:ss and get the result in seconds or milliseconds?
    SELECT TIMEDIFF('2007-12-31 10:02:00', '2007-12-30 12:01:01');
    -- result: 22:00:59, the difference in HH:MM:SS format

    SELECT TIMESTAMPDIFF(SECOND, '2007-12-30 12:01:01', '2007-12-31 10:02:00');
    -- result: 79259, the difference in seconds

So you can use TIMESTAMPDIFF for your purpose.
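For the milliseconds part of the question, TIMESTAMPDIFF also accepts a MICROSECOND unit, so a sketch would be the query below. Note that a plain DATETIME carries no fractional seconds, so the sub-second part will always be zero unless the column is declared with fractional precision, e.g. DATETIME(3):

    SELECT TIMESTAMPDIFF(MICROSECOND,
                         '2007-12-30 12:01:01',
                         '2007-12-31 10:02:00') / 1000 AS diff_ms;
    -- result: 79259000.0000, the difference in milliseconds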
MySQL
4,759,248
218
What is the equivalent of varchar(max) in MySQL?
The max length of a VARCHAR is subject to the max row size in MySQL, which is 64KB (not counting BLOBs):

    VARCHAR(65535)

However, note that the limit is lower if you use a multi-byte character set:

    VARCHAR(21844) CHARACTER SET utf8

Here are some examples. The maximum row size is 65535, but a VARCHAR also includes a byte or two to encode the length of a given string. So you actually can't declare a VARCHAR of the maximum row size, even if it's the only column in the table.

    mysql> CREATE TABLE foo ( v VARCHAR(65534) );
    ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs

But if we try decreasing lengths, we find the greatest length that works:

    mysql> CREATE TABLE foo ( v VARCHAR(65532) );
    Query OK, 0 rows affected (0.01 sec)

Now if we try to use a multibyte charset at the table level, we find that it counts each character as multiple bytes. UTF-8 strings don't necessarily use multiple bytes per character, but MySQL can't assume you'll restrict all your future inserts to single-byte characters.

    mysql> CREATE TABLE foo ( v VARCHAR(65532) ) CHARSET=utf8;
    ERROR 1074 (42000): Column length too big for column 'v' (max = 21845); use BLOB or TEXT instead

In spite of what the last error told us, InnoDB still doesn't like a length of 21845.

    mysql> CREATE TABLE foo ( v VARCHAR(21845) ) CHARSET=utf8;
    ERROR 1118 (42000): Row size too large. The maximum row size for the used table type, not counting BLOBs, is 65535. This includes storage overhead, check the manual. You have to change some columns to TEXT or BLOBs

This makes perfect sense if you calculate that 21845*3 = 65535, which wouldn't have worked anyway, whereas 21844*3 = 65532, which does work.

    mysql> CREATE TABLE foo ( v VARCHAR(21844) ) CHARSET=utf8;
    Query OK, 0 rows affected (0.32 sec)
MySQL
332,798
218
In the footer of my page, I would like to add something like "last updated the xx/xx/200x", with this date being the last time a certain MySQL table was updated. What is the best way to do that? Is there a function to retrieve the last updated date? Should I access the database every time I need this value?
In later versions of MySQL you can use the information_schema database to tell you when another table was updated:

    SELECT UPDATE_TIME
      FROM information_schema.tables
     WHERE TABLE_SCHEMA = 'dbname'
       AND TABLE_NAME = 'tabname'

This does of course mean opening a connection to the database.

An alternative option would be to "touch" a particular file whenever the MySQL table is updated:

On database updates: open your timestamp file in O_RDWR mode and close it again, or alternatively use touch(), the PHP equivalent of the utimes() function, to change the file timestamp.

On page display: use stat() to read back the file modification time.
MySQL
307,438
218
I want to find an SQL query to find rows where field1 does not contain $x. How can I do this?
What kind of field is this? The IN operator cannot be used with a single field, but is meant to be used in subqueries or with predefined lists:

    -- subquery
    SELECT a FROM x WHERE x.b NOT IN (SELECT b FROM y);
    -- predefined list
    SELECT a FROM x WHERE x.b NOT IN (1, 2, 3, 6);

If you are searching a string, go for the LIKE operator (but this will be slow):

    -- Finds all rows where a does not contain "text"
    SELECT * FROM x WHERE x.a NOT LIKE '%text%';

If you restrict it so that the string you are searching for has to start with the given string, it can use indices (if there is an index on that field) and be reasonably fast:

    -- Finds all rows where a does not start with "text"
    SELECT * FROM x WHERE x.a NOT LIKE 'text%';
MySQL
232,935
218
When I execute this command in MySQL:

    SET FOREIGN_KEY_CHECKS=0;

Does it affect the whole engine, or only my current transaction?
It is session-based, when set the way you did in your question.

https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html

According to this, FOREIGN_KEY_CHECKS is "Both" for scope. This means it can be set for the session:

    SET FOREIGN_KEY_CHECKS=0;

or globally:

    SET GLOBAL FOREIGN_KEY_CHECKS=0;
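To confirm which scope you actually changed, a quick sketch that reads both values back:

    SELECT @@SESSION.foreign_key_checks, @@GLOBAL.foreign_key_checks;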
MySQL
8,538,636
217
Error message on MySQL:

    Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '='

I have gone through several other posts and was not able to solve this problem. The part affected is something similar to this:

    CREATE TABLE users (
        userID INT UNSIGNED NOT NULL AUTO_INCREMENT,
        firstName VARCHAR(24) NOT NULL,
        lastName VARCHAR(24) NOT NULL,
        username VARCHAR(24) NOT NULL,
        password VARCHAR(40) NOT NULL,
        PRIMARY KEY (userid)
    ) ENGINE = INNODB CHARACTER SET utf8 COLLATE utf8_unicode_ci;

    CREATE TABLE products (
        productID INT UNSIGNED NOT NULL AUTO_INCREMENT,
        title VARCHAR(104) NOT NULL,
        picturePath VARCHAR(104) NULL,
        pictureThumb VARCHAR(104) NULL,
        creationDate DATE NOT NULL,
        closeDate DATE NULL,
        deleteDate DATE NULL,
        varPath VARCHAR(104) NULL,
        isPublic TINYINT(1) UNSIGNED NOT NULL DEFAULT '1',
        PRIMARY KEY (productID)
    ) ENGINE = INNODB CHARACTER SET utf8 COLLATE utf8_unicode_ci;

    CREATE TABLE productUsers (
        productID INT UNSIGNED NOT NULL,
        userID INT UNSIGNED NOT NULL,
        permission VARCHAR(16) NOT NULL,
        PRIMARY KEY (productID, userID),
        FOREIGN KEY (productID) REFERENCES products (productID) ON DELETE RESTRICT ON UPDATE NO ACTION,
        FOREIGN KEY (userID) REFERENCES users (userID) ON DELETE RESTRICT ON UPDATE NO ACTION
    ) ENGINE = INNODB CHARACTER SET utf8 COLLATE utf8_unicode_ci;

The stored procedure I'm using is this:

    CREATE PROCEDURE updateProductUsers (IN rUsername VARCHAR(24), IN rProductID INT UNSIGNED, IN rPerm VARCHAR(16))
    BEGIN
        UPDATE productUsers
        INNER JOIN users
            ON productUsers.userID = users.userID
        SET productUsers.permission = rPerm
        WHERE users.username = rUsername
          AND productUsers.productID = rProductID;
    END

I was testing with PHP, but the same error is given with SQLyog. I have also tested recreating the entire DB, but to no avail. Any help will be much appreciated.
The default collation for stored procedure parameters is utf8_general_ci and you can't mix collations, so you have four options:

Option 1: add COLLATE to your input variable:

    SET @rUsername = 'aname' COLLATE utf8_unicode_ci; -- COLLATE added
    CALL updateProductUsers(@rUsername, @rProductID, @rPerm);

Option 2: add COLLATE to the WHERE clause:

    CREATE PROCEDURE updateProductUsers(
        IN rUsername VARCHAR(24),
        IN rProductID INT UNSIGNED,
        IN rPerm VARCHAR(16))
    BEGIN
        UPDATE productUsers
        INNER JOIN users
            ON productUsers.userID = users.userID
        SET productUsers.permission = rPerm
        WHERE users.username = rUsername COLLATE utf8_unicode_ci -- COLLATE added
          AND productUsers.productID = rProductID;
    END

Option 3: add it to the IN parameter definition (pre-MySQL 5.7):

    CREATE PROCEDURE updateProductUsers(
        IN rUsername VARCHAR(24) COLLATE utf8_unicode_ci, -- COLLATE added
        IN rProductID INT UNSIGNED,
        IN rPerm VARCHAR(16))
    BEGIN
        UPDATE productUsers
        INNER JOIN users
            ON productUsers.userID = users.userID
        SET productUsers.permission = rPerm
        WHERE users.username = rUsername
          AND productUsers.productID = rProductID;
    END

Option 4: alter the field itself:

    ALTER TABLE users CHARACTER SET utf8 COLLATE utf8_general_ci;

Unless you need to sort data in Unicode order, I would suggest altering all your tables to use utf8_general_ci collation, as it requires no code changes and will speed sorts up slightly.

UPDATE: utf8mb4/utf8mb4_unicode_ci is now the preferred character set/collation method. utf8_general_ci is advised against, as the performance improvement is negligible. See https://stackoverflow.com/a/766996/1432614
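Before picking an option, it can help to see which collations are actually in play; a minimal diagnostic sketch:

    SHOW FULL COLUMNS FROM users;   -- the Collation column shows each column's collation
    SELECT @@collation_connection, @@collation_database;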
MySQL
11,770,074
216
Is there a measurable performance difference between using INT vs. VARCHAR as a primary key in MySQL? I'd like to use VARCHAR as the primary key for reference lists (think US States, Country Codes) and a coworker won't budge on the INT AUTO_INCREMENT as a primary key for all tables. My argument, as detailed here, is that the performance difference between INT and VARCHAR is negligible, since every INT foreign key reference will require a JOIN to make sense of the reference, a VARCHAR key will directly present the information. So, does anyone have experience with this particular use-case and the performance concerns associated with it?
I was a bit annoyed by the lack of benchmarks for this online, so I ran a test myself. Note though that I don't do this on a regular basis, so please check my setup and steps for any factors that could have influenced the results unintentionally, and post your concerns in comments.

The setup was as follows:

- Intel® Core™ i7-7500U CPU @ 2.70GHz × 4
- 15.6 GiB RAM, of which I ensured around 8 GB was free during the test
- 148.6 GB SSD drive, with plenty of free space
- Ubuntu 16.04 64-bit
- MySQL Ver 14.14 Distrib 5.7.20, for Linux (x86_64)

The tables:

    create table jan_int (data1 varchar(255), data2 int(10), myindex tinyint(4)) ENGINE=InnoDB;
    create table jan_int_index (data1 varchar(255), data2 int(10), myindex tinyint(4), INDEX (myindex)) ENGINE=InnoDB;
    create table jan_char (data1 varchar(255), data2 int(10), myindex char(6)) ENGINE=InnoDB;
    create table jan_char_index (data1 varchar(255), data2 int(10), myindex char(6), INDEX (myindex)) ENGINE=InnoDB;
    create table jan_varchar (data1 varchar(255), data2 int(10), myindex varchar(63)) ENGINE=InnoDB;
    create table jan_varchar_index (data1 varchar(255), data2 int(10), myindex varchar(63), INDEX (myindex)) ENGINE=InnoDB;

Then I filled 10 million rows in each table with a PHP script whose essence is like this:

    $pdo = get_pdo();
    $keys = ['alabam', 'massac', 'newyor', 'newham', 'delawa', 'califo', 'nevada', 'texas_', 'florid', 'ohio__'];
    for ($k = 0; $k < 10; $k++) {
        for ($j = 0; $j < 1000; $j++) {
            $val = '';
            for ($i = 0; $i < 1000; $i++) {
                $val .= '("' . generate_random_string() . '", ' . rand(0, 10000) . ', "' . ($keys[rand(0, 9)]) . '"),';
            }
            $val = rtrim($val, ',');
            $pdo->query('INSERT INTO jan_char VALUES ' . $val);
        }
        echo "\n" . ($k + 1) . ' million(s) rows inserted.';
    }

For int tables, the bit ($keys[rand(0, 9)]) was replaced with just rand(0, 9), and for varchar tables, I used full US state names, without cutting or extending them to 6 characters. generate_random_string() generates a 10-character random string.

Then I ran in MySQL:

    SET SESSION query_cache_type = 0;

For the jan_int table:

    SELECT count(*) FROM jan_int WHERE myindex = 5;
    SELECT BENCHMARK(1000000000, (SELECT count(*) FROM jan_int WHERE myindex = 5));

For the other tables, same as above, with myindex = 'califo' for char tables and myindex = 'california' for varchar tables.
Times of the BENCHMARK query on each table:

    jan_int:           21.30 sec
    jan_int_index:     18.79 sec
    jan_char:          21.70 sec
    jan_char_index:    18.85 sec
    jan_varchar:       21.76 sec
    jan_varchar_index: 18.86 sec

Regarding table and index sizes, here's the output of show table status from janperformancetest; (with a few columns not shown):

    | Name              | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Collation              |
    | jan_int           | InnoDB | 10      | Dynamic    | 9739094 | 43             | 422510592   | 0               | 0            | 4194304   | NULL           | utf8mb4_unicode_520_ci |
    | jan_int_index     | InnoDB | 10      | Dynamic    | 9740329 | 43             | 420413440   | 0               | 132857856    | 7340032   | NULL           | utf8mb4_unicode_520_ci |
    | jan_char          | InnoDB | 10      | Dynamic    | 9726613 | 51             | 500170752   | 0               | 0            | 5242880   | NULL           | utf8mb4_unicode_520_ci |
    | jan_char_index    | InnoDB | 10      | Dynamic    | 9719059 | 52             | 513802240   | 0               | 202342400    | 5242880   | NULL           | utf8mb4_unicode_520_ci |
    | jan_varchar       | InnoDB | 10      | Dynamic    | 9722049 | 53             | 521142272   | 0               | 0            | 7340032   | NULL           | utf8mb4_unicode_520_ci |
    | jan_varchar_index | InnoDB | 10      | Dynamic    | 9738381 | 49             | 486539264   | 0               | 202375168    | 7340032   | NULL           | utf8mb4_unicode_520_ci |

My conclusion is that there's no performance difference for this particular use case.
MySQL
332,300
216
I have two columns in table users, namely registerDate and lastVisitDate, which are of the datetime data type. I would like to do the following:

- Set registerDate's default value to MySQL NOW()
- Set lastVisitDate's default value to 0000-00-00 00:00:00, instead of NULL which it uses by default.

Because the table already exists and has existing records, I would like to use MODIFY. I've tried using the two pieces of code below, but neither works:

    ALTER TABLE users MODIFY registerDate datetime DEFAULT NOW()
    ALTER TABLE users MODIFY registerDate datetime DEFAULT CURRENT_TIMESTAMP;

It gives me the error:

    ERROR 1067 (42000): Invalid default value for 'registerDate'

Is it possible for me to set the default datetime value to NOW() in MySQL?
As of MySQL 5.6.5, you can use the DATETIME type with a dynamic default value:

    CREATE TABLE foo (
        creation_time     DATETIME DEFAULT CURRENT_TIMESTAMP,
        modification_time DATETIME ON UPDATE CURRENT_TIMESTAMP
    )

Or even combine both rules:

    modification_time DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP

References:
http://dev.mysql.com/doc/refman/5.7/en/timestamp-initialization.html
http://optimize-this.blogspot.com/2012/04/datetime-default-now-finally-available.html

Prior to 5.6.5, you need to use the TIMESTAMP data type, which automatically updates whenever the record is modified. Unfortunately, however, only one auto-updated TIMESTAMP field can exist per table.

    CREATE TABLE mytable (
        mydate TIMESTAMP
    )

See: http://dev.mysql.com/doc/refman/5.1/en/create-table.html

If you want to prevent MySQL from updating the timestamp value on UPDATE (so that it only triggers on INSERT), you can change the definition to:

    CREATE TABLE mytable (
        mydate TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
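Applied to the existing table from the question, this means that on 5.6.5 or later the original ALTER works as written; a sketch:

    ALTER TABLE users
      MODIFY registerDate DATETIME DEFAULT CURRENT_TIMESTAMP;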
MySQL
5,818,423
215
I am wondering if there is any difference with regard to performance between the following:

    SELECT ... FROM ... WHERE someFIELD IN (1,2,3,4)
    SELECT ... FROM ... WHERE someFIELD BETWEEN 0 AND 5
    SELECT ... FROM ... WHERE someFIELD = 1 OR someFIELD = 2 OR someFIELD = 3 ...

or will MySQL optimize the SQL in the same way compilers optimize code?

EDIT: Changed the ANDs to ORs for the reason stated in the comments.
I needed to know this for sure, so I benchmarked both methods. I consistently found IN to be much faster than using OR. Do not believe people who give their "opinion"; science is all about testing and evidence.

I ran a loop of 1000x the equivalent queries (for consistency, I used sql_no_cache):

    IN: 2.34969592094s
    OR: 5.83781504631s

Update: (I don't have the source code for the original test, as it was 6 years ago, though it returns a result in the same range as this test.)

In response to requests for some sample code to test this, here is the simplest possible use case. Using Eloquent for syntax simplicity; the raw SQL equivalent executes the same.

    $t = microtime(true);
    for ($i = 0; $i < 10000; $i++):
        $q = DB::table('users')->where('id', 1)
            ->orWhere('id', 2)
            ->orWhere('id', 3)
            ->orWhere('id', 4)
            ->orWhere('id', 5)
            ->orWhere('id', 6)
            ->orWhere('id', 7)
            ->orWhere('id', 8)
            ->orWhere('id', 9)
            ->orWhere('id', 10)
            ->orWhere('id', 11)
            ->orWhere('id', 12)
            ->orWhere('id', 13)
            ->orWhere('id', 14)
            ->orWhere('id', 15)
            ->orWhere('id', 16)
            ->orWhere('id', 17)
            ->orWhere('id', 18)
            ->orWhere('id', 19)
            ->orWhere('id', 20)->get();
    endfor;
    $t2 = microtime(true);
    echo $t . "\n" . $t2 . "\n" . ($t2 - $t) . "\n";

    1482080514.3635
    1482080517.3713
    3.0078368186951

    $t = microtime(true);
    for ($i = 0; $i < 10000; $i++):
        $q = DB::table('users')->whereIn('id', [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20])->get();
    endfor;
    $t2 = microtime(true);
    echo $t . "\n" . $t2 . "\n" . ($t2 - $t) . "\n";

    1482080534.0185
    1482080536.178
    2.1595389842987
MySQL
782,915
215
I am using prepared statements to execute MySQL database queries, and I want to implement search functionality based on a keyword of sorts. For that I need to use the LIKE keyword, that much I know. And I have also used prepared statements before, but I do not know how to use them with LIKE, because from the following code, where would I add the 'keyword%'? Can I directly use it in the pstmt.setString(1, notes) as (1, notes+"%") or something like that? I see a lot of posts on this on the web but no good answer anywhere.

    PreparedStatement pstmt = con.prepareStatement(
          "SELECT * FROM analysis WHERE notes like ?");
    pstmt.setString(1, notes);
    ResultSet rs = pstmt.executeQuery();
You need to set it in the value itself, not in the prepared statement SQL string. So this should do for a prefix-match:

    notes = notes
        .replace("!", "!!")
        .replace("%", "!%")
        .replace("_", "!_")
        .replace("[", "![");
    PreparedStatement pstmt = con.prepareStatement(
            "SELECT * FROM analysis WHERE notes LIKE ? ESCAPE '!'");
    pstmt.setString(1, notes + "%");

or a suffix-match:

    pstmt.setString(1, "%" + notes);

or a global match:

    pstmt.setString(1, "%" + notes + "%");
MySQL
8,247,970
214
What is the upper limit of records for a MySQL database table? I'm wondering about the autoincrement field. What would happen if I add millions of records? How do I handle this kind of situation? Thx!
The greatest value of an integer has little to do with the maximum number of rows you can store in a table.

It's true that if you use an int or bigint as your primary key, you can only have as many rows as the number of unique values in the data type of your primary key, but you don't have to make your primary key an integer; you could make it a CHAR(100). You could also declare the primary key over more than one column.

There are other constraints on table size besides number of rows. For instance, you could use an operating system that has a file size limitation. Or you could have a 300GB hard drive that can store only 300 million rows if each row is 1KB in size.

The limits of database size are really high:

http://dev.mysql.com/doc/refman/5.1/en/source-configuration-options.html

The MyISAM storage engine supports 2^32 rows per table, but you can build MySQL with the --with-big-tables option to make it support up to 2^64 rows per table.

http://dev.mysql.com/doc/refman/5.1/en/innodb-restrictions.html

The InnoDB storage engine has an internal 6-byte row ID per table, so there is a maximum number of rows equal to 2^48, or 281,474,976,710,656.

An InnoDB tablespace also has a limit on table size of 64 terabytes. How many rows fit into this depends on the size of each row.

The 64TB limit assumes the default page size of 16KB. You can increase the page size, and thereby increase the tablespace up to 256TB. But I think you'd find other performance factors make this inadvisable long before you grow a table to that size.
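To see how close an existing table is to any of these limits, you can ask information_schema for its current row count and on-disk size. A sketch, with placeholder schema and table names; note the row count is only an estimate for InnoDB:

    SELECT TABLE_ROWS,
           DATA_LENGTH + INDEX_LENGTH AS total_bytes
      FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';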
MySQL
2,716,232
214
With the following statement:

    mysqldump --complete-insert --lock-all-tables --no-create-db --no-create-info --extended-insert --password=XXX -u XXX --dump-date yyy > yyy_dataOnly.sql

I get INSERT statements like the following:

    INSERT INTO `table` VALUES (1,'something'),(2,'anything'),(3,'everything');

What I need in my case is something like this:

    INSERT INTO `table` VALUES (1,'something');
    INSERT INTO `table` VALUES (2,'anything');
    INSERT INTO `table` VALUES (3,'everything');

Is there a way to tell mysqldump to create a new INSERT statement for each row? Thanks for your help!
Use:

    mysqldump --extended-insert=FALSE

Be aware that multiple inserts will be slower than one big insert.
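Applied to the command from the question, the sketch below just swaps the flag; --skip-extended-insert is the equivalent spelling accepted by mysqldump:

    mysqldump --complete-insert --lock-all-tables --no-create-db \
        --no-create-info --extended-insert=FALSE --password=XXX -u XXX \
        --dump-date yyy > yyy_dataOnly.sql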
MySQL
12,439,353
213
I am working on a project where I need to create a database with 300 tables for each user who wants to see the demo application. It was working fine, but today when I was testing with a new user to see a demo, it showed me this error message:

    1030 Got error 28 from storage engine

After spending some time googling, I found it is an error related to database space or temporary files. I tried to fix it but failed; now I am not even able to start MySQL. How can I fix this? I would also like to increase the size to the maximum so that I won't face the same issue again and again.
Mysql error "28 from storage engine" - means "not enough disk space". To show disc space use command below. myServer# df -h Results must be like this. Filesystem Size Used Avail Capacity Mounted on /dev/vdisk 13G 13G 46M 100% / devfs 1.0k 1.0k 0B 100% /dev
MySQL
10,631,387
213
I can't get MySQL to start on OS X 10.7. It is located in /usr/local/mysql/bin/mysql. I get "command not found" when I type mysql --version in the terminal. I tried the fix from "can't access mysql from command line mac" but still get "command not found". I tried installing MySQL with the dmg, and I have tried to install it through Homebrew, and I can't get it to work. When I try to start MySQL with Homebrew I get "MySQL won't start". This is my $PATH:

    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/local/git/bin:/Users/Victoria/bin:/usr/local//usr/local/mysql/bin/private/var/mysql/private/var/mysql/bin
This is the problem with your $PATH:

    /usr/local//usr/local/mysql/bin/private/var/mysql/private/var/mysql/bin

$PATH is where the shell searches for command files. Folders to search in need to be separated with a colon. You want /usr/local/mysql/bin/ in your path, but instead the shell searches in /usr/local//usr/local/mysql/bin/private/var/mysql/private/var/mysql/bin, which probably doesn't exist. Instead you want ${PATH}:/usr/local/mysql/bin. So do:

    export PATH=${PATH}:/usr/local/mysql/bin

If you want this to run every time you open Terminal, put it in the file .bash_profile, which is run when Terminal opens.
MySQL
10,577,374
212
Possible Duplicate: Why would someone use WHERE 1=1 AND <conditions> in a SQL clause?

I saw some people use a statement to query a table in a MySQL database like the following:

    select * from car_table where 1=1 and value="TOYOTA"

But what does 1=1 mean here?
It's usually used when folks build up SQL statements. When you add "and value = 'Toyota'" you don't have to worry about whether there is a condition before it or just WHERE. The optimiser should ignore it.

No magic, just practical.

Example code:

    commandText = "select * from car_table where 1=1";
    if (modelYear <> 0)     commandText += " and year=" + modelYear
    if (manufacturer <> "") commandText += " and value=" + QuotedStr(manufacturer)
    if (color <> "")        commandText += " and color=" + QuotedStr(color)
    if (california)         commandText += " and hasCatalytic=1"

Otherwise you would have to have a complicated set of logic:

    commandText = "select * from car_table"
    whereClause = "";
    if (modelYear <> 0)
    {
        if (whereClause <> "") whereClause = whereClause + " and ";
        whereClause += "year=" + modelYear;
    }
    if (manufacturer <> "")
    {
        if (whereClause <> "") whereClause = whereClause + " and ";
        whereClause += "value=" + QuotedStr(manufacturer)
    }
    if (color <> "")
    {
        if (whereClause <> "") whereClause = whereClause + " and ";
        whereClause += "color=" + QuotedStr(color)
    }
    if (california)
    {
        if (whereClause <> "") whereClause = whereClause + " and ";
        whereClause += "hasCatalytic=1"
    }
    if (whereClause <> "")
        commandText = commandText + " WHERE " + whereClause;
MySQL
8,149,142
212
We're using Doctrine, a PHP ORM. I am creating a query like this:

    $q = Doctrine_Query::create()->select('id')->from('MyTable');

and then in the function I'm adding in various where clauses and things as appropriate, like this:

    $q->where('normalisedname = ? OR name = ?', array($string, $originalString));

Later on, before execute()-ing that query object, I want to print out the raw SQL in order to examine it, and do this:

    $q->getSQLQuery();

However that only prints out the prepared statement, not the full query. I want to see what it is sending to MySQL, but instead it is printing out a prepared statement, including ?'s. Is there some way to see the 'full' query?
Doctrine is not sending a "real SQL query" to the database server : it is actually using prepared statements, which means : Sending the statement, for it to be prepared (this is what is returned by $query->getSql()) And, then, sending the parameters (returned by $query->getParameters()) and executing the prepared statements This means there is never a "real" SQL query on the PHP side — so, Doctrine cannot display it.
MySQL
2,095,394
212
What is the valid syntax for this query in MySQL?

    SELECT * FROM courses WHERE (now() + 2 hours) > start_time

Note: start_time is a field of the courses table.
    SELECT * FROM courses WHERE DATE_ADD(NOW(), INTERVAL 2 HOUR) > start_time

See Date and Time Functions for other date/time manipulation.
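MySQL also accepts interval arithmetic directly with the + operator, so a nearly literal fix of the query attempted in the question is:

    SELECT * FROM courses WHERE NOW() + INTERVAL 2 HOUR > start_time;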
MySQL
589,652
212
While importing a database in MySQL, I got the following error:

    1418 (HY000) at line 10185: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

I don't know which things I need to change. Can anyone help me resolve this?
There are two ways to fix this:

1. Execute the following in the MySQL console:

    SET GLOBAL log_bin_trust_function_creators = 1;

2. Add the following to the my.cnf/my.ini configuration file:

    log_bin_trust_function_creators = 1

The setting relaxes the checking for non-deterministic functions. Non-deterministic functions are functions that modify data (i.e. contain UPDATE, INSERT or DELETE statements). For more info, see here.

Please note: if binary logging is NOT enabled, this setting does not apply.

"Binary Logging of Stored Programs: If binary logging is not enabled, log_bin_trust_function_creators does not apply."

"log_bin_trust_function_creators: This variable applies when binary logging is enabled."

The best approach is a better understanding and use of deterministic declarations for stored functions. These declarations are used by MySQL to optimize replication, and it is a good thing to choose them carefully to have healthy replication.

DETERMINISTIC: A routine is considered "deterministic" if it always produces the same result for the same input parameters, and NOT DETERMINISTIC otherwise. This is mostly used with string or math processing, but not limited to that.

NOT DETERMINISTIC: The opposite of DETERMINISTIC. "If neither DETERMINISTIC nor NOT DETERMINISTIC is given in the routine definition, the default is NOT DETERMINISTIC. To declare that a function is deterministic, you must specify DETERMINISTIC explicitly." So it seems that if no declaration is made, MySQL will treat the function as NOT DETERMINISTIC. This statement from the manual contradicts another statement from a different area of the manual, which says: "When you create a stored function, you must declare either that it is deterministic or that it does not modify data. Otherwise, it may be unsafe for data recovery or replication. By default, for a CREATE FUNCTION statement to be accepted, at least one of DETERMINISTIC, NO SQL, or READS SQL DATA must be specified explicitly. Otherwise an error occurs." I personally got the error in MySQL 5.5 when there was no declaration, so I always put at least one declaration of DETERMINISTIC, NOT DETERMINISTIC, NO SQL or READS SQL DATA, regardless of the other declarations I may have.

READS SQL DATA: This explicitly tells MySQL that the function will ONLY read data from databases; thus, it does not contain instructions that modify data, but it contains SQL instructions that read data (e.g. SELECT).

MODIFIES SQL DATA: This indicates that the routine contains statements that may write data (for example, UPDATE, INSERT, DELETE or ALTER instructions).

NO SQL: This indicates that the routine contains no SQL statements.

CONTAINS SQL: This indicates that the routine contains SQL instructions, but does not contain statements that read or write data. This is the default if none of these characteristics is given explicitly. Examples of such statements are SELECT NOW(), SELECT 10+@b, SET @x = 1 or DO RELEASE_LOCK('abc'), which execute but neither read nor write data.

Note that there are MySQL functions that are not deterministically safe, such as NOW(), UUID(), etc., which are likely to produce different results on different machines, so a user function that contains such instructions must be declared NOT DETERMINISTIC. Also, a function that reads data from an unreplicated schema is clearly NOT DETERMINISTIC.
Assessment of the nature of a routine is based on the "honesty" of the creator: MySQL does not check whether a routine declared DETERMINISTIC is free of statements that produce nondeterministic results. However, misdeclaring a routine might affect results or performance. Declaring a nondeterministic routine as DETERMINISTIC might lead to unexpected results by causing the optimizer to make incorrect execution plan choices. Declaring a deterministic routine as NONDETERMINISTIC might diminish performance by causing available optimizations not to be used.
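As a concrete illustration of these declarations, here is a minimal sketch of a function that only reads data and therefore passes the check without touching log_bin_trust_function_creators; the table and column names are made up for the example:

    DELIMITER //
    CREATE FUNCTION count_users_in(p_city VARCHAR(64))
    RETURNS INT
    READS SQL DATA            -- declares: reads data, never modifies it
    BEGIN
        DECLARE n INT;
        SELECT COUNT(*) INTO n FROM users WHERE city = p_city;
        RETURN n;
    END//
    DELIMITER ;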
MySQL
26,015,160
211
I want to move away from PHP a little and learn Python. In order to do web development with Python I'll need a framework to help with templating and other things. I have a non-production server that I use to test all of web development stuff on. It is a Debian 7.1 LAMP stack that runs MariaDB instead of the common MySQL-server package. Yesterday I installed Django and created my first project called firstweb. I have not changed any settings yet. Here is my first big piece of confusion. In the tutorial I followed the guy installed Django, started his first project, restarted Apache, and Django just worked from then on. He went to his browser and went to the Django default page with no problems. Me however, I have to cd into my firstweb folder and run python manage.py runserver myip:port And it works. No problem. But I'm wondering if it is supposed to work like this, and if this will cause problems down the line? My second question is that I want to set it up so it uses my MySQL database. I go into my settings.py under /firstweb/firstweb and I see ENGINE and NAME but I'm not sure what to put here. And then in the USER, PASSWORD, and HOST areas is this my database and its credentials? If I am using localhost can I just put localhost in the HOST area?
MySQL support is simple to add. In your DATABASES dictionary, you will have an entry like this:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'DB_NAME',
            'USER': 'DB_USER',
            'PASSWORD': 'DB_PASSWORD',
            'HOST': 'localhost',   # Or an IP Address that your DB is hosted on
            'PORT': '3306',
        }
    }

You also have the option of utilizing MySQL option files, as of Django 1.7. You can accomplish this by setting your DATABASES array like so:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'OPTIONS': {
                'read_default_file': '/path/to/my.cnf',
            },
        }
    }

You also need to create the /path/to/my.cnf file with similar settings from above:

    [client]
    database = DB_NAME
    host = localhost
    user = DB_USER
    password = DB_PASSWORD
    default-character-set = utf8

With this new method of connecting in Django 1.7, it is important to know the order in which connection settings are applied:

1. OPTIONS
2. NAME, USER, PASSWORD, HOST, PORT
3. MySQL option files

In other words, if you set the name of the database in OPTIONS, this will take precedence over NAME, which would override anything in a MySQL option file.

If you are just testing your application on your local machine, you can use:

    python manage.py runserver

Adding the ip:port argument allows machines other than your own to access your development application. Once you are ready to deploy your application, I recommend taking a look at the chapter on deploying Django in the djangobook.

MySQL's default character set is often not utf-8, so make sure to create your database using this SQL:

    CREATE DATABASE mydatabase CHARACTER SET utf8 COLLATE utf8_bin

If you are using Oracle's MySQL connector, your ENGINE line should look like this:

    'ENGINE': 'mysql.connector.django',

Note that you will first need to install mysql on your OS:

    brew install mysql    (macOS)

Also, the mysql client package has changed for Python 3 (MySQL-Client works only for Python 2):

    pip3 install mysqlclient
MySQL
19,189,813
211
In Linux I could find the MySQL installation directory with the command which mysql, but I could not find any equivalent in Windows. I tried echo %path% and it resulted in many paths, along with the path to the MySQL bin. I want to find the MySQL data directory from the command line in Windows for use in a batch program. I would also like to find the MySQL data directory from the Linux command line. Is it possible? Or how can we do that? In my case, the MySQL data directory is in the installation folder, i.e. ..MYSQL\mysql server 5\data. It might be installed on any drive, however. I want to get it returned from the command line.
You can issue the following query from the command line:

    mysql -uUSER -p -e 'SHOW VARIABLES WHERE Variable_Name LIKE "%dir"'

Output (on Linux):

    | Variable_name             | Value                      |
    | basedir                   | /usr                       |
    | character_sets_dir        | /usr/share/mysql/charsets/ |
    | datadir                   | /var/lib/mysql/            |
    | innodb_data_home_dir      |                            |
    | innodb_log_group_home_dir | ./                         |
    | lc_messages_dir           | /usr/share/mysql/          |
    | plugin_dir                | /usr/lib/mysql/plugin/     |
    | slave_load_tmpdir         | /tmp                       |
    | tmpdir                    | /tmp                       |

Output (on macOS Sierra):

    | Variable_name             | Value                                                     |
    | basedir                   | /usr/local/mysql-5.7.17-macos10.12-x86_64/                |
    | character_sets_dir        | /usr/local/mysql-5.7.17-macos10.12-x86_64/share/charsets/ |
    | datadir                   | /usr/local/mysql/data/                                    |
    | innodb_data_home_dir      |                                                           |
    | innodb_log_group_home_dir | ./                                                        |
    | innodb_tmpdir             |                                                           |
    | lc_messages_dir           | /usr/local/mysql-5.7.17-macos10.12-x86_64/share/          |
    | plugin_dir                | /usr/local/mysql/lib/plugin/                              |
    | slave_load_tmpdir         | /var/folders/zz/zyxvpxvq6csfxvn_n000009800002_/T/         |
    | tmpdir                    | /var/folders/zz/zyxvpxvq6csfxvn_n000009800002_/T/         |

Or if you want only the data dir, use:

    mysql -uUSER -p -e 'SHOW VARIABLES WHERE Variable_Name = "datadir"'

These commands work on Windows too, but you need to invert the single and double quotes.

By the way, when executing which mysql on Linux as you described, you will not get the installation directory; you'll only get the binary path, which is /usr/bin on Linux. As you can see, the mysql installation uses multiple folders to store files.

If you need the value of datadir as output, and only that, without column headers etc., but you don't have a GNU environment (awk|grep|sed ...), then use the following command line:

    mysql -s -N -uUSER -p information_schema -e 'SELECT Variable_Value FROM GLOBAL_VARIABLES WHERE Variable_Name = "datadir"'

The command selects the value only, from MySQL's internal information_schema database, and disables tabular output and column headers.

Output on Linux:

    /var/lib/mysql
MySQL
17,968,287
211
I have a table called "Person" with the following columns: P_Id (int), LastName (varchar), FirstName (varchar). I forgot to give a NOT NULL constraint to P_Id. Now I have tried the following queries to add a NOT NULL constraint to the existing column P_Id:

    1. ALTER TABLE Person MODIFY (P_Id NOT NULL);
    2. ALTER TABLE Person ADD CONSTRAINT NOT NULL NOT NULL (P_Id);

but I am getting a syntax error.
Just use an ALTER TABLE ... MODIFY ... query and add NOT NULL into your existing column definition. For example:

    ALTER TABLE Person MODIFY P_Id INT(11) NOT NULL;

A word of caution: you need to specify the full column definition again when using a MODIFY query. If your column has, for example, a DEFAULT value or a column comment, you need to specify it in the MODIFY statement along with the data type and the NOT NULL, or it will be lost. The safest practice to guard against such mishaps is to copy the column definition from the output of a SHOW CREATE TABLE YourTable query, modify it to include the NOT NULL constraint, and paste it into your ALTER TABLE ... MODIFY ... query.
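A sketch of that workflow, using a hypothetical column definition purely for illustration (copy whatever SHOW CREATE TABLE actually prints for your table):

    SHOW CREATE TABLE Person;

    -- suppose the output shows: `P_Id` int(11) DEFAULT '0' COMMENT 'person id'
    -- then repeat the default and the comment in the MODIFY:
    ALTER TABLE Person
      MODIFY P_Id INT(11) NOT NULL DEFAULT '0' COMMENT 'person id';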
MySQL
6,305,225
211
CREATE TABLE foo SELECT * FROM bar copies the table bar and duplicates it as a new table called foo. How can I copy the schema of bar to a new table called foo without copying over the data as well?
Try:

    CREATE TABLE foo LIKE bar;

so that the keys and indexes are copied over as well. (See the documentation for CREATE TABLE ... LIKE.)
MySQL
1,834,472
211
Recently my server CPU has been going very high. CPU load averages 13.91 (1 min) 11.72 (5 mins) 8.01 (15 mins) and my site has only had a slight increase in traffic. After running a top command, I saw MySQL was using 160% CPU! Recently I've been optimizing tables and I've switched to persistent connections. Could this be causing MySQL to use high amounts of CPU?
First, I'd say you probably want to turn off persistent connections, as they almost always do more harm than good.

Secondly, I'd say you want to double-check your MySQL users, just to make sure it's not possible for anyone to be connecting from a remote server. This is also a major security thing to check.

Thirdly, I'd say you want to turn on the MySQL slow query log to keep an eye on any queries that are taking a long time, and use that to make sure you don't have any queries locking up key tables for too long.

Some other things you can check: run the following query while the CPU load is high:

    SHOW PROCESSLIST;

This will show you any queries that are currently running or in the queue to run, what the query is and what it's doing (this command will truncate the query if it's too long; you can use SHOW FULL PROCESSLIST to see the full query text).

You'll also want to keep an eye on things like your buffer sizes, table cache, query cache and innodb_buffer_pool_size (if you're using InnoDB tables), as all of these memory allocations can have an effect on query performance, which can cause MySQL to eat up CPU.

You'll also probably want to give the following a read over, as they contain some good information:

- How MySQL Uses Memory
- MySQL System Variables

It's also a very good idea to use a profiler: something you can turn on when you want that will show you what queries your application is running, whether there are duplicate queries, how long they're taking, etc. An example of something like this is one I've been working on called PHP Profiler, but there are many out there. If you're using a piece of software like Drupal, Joomla or WordPress, you'll want to ask around within the community, as there are probably modules available that allow you to get this information without needing to manually integrate anything.
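The slow query log mentioned above can be switched on at runtime without a restart; a sketch, where the one-second threshold is an arbitrary starting point to tune for your workload:

    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second
    SHOW VARIABLES LIKE 'slow_query_log_file';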
MySQL
1,282,232
211
I am trying to post to an API with some query params. This works in Postman/Insomnia when I pass mail and firstname as query parameters:

    http://localhost:8000/api/mails/users/sendVerificationMail?mail=lol%40lol.com&firstname=myFirstName

However, when I try to do it from my React Native app, I get a 400 error (Invalid Query Parameters). This is the post method:

    .post(`/mails/users/sendVerificationMail`, { mail, firstname })
    .then(response => response.status)
    .catch(err => console.warn(err));

(My mail and firstname are console.logged as follows: [email protected] and myFirstName.) So I don't know how to pass query parameters with axios in my request (because right now it's passing data: { mail: "[email protected]", firstname: "myFirstName" }).
The axios signature for post is axios.post(url[, data[, config]]), so you want to send the params object within the third argument:

    .post(`/mails/users/sendVerificationMail`, null, { params: { mail, firstname } })
    .then(response => response.status)
    .catch(err => console.warn(err));

This will POST an empty body with the two query params:

    POST http://localhost:8000/api/mails/users/sendVerificationMail?mail=lol%40lol.com&firstname=myFirstName
MySQL
53,501,185
210
Here's how I do it:

1. Table names are lower case, use underscores to separate words, and are singular (e.g. foo, foo_bar, etc.).
2. I generally (not always) have an auto increment PK. I use the following convention: tablename_id (e.g. foo_id, foo_bar_id, etc.).
3. When a table contains a column that is a foreign key, I just copy the column name of that key from whatever table it came from. For example, say table foo_bar has the FK foo_id (where foo_id is the PK of foo).
4. When defining FKs to enforce referential integrity, I use the following: tablename_fk_columnname (e.g. furthering example 3, it would be foo_bar_foo_id). Since this is a table name/column name combination, it is guaranteed to be unique within the database.
5. I order the columns like this: PKs, FKs, then the rest of the columns alphabetically.

Is there a better, more standard way to do this?
I would say that first and foremost: be consistent. I reckon you are almost there with the conventions that you have outlined in your question. A couple of comments though:

Points 1 and 2 are good, I reckon.

Point 3: sadly this is not always possible. Think about how you would cope with a single table foo_bar that has columns foo_id and another_foo_id, both of which reference the foo table's foo_id column. You might want to consider how to deal with this. This is a bit of a corner case though!

Point 4: similar to point 3. You may want to introduce a number at the end of the foreign key name to cater for having more than one referencing column.

Point 5: I would avoid this. It gives you little and will become a headache when you want to add or remove columns from a table at a later date.

Some other points:

Index naming conventions. You may wish to introduce a naming convention for indexes; this will be a great help for any database metadata work that you might want to carry out. For example, you might just want to call an index foo_bar_idx1 or foo_idx1; totally up to you, but worth considering.

Singular vs. plural column names. It might be a good idea to address the thorny issue of plural vs. singular in your column names as well as your table name(s). This subject often causes big debates in the DB community. I would stick with singular forms for both table names and columns. There. I've said it. The main thing here is of course consistency!
MySQL
7,899,200
210
I have anywhere from one to many records that need to be entered into a table. What is the best way to do this in a query? Should I just make a loop and insert one record per iteration? Or is there a better way?
From the MySQL manual: "INSERT statements that use VALUES syntax can insert multiple rows. To do this, include multiple lists of column values, each enclosed within parentheses and separated by commas."

Example:

    INSERT INTO tbl_name (a,b,c) VALUES (1,2,3), (4,5,6), (7,8,9);
MySQL
5,526,917
210
I'm looking at MySQL procedures and functions. What is the real difference? They seem to be similar, but a function has more limitations. I'm likely wrong, but it seems a procedure can do everything and more than a function can. Why/when would I use a procedure vs a function?
The most general difference between procedures and functions is that they are invoked differently and for different purposes:

- A procedure does not return a value. Instead, it is invoked with a CALL statement to perform an operation such as modifying a table or processing retrieved records.
- A function is invoked within an expression and returns a single value directly to the caller to be used in the expression. You cannot invoke a function with a CALL statement, nor can you invoke a procedure in an expression.

Syntax for routine creation differs somewhat for procedures and functions:

- Procedure parameters can be defined as input-only, output-only, or both. This means that a procedure can pass values back to the caller by using output parameters. These values can be accessed in statements that follow the CALL statement. Functions have only input parameters. As a result, although both procedures and functions can have parameters, procedure parameter declaration differs from that for functions.
- Functions return a value, so there must be a RETURNS clause in a function definition to indicate the data type of the return value. Also, there must be at least one RETURN statement within the function body to return a value to the caller. RETURNS and RETURN do not appear in procedure definitions.

To invoke a stored procedure, use the CALL statement. To invoke a stored function, refer to it in an expression. The function returns a value during expression evaluation.

A procedure is invoked using a CALL statement, and can only pass back values using output variables. A function can be called from inside a statement just like any other function (that is, by invoking the function's name), and can return a scalar value.

Specifying a parameter as IN, OUT, or INOUT is valid only for a PROCEDURE. For a FUNCTION, parameters are always regarded as IN parameters. If no keyword is given before a parameter name, it is an IN parameter by default. Parameters for stored functions are not preceded by IN, OUT, or INOUT. All function parameters are treated as IN parameters.

To define a stored procedure or function, use CREATE PROCEDURE or CREATE FUNCTION respectively:

    CREATE PROCEDURE proc_name ([parameters])
        [characteristics]
        routine_body

    CREATE FUNCTION func_name ([parameters])
        RETURNS data_type    -- different
        [characteristics]
        routine_body

A MySQL extension for stored procedures (not functions) is that a procedure can generate a result set, or even multiple result sets, which the caller processes the same way as the result of a SELECT statement. However, the contents of such result sets cannot be used directly in an expression.

Stored routines (referring to both stored procedures and stored functions) are associated with a particular database, just like tables or views. When you drop a database, any stored routines in the database are also dropped. Stored procedures and functions do not share the same namespace. It is possible to have a procedure and a function with the same name in a database.

In stored procedures, dynamic SQL can be used, but not in functions or triggers. SQL prepared statements (PREPARE, EXECUTE, DEALLOCATE PREPARE) can be used in stored procedures, but not in stored functions or triggers. Thus, stored functions and triggers cannot use dynamic SQL (where you construct statements as strings and then execute them). (Dynamic SQL in MySQL stored routines)

Some more interesting differences between a FUNCTION and a STORED PROCEDURE (these points are copied from a blog post):
- A stored procedure is a precompiled execution plan, whereas a function is not: functions are parsed and compiled at runtime, while stored procedures are stored as pseudo-code in the database, i.e. in compiled form. (I'm not sure about this point.)
- A stored procedure has security benefits and reduces network traffic, and we can call a stored procedure from any number of applications at a time. (reference)
- Functions are normally used for computations, whereas procedures are normally used for executing business logic.
- Functions cannot affect the state of the database (statements that do explicit or implicit commit or rollback are disallowed in a function), whereas stored procedures can affect the state of the database using commit etc. (Reference: J.1. Restrictions on Stored Routines and Triggers)
- Functions can't use FLUSH statements, whereas stored procedures can.
- Stored functions cannot be recursive, whereas stored procedures can be. Note: recursive stored procedures are disabled by default, but can be enabled on the server by setting the max_sp_recursion_depth server system variable to a nonzero value. See Section 5.2.3, "System Variables", for more information.
- Within a stored function or trigger, it is not permitted to modify a table that is already being used (for reading or writing) by the statement that invoked the function or trigger. Good example: How to update the same table on deletion in MySQL?
- Note: although some restrictions normally apply to stored functions and triggers but not to stored procedures, those restrictions do apply to stored procedures if they are invoked from within a stored function or trigger. For example, although you can use FLUSH in a stored procedure, such a stored procedure cannot be called from a stored function or trigger.
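To make the invocation difference concrete, here is a minimal side-by-side sketch; the table and routine names are made up for the example:

    DELIMITER //
    CREATE FUNCTION tax(amount DECIMAL(10,2)) RETURNS DECIMAL(10,2)
    DETERMINISTIC
    RETURN amount * 0.2//

    CREATE PROCEDURE apply_discount(IN p_id INT, OUT new_price DECIMAL(10,2))
    BEGIN
        UPDATE products SET price = price * 0.9 WHERE id = p_id;
        SELECT price INTO new_price FROM products WHERE id = p_id;
    END//
    DELIMITER ;

    SELECT name, price, tax(price) FROM products;  -- function: used inside an expression
    CALL apply_discount(42, @p);                   -- procedure: invoked with CALL
    SELECT @p;                                     -- value passed back via the OUT parameter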
MySQL
3,744,209
210
It seems that I may have inadvertently loaded the password validation plugin in MySQL 5.7. This plugin seems to force all passwords to comply with certain rules. I would like to turn this off. I've tried changing the validate_password_length variable as suggested here, to no avail:

    mysql> SET GLOBAL validate_password_length=4;
    Query OK, 0 rows affected (0.00 sec)

    mysql> SET PASSWORD FOR 'app' = PASSWORD('abcd');
    ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

I would like to either unload the plugin or neuter it somehow.
Here is what I do to remove the validate password plugin:

1. Log in to the MySQL server as root:

    mysql -h localhost -u root -p

2. Run the following SQL command:

    uninstall plugin validate_password;

If the last line doesn't work (newer MySQL releases), you should execute:

    UNINSTALL COMPONENT 'file://component_validate_password';

I would not recommend this solution for a production system. I used this solution on a local MySQL instance for development purposes only.
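To verify that the plugin really is what's rejecting your password (and to see its current rules) before removing it, a quick sketch:

    SHOW VARIABLES LIKE 'validate_password%';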
MySQL
36,301,100
209
I'm trying to run WordPress on my Windows desktop, and it needs MySQL. I installed everything with the Web Platform Installer provided by Microsoft. I never set a root password for MySQL, and in the final step of installing WordPress it asks for the MySQL server password. What is the default password for root (if there is one), and how do I change it?

I tried:

    mysql -u root password '123'

But it shows me:

    Access denied for user 'root@localhost' (using password:NO)

After this I tried:

    mysql -u root -p

However, it asks for a password which I don't have.

Update: as Bozho suggested, I did the following:

1. I stopped the MySQL service from Windows services
2. Opened CMD
3. Changed the location to c:\program files\mysql\bin
4. Executed the command below:

    mysqld --defaults-file="C:\\program files\\mysql\\mysql server 5.1\\my.ini" --init-files=C:\\root.txt

5. The command ran with a warning about character set, which I mentioned below
6. I started the MySQL service from Windows services
7. I typed in the command line:

    mysql -u root -p
    EnterPassword: 123   // 123 was the password

8. The command line shows the following error:

    Access denied for user 'root@localhost' (using password:**YES**)

How do I solve this?
For this kind of error, you just have to set a new password for the root user as an admin. Follow these steps:

1. Check the symptom:

    [root ~]# mysql -u root
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

2. Stop the running MySQL service/daemon:

    [root ~]# service mysql stop
    mysql stop/waiting

3. Start MySQL without any privileges using the following option; this option boots the server without the MySQL privilege system:

    [root ~]# mysqld_safe --skip-grant-tables &

At this moment, the terminal will seem to halt. Let that be, and use a new terminal for the next steps.

4. Enter the mysql command prompt:

    [root ~]# mysql -u root
    mysql>

5. Fix the permission setting of the root user:

    mysql> use mysql;
    Database changed
    mysql> select * from user;
    Empty set (0.00 sec)
    mysql> truncate table user;
    Query OK, 0 rows affected (0.00 sec)
    mysql> flush privileges;
    Query OK, 0 rows affected (0.01 sec)
    mysql> grant all privileges on *.* to root@localhost identified by 'YourNewPassword' with grant option;
    Query OK, 0 rows affected (0.01 sec)

If you don't want any password, or rather an empty password:

    mysql> grant all privileges on *.* to root@localhost identified by '' with grant option;
    Query OK, 0 rows affected (0.01 sec)
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)

6. Confirm the results:

    mysql> select host, user from user;
    +-----------+------+
    | host      | user |
    +-----------+------+
    | localhost | root |
    +-----------+------+
    1 row in set (0.00 sec)

7. Exit the shell and restart MySQL in normal mode:

    mysql> quit;
    [root ~]# kill -KILL [PID of mysqld_safe]
    [root ~]# kill -KILL [PID of mysqld]
    [root ~]# service mysql start

8. Now you can successfully log in as the root user with the password you set:

    [root ~]# mysql -u root -pYourNewPassword
    mysql>
MySQL
2,995,054
209
I was wondering if there is a way to do this purely in SQL:

    q1 = SELECT campaign_id, from_number, received_msg, date_received
         FROM `received_txts` WHERE `campaign_id` = '8';
    INSERT INTO action_2_members (campaign_id, mobile, vote, vote_date)
    VALUES (q1.campaign_id, q1.from_number, q1.received_msg, q1.date_received);

Note: q1 would return about 30k rows. Is there any way to do what I am attempting above in straight SQL? To just pull the data straight from one table (basically a raw data table) and insert it into another table (basically a processed data table)?
INSERT INTO action_2_members (campaign_id, mobile, vote, vote_date) SELECT campaign_id, from_number, received_msg, date_received FROM `received_txts` WHERE `campaign_id` = '8'
MySQL
4,241,621
208
I'm a bit confused on how to order by date formats. For the format YYYY-MM-DD you would do this: ...ORDER BY date DESC... How would you order by DD/MM/YYYY? This isn't working: SELECT * FROM $table ORDER BY DATE_FORMAT(Date, '%Y%m%d') DESC LIMIT 14
Guessing you probably just want to format the output date? Then this is what you are after: SELECT *, DATE_FORMAT(date,'%d/%m/%Y') AS niceDate FROM table ORDER BY date DESC LIMIT 0,14 Or do you actually want to sort by Day before Month before Year?
MySQL
10,637,581
207
The 'id' field of my table auto increases when I insert a row. I want to insert a row and then get that ID. I would do it just as I said it, but is there a way I can do it without worrying about the time between inserting the row and getting the id? I know I can query the database for the row that matches the information that was entered, but there is a high chance there will be duplicates, with the only difference being the id.
$link = mysqli_connect('127.0.0.1', 'my_user', 'my_pass', 'my_db'); mysqli_query($link, "INSERT INTO mytable VALUES (1, 2, 3, 'blah')"); $id = mysqli_insert_id($link); See mysqli_insert_id(). Whatever you do, don't insert and then do a "SELECT MAX(id) FROM mytable". Like you say, it's a race condition and there's no need. mysqli_insert_id() already has this functionality. Another way would be to run both queries in one go with mysqli_multi_query() (plain mysqli_query() refuses multiple statements), using MySQL's LAST_INSERT_ID() function, where both tables get modified at once (and PHP does not need any ID), like: mysqli_multi_query($link, "INSERT INTO my_user_table ...; INSERT INTO my_other_table (`user_id`) VALUES (LAST_INSERT_ID())"); Note that each connection keeps track of ID separately (so, conflicts are prevented already).
MySQL
897,356
207
I want to install the MySQL client for the command line, not a GUI. I have searched over the web but only found instructions on installing the MySQL server.
Install MySQL Workbench, then add its bundled command-line client to your PATH: export PATH=$PATH:/Applications/MySQLWorkbench.app/Contents/MacOS
MySQL
30,990,488
206
I have two tables in MySQL. Table Person has the following columns: id name fruits The fruits column may hold null or an array of strings like ('apple', 'orange', 'banana'), or ('strawberry'), etc. The second table is Table Fruit and has the following three columns: fruit_name color price ----------- -------- ------ apple red 2 orange orange 3 So how should I design the fruits column in the first table so that it can hold an array of strings that take values from the fruit_name column in the second table? Since there is no array data type in MySQL, how should I do it?
The proper way to do this is to use multiple tables and JOIN them in your queries. For example: CREATE TABLE person ( `id` INT NOT NULL PRIMARY KEY, `name` VARCHAR(50) ); CREATE TABLE fruits ( `fruit_name` VARCHAR(20) NOT NULL PRIMARY KEY, `color` VARCHAR(20), `price` INT ); CREATE TABLE person_fruit ( `person_id` INT NOT NULL, `fruit_name` VARCHAR(20) NOT NULL, PRIMARY KEY(`person_id`, `fruit_name`) ); The person_fruit table contains one row for each fruit a person is associated with and effectively links the person and fruits tables together, i.e. 1 | "banana" 1 | "apple" 1 | "orange" 2 | "strawberry" 2 | "banana" 2 | "apple" When you want to retrieve a person and all of their fruit you can do something like this: SELECT p.*, f.* FROM person p INNER JOIN person_fruit pf ON pf.person_id = p.id INNER JOIN fruits f ON f.fruit_name = pf.fruit_name
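For completeness, here is a quick sketch of how the three tables might be populated before running that join; the sample names and prices are illustrative only:
INSERT INTO person (id, name) VALUES (1, 'John'), (2, 'Mary');
INSERT INTO fruits (fruit_name, color, price) VALUES ('apple', 'red', 2), ('banana', 'yellow', 1);
INSERT INTO person_fruit (person_id, fruit_name) VALUES (1, 'apple'), (1, 'banana'), (2, 'apple');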
MySQL
17,371,639
206
Does anyone know how to convert JS dateTime to MySQL datetime? Also is there a way to add a specific number of minutes to JS datetime and then pass it to MySQL datetime?
var date; date = new Date(); date = date.getUTCFullYear() + '-' + ('00' + (date.getUTCMonth()+1)).slice(-2) + '-' + ('00' + date.getUTCDate()).slice(-2) + ' ' + ('00' + date.getUTCHours()).slice(-2) + ':' + ('00' + date.getUTCMinutes()).slice(-2) + ':' + ('00' + date.getUTCSeconds()).slice(-2); console.log(date); or even shorter: new Date().toISOString().slice(0, 19).replace('T', ' '); Output: 2012-06-22 05:40:06 For more advanced use cases, including controlling the timezone, consider using http://momentjs.com/: require('moment')().format('YYYY-MM-DD HH:mm:ss'); For a lightweight alternative to momentjs, consider https://github.com/taylorhakes/fecha require('fecha').format('YYYY-MM-DD HH:mm:ss')
MySQL
5,129,624
206
I've searched around but didn't find if it's possible. I have this MySQL query: INSERT INTO table (id,a,b,c,d,e,f,g) VALUES (1,2,3,4,5,6,7,8) Field id has a "unique index", so there can't be two of them. Now if the same id is already present in the database, I'd like to update it. But do I really have to specify all these fields again, like: INSERT INTO table (id,a,b,c,d,e,f,g) VALUES (1,2,3,4,5,6,7,8) ON DUPLICATE KEY UPDATE a=2,b=3,c=4,d=5,e=6,f=7,g=8 Or: INSERT INTO table (id,a,b,c,d,e,f,g) VALUES (1,2,3,4,5,6,7,8) ON DUPLICATE KEY UPDATE a=VALUES(a),b=VALUES(b),c=VALUES(c),d=VALUES(d),e=VALUES(e),f=VALUES(f),g=VALUES(g) I've specified everything already in the insert... An extra note: I'd like to use the workaround to get the ID too! id=LAST_INSERT_ID(id) I hope somebody can tell me what the most efficient way is.
The UPDATE statement is given so that older fields can be updated to a new value. If your older values are the same as your new ones, why would you need to update them in any case? For example, if your columns a to g are already set as 2 to 8, there would be no need to re-update them. Alternatively, you can use: INSERT INTO table (id,a,b,c,d,e,f,g) VALUES (1,2,3,4,5,6,7,8) ON DUPLICATE KEY UPDATE a=a, b=b, c=c, d=d, e=e, f=f, g=g; To get the id from LAST_INSERT_ID, you need to specify the backend app you're using for the same. For LuaSQL, conn:getlastautoid() fetches the value.
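As a version-dependent aside: on MySQL 8.0.19 and later the VALUES() function in this position is deprecated in favour of a row alias, so an equivalent sketch (assuming that server version) would be:
INSERT INTO `table` (id,a,b,c,d,e,f,g) VALUES (1,2,3,4,5,6,7,8) AS new
ON DUPLICATE KEY UPDATE a=new.a, b=new.b, c=new.c, d=new.d, e=new.e, f=new.f, g=new.g;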
MySQL
14,383,503
205
Here is what I want to do: current table: +----+-------------+ | id | data | +----+-------------+ | 1 | max | | 2 | linda | | 3 | sam | | 4 | henry | +----+-------------+ Mystery Query ( something like "UPDATE table SET data = CONCAT(data, 'a')" ) resulting table: +----+-------------+ | id | data | +----+-------------+ | 1 | maxa | | 2 | lindaa | | 3 | sama | | 4 | henrya | +----+-------------+ That's it! I just need to do it in a single query, but can't seem to find a way. I am using MySQL on Bluehost (I think it's version 4.1). Thanks everyone.
That's pretty much all you need: mysql> select * from t; +------+-------+ | id | data | +------+-------+ | 1 | max | | 2 | linda | | 3 | sam | | 4 | henry | +------+-------+ 4 rows in set (0.02 sec) mysql> update t set data=concat(data, 'a'); Query OK, 4 rows affected (0.01 sec) Rows matched: 4 Changed: 4 Warnings: 0 mysql> select * from t; +------+--------+ | id | data | +------+--------+ | 1 | maxa | | 2 | lindaa | | 3 | sama | | 4 | henrya | +------+--------+ 4 rows in set (0.00 sec) Not sure why you'd be having trouble, though I am testing this on 5.1.41
MySQL
4,128,335
205
Can I run a select statement and get the row number if the items are sorted? I have a table like this: mysql> describe orders; +-------------+---------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------------------+------+-----+---------+----------------+ | orderID | bigint(20) unsigned | NO | PRI | NULL | auto_increment | | itemID | bigint(20) unsigned | NO | | NULL | | +-------------+---------------------+------+-----+---------+----------------+ I can then run this query to get the number of orders by ID: SELECT itemID, COUNT(*) as ordercount FROM orders GROUP BY itemID ORDER BY ordercount DESC; This gives me a count of each itemID in the table like this: +--------+------------+ | itemID | ordercount | +--------+------------+ | 388 | 3 | | 234 | 2 | | 3432 | 1 | | 693 | 1 | | 3459 | 1 | +--------+------------+ I want to get the row number as well, so I could tell that itemID=388 is the first row, 234 is second, etc (essentially the ranking of the orders, not just a raw count). I know I can do this in Java when I get the result set back, but I was wondering if there was a way to handle it purely in SQL. Update Setting the rank adds it to the result set, but not properly ordered: mysql> SET @rank=0; Query OK, 0 rows affected (0.00 sec) mysql> SELECT @rank:=@rank+1 AS rank, itemID, COUNT(*) as ordercount -> FROM orders -> GROUP BY itemID ORDER BY rank DESC; +------+--------+------------+ | rank | itemID | ordercount | +------+--------+------------+ | 5 | 3459 | 1 | | 4 | 234 | 2 | | 3 | 693 | 1 | | 2 | 3432 | 1 | | 1 | 388 | 3 | +------+--------+------------+ 5 rows in set (0.00 sec)
Take a look at this. Change your query to: SET @rank=0; SELECT @rank:=@rank+1 AS rank, itemID, COUNT(*) as ordercount FROM orders GROUP BY itemID ORDER BY ordercount DESC; SELECT @rank; The last select is your count.
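If you are on MySQL 8.0 or later (an assumption, since the question predates it), window functions make the user variable unnecessary; a sketch:
SELECT RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk, itemID, COUNT(*) AS ordercount
FROM orders
GROUP BY itemID
ORDER BY ordercount DESC;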
MySQL
2,520,357
205
After noticing an application tended to discard random emails due to incorrect string value errors, I went through and switched many text columns to use the utf8 column charset and the default column collate (utf8_general_ci) so that it would accept them. This fixed most of the errors, and made the application stop getting SQL errors when it hit non-latin emails, too. Despite this, some of the emails are still causing the program to hit incorrect string value errors: (Incorrect string value: '\xE4\xC5\xCC\xC9\xD3\xD8...' for column 'contents' at row 1) The contents column is a MEDIUMTEXT datatype which uses the utf8 column charset and the utf8_general_ci column collate. There are no flags that I can toggle in this column. Keeping in mind that I don't want to touch or even look at the application source code unless absolutely necessary: What is causing that error? (yes, I know the emails are full of random garbage, but I thought utf8 would be pretty permissive) How can I fix it? What are the likely effects of such a fix? One thing I considered was switching to a utf8 varchar([some large number]) with the binary flag turned on, but I'm rather unfamiliar with MySQL, and have no idea if such a fix makes sense.
UPDATE to the below answer: At the time the question was asked, "UTF8" in MySQL meant utf8mb3. In the meantime, utf8mb4 was added, but to my knowledge MySQL's "UTF8" was not switched to mean utf8mb4. That means you'd need to specifically put "utf8mb4" if you mean it (and you should use utf8mb4). I'll keep this here instead of just editing the answer, to make clear there is still a difference when saying "UTF8". Original I would not suggest Richie's answer, because you are screwing up the data inside the database. You would not fix your problem but try to "hide" it and not be able to perform essential database operations with the crapped data. If you encounter this error, either the data you are sending is not UTF-8 encoded, or your connection is not UTF-8. First, verify that the data source (a file, ...) really is UTF-8. Then, check your database connection; you should do this after connecting: SET NAMES 'utf8mb4'; SET CHARACTER SET utf8mb4; Next, verify that the tables where the data is stored have the utf8mb4 character set: SELECT `tables`.`TABLE_NAME`, `collations`.`character_set_name` FROM `information_schema`.`TABLES` AS `tables`, `information_schema`.`COLLATION_CHARACTER_SET_APPLICABILITY` AS `collations` WHERE `tables`.`table_schema` = DATABASE() AND `collations`.`collation_name` = `tables`.`table_collation` ; Last, check your database settings: mysql> show variables like '%colla%'; mysql> show variables like '%charac%'; If source, transport and destination are utf8mb4, your problem is gone ;)
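If the checks above show a table still on a 3-byte encoding, a sketch of the conversion (my_table is a placeholder; the statement rewrites the table, so test on a copy first):
ALTER TABLE my_table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;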
MySQL
1,168,036
205
I would like to know how can I output a number with 2 decimal places, without rounding the original number. For example: 2229,999 -> 2229,99 I already tried: FORMAT(2229.999, 2) CONVERT(2229.999, DECIMAL(4,2))
When formatting number to 2 decimal places you have two options TRUNCATE and ROUND. You are looking for TRUNCATE function. Examples: Without rounding: TRUNCATE(0.166, 2) -- will be evaluated to 0.16 TRUNCATE(0.164, 2) -- will be evaluated to 0.16 docs: https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_truncate With rounding: ROUND(0.166, 2) -- will be evaluated to 0.17 ROUND(0.164, 2) -- will be evaluated to 0.16 docs: https://dev.mysql.com/doc/refman/8.0/en/mathematical-functions.html#function_round
MySQL
11,190,668
204
In short: Is there any way to sort the values in a GROUP_CONCAT statement? Query: GROUP_CONCAT((SELECT GROUP_CONCAT(parent.name SEPARATOR " &raquo; ") FROM test_competence AS node, test_competence AS parent WHERE node.lft BETWEEN parent.lft AND parent.rgt AND node.id = l.competence AND parent.id != 1 ORDER BY parent.lft) SEPARATOR "<br />\n") AS competences I get this row: Crafts » Joinery Administration » Organization I want it like this: Administration » Organization Crafts » Joinery
Sure, see http://dev.mysql.com/doc/refman/...tions.html#function_group-concat: SELECT student_name, GROUP_CONCAT(DISTINCT test_score ORDER BY test_score DESC SEPARATOR ' ') FROM student GROUP BY student_name;
MySQL
995,373
204
I have table - config. Schema: config_name | config_value And I would like to update multiple records in one query. I try like that: UPDATE config SET t1.config_value = 'value' , t2.config_value = 'value2' WHERE t1.config_name = 'name1' AND t2.config_name = 'name2'; but that query is wrong :( Can you help me?
Try either multi-table update syntax UPDATE config t1 JOIN config t2 ON t1.config_name = 'name1' AND t2.config_name = 'name2' SET t1.config_value = 'value', t2.config_value = 'value2'; Here is a SQLFiddle demo or conditional update UPDATE config SET config_value = CASE config_name WHEN 'name1' THEN 'value' WHEN 'name2' THEN 'value2' ELSE config_value END WHERE config_name IN('name1', 'name2'); Here is a SQLFiddle demo
MySQL
20,255,138
203
I am storing the last login time in MySQL in, datetime-type filed. When users logs in, I want to get the difference between the last login time and the current time (which I get using NOW()). How can I calculate it?
Use the TIMESTAMPDIFF MySQL function. For example, you can use: SELECT TIMESTAMPDIFF(SECOND, '2012-06-06 13:13:55', '2012-06-06 15:20:18') In your case, the third parameter of the TIMESTAMPDIFF function would be the current login time (NOW()). The second parameter would be the last login time, which is already in the database.
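Applied to the question's scenario, a sketch assuming a users table with a last_login column:
SELECT TIMESTAMPDIFF(SECOND, last_login, NOW()) AS seconds_since_login FROM users WHERE id = 42;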
MySQL
10,907,750
203
I have a database called nitm. I haven't created any tables there. But I have a SQL file which contains all the necessary data for the database. The file is nitm.sql which is in C:\ drive. This file has size of about 103 MB. I am using wamp server. I have used the following syntax in MySQL console to import the file: mysql>c:/nitm.sql; But this didn't work.
From the mysql console: mysql> use DATABASE_NAME; mysql> source path/to/file.sql; Make sure there is no slash before the path if you are referring to a relative path... it took me a while to realize that! lol
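Alternatively, a sketch of importing from the operating system shell without opening the console (assuming the database is named nitm as in the question):
mysql -u root -p nitm < c:/nitm.sql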
MySQL
5,152,921
203
I am planning to do a class project and was going through a few technologies where I can automate or set the flow of data between systems, and found that there are a couple of them, i.e. Apache NiFi and StreamSets (to my knowledge). What I couldn't understand is the difference between them and the use-cases where they can be used. I am new to this and if anyone could explain it a bit, that would be highly appreciated. Thanks
Suraj, Great question. My response is as a member of the open source Apache NiFi project management committee and as someone who is passionate about the dataflow management domain. I've been involved in the NiFi project since it was started in 2006. My knowledge of Streamsets is relatively limited so I'll let them speak for it as they have. The key thing to understand is that NiFi was built to do one really important thing really well and that is 'Dataflow Management'. Its design is based on a concept called Flow Based Programming which you may want to read about and reference for your project 'https://en.wikipedia.org/wiki/Flow-based_programming' There are already many systems which produce data such as sensors and others. There are many systems which focus on data processing like Apache Storm, Spark, Flink, and others. And finally there are many systems which store data like HDFS, relational databases, and so on. NiFi purely focuses on the task of connecting those systems and providing the user experience and core functions necessary to do that well. What are some of those key functions and design choices made to make that effective: 1) Interactive command and control The job of someone trying to connect systems is to be able to rapidly and efficiently interact with the constant streams of data they see. NiFi's UI allows you to do just that: as the data is flowing you can add features to operate on it, fork off copies of data to try new approaches, adjust current settings, see recent and historical stats, helpful in-line documentation and more. Almost all other systems by comparison have a model that is design and deploy oriented, meaning you make a series of changes and then deploy them. That model is fine and can be intuitive but for the dataflow management job it means you don't get the interactive change-by-change feedback that is so vital to quickly build new flows or to safely and efficiently correct or improve handling of existing data streams. 2) Data Provenance A very unique capability of NiFi is its ability to generate fine grained and powerful traceability details for where your data comes from, what is done to it, where it's sent and when it is done in the flow. This is essential to effective dataflow management for a number of reasons but for someone in the early exploration phases and working on a project the most important thing this gives you is awesome debugging flexibility. You can set up your flows and let things run and then use provenance to actually prove that it did exactly what you wanted. If something didn't happen as you expected you can fix the flow and replay the object then repeat. Really helpful. 3) Purpose built data repositories NiFi's out of the box experience offers very powerful performance even on really modest hardware or virtual environments. This is because of the flowfile and content repository design which gives us the high performance but transactional semantics we want as data works its way through the flow. The flowfile repository is a simple write ahead log implementation and the content repository provides an immutable versioned content store. That in turn means we can 'copy' data by only ever adding a new pointer (not actually copying bytes) or we can transform data by simply reading from the original and writing out a new version. Again very efficient. Couple that with the provenance stuff I mentioned a moment ago and it just provides a really powerful platform.
Another really key thing to understand here is that in the business of connecting systems you don't always get to dictate things like the size of data involved. The NiFi API was built to honor that fact and so our API lets processors do things like receive, transform, and send data without ever having to load the full objects in memory. These repositories also mean that in most flows the majority of processors do not even touch the content at all. However, you can easily see from the NiFi UI precisely how many bytes are actually being read or written, so again you get really helpful information in establishing and observing your flows. This design also means NiFi can support back-pressure and pressure-release naturally and these are really critical features for a dataflow management system. It was mentioned previously by the folks from the Streamsets company that NiFi is file oriented. I'm not really sure what the difference is between a file or a record or a tuple or an object or a message in generic terms but the reality is when data is in the flow then it is 'a thing that needs to be managed and delivered'. That is what NiFi does. Whether you have lots of really high speed tiny things or you have large things and whether they came from a live audio stream off the Internet or they come from a file sitting on your hard drive it doesn't matter. Once it is in the flow it is time to manage and deliver it. That is what NiFi does. It was also mentioned by the Streamsets company that NiFi is schemaless. It is accurate that NiFi does not force conversion of data from whatever it is originally to some special NiFi format nor do we have to reconvert it back to some format for follow-on delivery. It would be pretty unfortunate if we did that because what this means is that even the most trivial of cases would have problematic performance implications and luckily NiFi does not have that problem. Further, had we gone that route then it would mean handling diverse datasets like media (images, video, audio, and more) would be difficult but we're on the right track and NiFi is used for things like that all the time. Finally, as you continue with your project and if you find there are things you'd like to see improved or that you'd like to contribute code we'd love to have your help. From https://nifi.apache.org you can quickly find information on how to file tickets, submit patches, email the mailing list, and more. Here are a couple of fun recent NiFi projects to check out: https://www.linkedin.com/pulse/nifi-ocr-using-apache-read-childrens-books-jeremy-dyer https://twitter.com/KayLerch/status/721455415456882689 Good luck on the class project! If you have any questions the [email protected] mailing list would love to help. Thanks Joe
StreamSets
36,899,612
43
I was reading articles related to Kafka and StreamSets and my understanding was that Kafka acts as a broker between a producer system and a subscriber. Producers push data into a Kafka cluster, and subscribers pull data from Kafka. StreamSets is a technology to move data from one source to another through a pipeline. Now, below are my questions; please help clarify: What is the fundamental difference between Kafka and StreamSets? Is it that Kafka doesn't move data but StreamSets moves the data? If Kafka doesn't move the data, what is Kafka used for? If it moves data like ETL solutions, how is it different from SSIS, Informatica etc.? How is StreamSets different from SSIS, Informatica etc.?
StreamSets is a graphical tool that contains components that allow for data movement, which happen to include Kafka producers and consumers, but you're not required to use them. They're complementary, and by using Kafka, you can allow for back-pressure in streaming systems or have non-StreamSets producers/consumers interacting with other Kafka topics. No, Kafka doesn't move the data (except for internal replication), the clients that interact with the brokers do. I've not used Informatica or SSIS, but I'm sure if you contacted someone at StreamSets, they could answer how they compare
StreamSets
56,416,005
10
DELETE B.* FROM m_productprice B INNER JOIN m_product C ON B.m_product_id = C.m_product_id WHERE C.upc = '7094' AND B.m_pricelist_version_id = '1000020' I am getting the following error on PostgreSQL 8.2.11: ERROR: syntax error at or near "B" LINE 1: DELETE B.* from m_productprice B INNER JOIN m_product C ON ... I tried giving DELETE B from m_productprice B INNER JOIN m_product C ON B.... ERROR: syntax error at or near "B" I tried giving ERROR: syntax error at or near "INNER" LINE 1: DELETE from m_productprice B INNER JOIN m_product C ON B.m_... What is the problem with my query?
DELETE FROM m_productprice B USING m_product C WHERE B.m_product_id = C.m_product_id AND C.upc = '7094' AND B.m_pricelist_version_id='1000020'; or DELETE FROM m_productprice WHERE m_pricelist_version_id='1000020' AND m_product_id IN (SELECT m_product_id FROM m_product WHERE upc = '7094');
PostgreSQL
11,753,904
320
In MS SQL-Server, I can do: SELECT ISNULL(Field,'Empty') from Table But in PostgreSQL I get a syntax error. How do I emulate the ISNULL() functionality?
SELECT CASE WHEN field IS NULL THEN 'Empty' ELSE field END AS field_alias Or more idiomatic: SELECT coalesce(field, 'Empty') AS field_alias
PostgreSQL
2,214,525
318
Where can I find a detailed manual about PostgreSQL naming conventions? (table names vs. camel case, sequences, primary keys, constraints, indexes, etc...)
Regarding tables names, case, etc, the prevalent convention is: SQL keywords: UPPER CASE identifiers (names of databases, tables, columns, etc): lower_case_with_underscores For example: UPDATE my_table SET name = 5; This is not written in stone, but the bit about identifiers in lower case is highly recommended, IMO. Postgresql treats identifiers case insensitively when not quoted (it actually folds them to lowercase internally), and case sensitively when quoted; many people are not aware of this idiosyncrasy. Using always lowercase you are safe. Anyway, it's acceptable to use camelCase or PascalCase (or UPPER_CASE), as long as you are consistent: either quote identifiers always or never (and this includes the schema creation!). I am not aware of many more conventions or style guides. Surrogate keys are normally made from a sequence (usually with the serial macro), it would be convenient to stick to that naming for those sequences if you create them by hand (tablename_colname_seq). See also some discussion here, here and (for general SQL) here, all with several related links. Note: Postgresql 10 introduced identity columns as an SQL-compliant replacement for serial.
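A minimal demonstration of the case-folding idiosyncrasy described above (safe to try in a scratch database):
CREATE TABLE "MyTable" (id int);
SELECT * FROM MyTable;   -- ERROR: relation "mytable" does not exist (unquoted name folded to lower case)
SELECT * FROM "MyTable"; -- works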
PostgreSQL
2,878,248
316
Whenever I try to drop database I get the following error: ERROR: database "pilot" is being accessed by other users DETAIL: There is 1 other session using the database. When I use: SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'TARGET_DB'; I terminated the connection from that DB, but if I try to drop database after that somehow someone automatically connects to that database and gives this error. What could be doing that? No one uses this database, except me.
Postgres 13+ Use WITH (force) See https://stackoverflow.com/a/68982312/398670 instead Postgres 12 and older You can prevent future connections with: REVOKE CONNECT ON DATABASE thedb FROM public; (and possibly other users/roles; see \l+ in psql) You can then terminate all connections to this db except your own: SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = current_database() AND pid <> pg_backend_pid(); On older versions pid was called procpid so you'll have to deal with that. Since you've revoked CONNECT rights, whatever was trying to auto-connect should no longer be able to do so. You'll now be able to drop the DB. This won't work if you're using superuser connections for normal operations, but if you're doing that you need to fix that problem first. After you're done dropping the database, if you create the database again, you can execute below command to restore the access GRANT CONNECT ON DATABASE thedb TO public;
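For reference, the Postgres 13+ form linked above boils down to a single statement:
DROP DATABASE thedb WITH (FORCE);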
PostgreSQL
17,449,420
313
Upon restarting my Mac I got the dreaded Postgres error: psql: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? The reason this happened is because my macbook froze completely due to an unrelated issue and I had to do a hard reboot using the power button. After rebooting I couldn't start Postgres because of this error.
WARNING: If you delete postmaster.pid without making sure there are really no postgres processes running, you could permanently corrupt your database. (PostgreSQL should delete it automatically if the postmaster has exited.) SOLUTION: This fixed the issue: I deleted this file, and then everything worked! /usr/local/var/postgres/postmaster.pid -- and here is how I figured out why this needed to be deleted. I used the following command to see if there were any PG processes running. For me there were none; I couldn't even start the PG server: ps auxw | grep post I searched for the file .s.PGSQL.5432 that was in the error message above. I used the following command: sudo find / -name .s.PGSQL.5432 -ls This didn't show anything after searching my whole computer, so the file didn't exist, but obviously psql "wanted it to" or "thought it was there". I took a look at my server logs and saw the following error: cat /usr/local/var/postgres/server.log At the end of the server log I see the following error: FATAL: pre-existing shared memory block (key 5432001, ID 65538) is still in use HINT: If you're sure there are no old server processes still running, remove the shared memory block or just delete the file "postmaster.pid". Following the advice in the error message, I deleted the postmaster.pid file in the same directory as server.log. This resolved the issue and I was able to restart. So, it seems that my MacBook freezing and being hard-rebooted caused Postgres to think that its processes were still running even after reboot. Deleting this file resolved it. Lots of people have similar issues but most of the answers had to do with file permissions, whereas in my case things were different.
PostgreSQL
13,573,204
311
Using Postgres 9.0, I need a way to test if a value exists in a given array. So far I came up with something like this: select '{1,2,3}'::int[] @> (ARRAY[]::int[] || value_variable::int) But I keep thinking there should be a simpler way to this, I just can't see it. This seems better: select '{1,2,3}'::int[] @> ARRAY[value_variable::int] I believe it will suffice. But if you have other ways to do it, please share!
Simpler with the ANY construct: SELECT value_variable = ANY ('{1,2,3}'::int[]) The right operand of ANY (between parentheses) can either be a set (result of a subquery, for instance) or an array. There are several ways to use it: SQLAlchemy: how to filter on PgArray column types? IN vs ANY operator in PostgreSQL Important difference: Array operators (<@, @>, && et al.) expect array types as operands and support GIN or GiST indices in the standard distribution of PostgreSQL, while the ANY construct expects an element type as left operand and can be supported with a plain B-tree index (with the indexed expression to the left of the operator, not the other way round like it seems to be in your example). Example: Index for finding an element in a JSON array None of this works for NULL elements. To test for NULL: Check if NULL exists in Postgres array
PostgreSQL
11,231,544
309
I'm sure this is a duplicate question in the sense that the answer is out there somewhere, but I haven't been able to find the answer after Googling for 10 minutes, so I'd appeal to the editors not to close it on the basis that it might well be useful for other people. I'm using Postgres 9.5. This is my table: Column │ Type │ Modifiers ────────────────────────┼───────────────────────────┼───────────────────────────────────────────────────────────────────────── id │ integer │ not null default nextval('mytable_id_seq'::regclass) pmid │ character varying(200) │ pub_types │ character varying(2000)[] │ not null I want to find all the rows with "Journal" in pub_types. I've found the docs and googled and this is what I've tried: select * from mytable where ("Journal") IN pub_types; select * from mytable where "Journal" IN pub_types; select * from mytable where pub_types=ANY("Journal"); select * from mytable where pub_types IN ("Journal"); select * from mytable where where pub_types contains "Journal"; I've scanned the postgres array docs but can't see a simple example of how to run a query, and StackOverflow questions all seem to be based around more complicated examples.
This should work: select * from mytable where 'Journal'=ANY(pub_types); i.e. the syntax is <value> = ANY ( <array> ). Also notice that string literals in PostgreSQL are written with single quotes.
PostgreSQL
39,643,454
308
I have a database schema named: nyummy and a table named cimory: create table nyummy.cimory ( id numeric(10,0) not null, name character varying(60) not null, city character varying(50) not null, CONSTRAINT cimory_pkey PRIMARY KEY (id) ); I want to export the cimory table's data as insert SQL script file. However, I only want to export records/data where the city is equal to 'tokyo' (assume city data are all lowercase). How to do it? It doesn't matter whether the solution is in freeware GUI tools or command line (although GUI tools solution is better). I had tried pgAdmin III, but I can't find an option to do this.
Create a table with the set you want to export and then use the command line utility pg_dump to export to a file: create table export_table as select id, name, city from nyummy.cimory where city = 'tokyo' $ pg_dump --table=export_table --data-only --column-inserts my_database > data.sql --column-inserts will dump as insert commands with column names. --data-only do not dump schema. As commented below, creating a view in instead of a table will obviate the table creation whenever a new export is necessary.
PostgreSQL
12,815,496
301
I have a bunch of rows that I need to insert into a table, but these inserts are always done in batches. So I want to check if a single row from the batch exists in the table, because then I know they all were inserted. So it's not a primary key check, but that shouldn't matter too much. I would like to only check a single row, so count(*) probably isn't good; it's something like exists, I guess. But since I'm fairly new to PostgreSQL I'd rather ask people who know. My batch contains rows with the following structure: userid | rightid | remaining_count So if the table contains any rows with the provided userid, it means they all are present there.
Use the EXISTS keyword for TRUE / FALSE return: SELECT EXISTS(SELECT 1 FROM contact WHERE id=12)
PostgreSQL
7,471,625
301
I have a table with existing data. Is there a way to add a primary key without deleting and re-creating the table?
(Updated - Thanks to the people who commented) Modern Versions of PostgreSQL Suppose you have a table named test1, to which you want to add an auto-incrementing, primary-key id (surrogate) column. The following command should be sufficient in recent versions of PostgreSQL: ALTER TABLE test1 ADD COLUMN id SERIAL PRIMARY KEY; Older Versions of PostgreSQL In old versions of PostgreSQL (prior to 8.x?) you had to do all the dirty work. The following sequence of commands should do the trick: ALTER TABLE test1 ADD COLUMN id INTEGER; CREATE SEQUENCE test_id_seq OWNED BY test1.id; ALTER TABLE test1 ALTER COLUMN id SET DEFAULT nextval('test_id_seq'); UPDATE test1 SET id = nextval('test_id_seq'); Again, in recent versions of Postgres this is roughly equivalent to the single command above.
PostgreSQL
2,944,499
301
I either forgot or mistyped (during the installation) the password to the default user of PostgreSQL. I can't seem to be able to run it, and I get the following error: psql: FATAL: password authentication failed for user "hisham" hisham-agil: hisham$ psql Is there a way to reset the password or how do I create a new user with superuser privileges? I am new to PostgreSQL and just installed it for the first time. I am trying to use it with Ruby on Rails and I am running Mac OS X v10.7 (Lion).
Find the file pg_hba.conf. It may be located, for example, in /etc/postgresql-9.1/pg_hba.conf. cd /etc/postgresql-9.1/ Back it up: cp pg_hba.conf pg_hba.conf-backup Place the following line (as either the first uncommented line, or as the only one). For every occurrence below (both local and host), except the replication section if you have one, it has to be changed as follows; no MD5 or peer authentication should be present. local all all trust Restart your PostgreSQL server (e.g., on Linux): sudo /etc/init.d/postgresql restart If the service (daemon) doesn't start, reporting in the log file: local connections are not supported by this build you should change local all all trust to host all all 127.0.0.1/32 trust You can now connect as any user. Connect as the superuser postgres (note, the superuser name may be different in your installation; in some systems it is called pgsql, for example): psql -U postgres or psql -h 127.0.0.1 -U postgres (note that with the first command you will not always be connected to localhost) Reset the password (replace my_user_name with postgres since you are resetting the postgres user): ALTER USER my_user_name with password 'my_secure_password'; Restore the old pg_hba.conf file, as the permissive one is very dangerous to keep around: cp pg_hba.conf-backup pg_hba.conf Restart the server, in order to run with the safe pg_hba.conf file: sudo /etc/init.d/postgresql restart Further reading about that pg_hba file: 19.1. The pg_hba.conf File (official documentation)
PostgreSQL
10,845,998
300
I've recently been playing around with Docker and QGIS and have installed a container following the instructions in this tutorial. Everything works great, although I am unable to connect to a localhost postgres database that contains all my GIS data. I figure this is because my postgres database is not configured to accept remote connections and have been editing the postgres conf files to allow remote connections using the instructions in this article. I'm still getting an error message when I try and connect to my database running QGIS in Docker: could not connect to server: Connection refused Is the server running on host "localhost" (::1) and accepting TCP/IP connections to port 5433? The postgres server is running, and I've edited my pg_hba.conf file to allow connections from a range of IP addresses (172.17.0.0/32). I had previously queried the IP address of the docker container using docker ps and although the IP address changes, it has so far always been in the range 172.17.0.x Any ideas why I can't connect to this database? Probably something very simple I imagine! I'm running Ubuntu 14.04; Postgres 9.3
TL;DR Use 172.17.0.0/16 as IP address range, not 172.17.0.0/32. Don't use localhost to connect to the PostgreSQL database on your host, but the host's IP instead. To keep the container portable, start the container with the --add-host=database:<host-ip> flag and use database as hostname for connecting to PostgreSQL. Make sure PostgreSQL is configured to listen for connections on all IP addresses, not just on localhost. Look for the setting listen_addresses in PostgreSQL's configuration file, typically found in /etc/postgresql/9.3/main/postgresql.conf (credits to @DazmoNorton). Long version 172.17.0.0/32 is not a range of IP addresses, but a single address (namely 172.17.0.0). No Docker container will ever get that address assigned, because it's the network address of the Docker bridge (docker0) interface. When Docker starts, it will create a new bridge network interface, which you can easily see when calling ip a: $ ip a ... 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff inet 172.17.42.1/16 scope global docker0 valid_lft forever preferred_lft forever As you can see, in my case, the docker0 interface has the IP address 172.17.42.1 with a netmask of /16 (or 255.255.0.0). This means that the network address is 172.17.0.0/16. The IP address is randomly assigned, but without any additional configuration, it will always be in the 172.17.0.0/16 network. For each Docker container, a random address from that range will be assigned. This means, if you want to grant access from all possible containers to your database, use 172.17.0.0/16.
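A sketch of the --add-host approach from the summary; the host IP and image name are placeholders you would substitute:
docker run --add-host=database:172.17.42.1 -it my_qgis_image
psql -h database -p 5432 -U postgres   # run inside the container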
PostgreSQL
31,249,112
299
I am testing Postgres insertion performance. I have a table with one column with number as its data type. There is an index on it as well. I filled the database up using this query: insert into aNumber (id) values (564),(43536),(34560) ... I inserted 4 million rows very quickly 10,000 at a time with the query above. After the database reached 6 million rows performance drastically declined to 1 Million rows every 15 min. Is there any trick to increase insertion performance? I need optimal insertion performance on this project. Using Windows 7 Pro on a machine with 5 GB RAM.
See populate a database in the PostgreSQL manual, depesz's excellent-as-usual article on the topic, and this SO question. (Note that this answer is about bulk-loading data into an existing DB or to create a new one. If you're interested DB restore performance with pg_restore or psql execution of pg_dump output, much of this doesn't apply since pg_dump and pg_restore already do things like creating triggers and indexes after it finishes a schema+data restore). There's lots to be done. The ideal solution would be to import into an UNLOGGED table without indexes, then change it to logged and add the indexes. Unfortunately in PostgreSQL 9.4 there's no support for changing tables from UNLOGGED to logged. 9.5 adds ALTER TABLE ... SET LOGGED to permit you to do this. If you can take your database offline for the bulk import, use pg_bulkload. Otherwise: Disable any triggers on the table Drop indexes before starting the import, re-create them afterwards. (It takes much less time to build an index in one pass than it does to add the same data to it progressively, and the resulting index is much more compact). If doing the import within a single transaction, it's safe to drop foreign key constraints, do the import, and re-create the constraints before committing. Do not do this if the import is split across multiple transactions as you might introduce invalid data. If possible, use COPY instead of INSERTs If you can't use COPY consider using multi-valued INSERTs if practical. You seem to be doing this already. Don't try to list too many values in a single VALUES though; those values have to fit in memory a couple of times over, so keep it to a few hundred per statement. Batch your inserts into explicit transactions, doing hundreds of thousands or millions of inserts per transaction. There's no practical limit AFAIK, but batching will let you recover from an error by marking the start of each batch in your input data. Again, you seem to be doing this already. Use synchronous_commit=off and a huge commit_delay to reduce fsync() costs. This won't help much if you've batched your work into big transactions, though. INSERT or COPY in parallel from several connections. How many depends on your hardware's disk subsystem; as a rule of thumb, you want one connection per physical hard drive if using direct attached storage. Set a high max_wal_size value (checkpoint_segments in older versions) and enable log_checkpoints. Look at the PostgreSQL logs and make sure it's not complaining about checkpoints occurring too frequently. If and only if you don't mind losing your entire PostgreSQL cluster (your database and any others on the same cluster) to catastrophic corruption if the system crashes during the import, you can stop Pg, set fsync=off, start Pg, do your import, then (vitally) stop Pg and set fsync=on again. See WAL configuration. Do not do this if there is already any data you care about in any database on your PostgreSQL install. If you set fsync=off you can also set full_page_writes=off; again, just remember to turn it back on after your import to prevent database corruption and data loss. See non-durable settings in the Pg manual. You should also look at tuning your system: Use good quality SSDs for storage as much as possible. Good SSDs with reliable, power-protected write-back caches make commit rates incredibly faster. They're less beneficial when you follow the advice above - which reduces disk flushes / number of fsync()s - but can still be a big help. 
Do not use cheap SSDs without proper power-failure protection unless you don't care about keeping your data. If you're using RAID 5 or RAID 6 for direct attached storage, stop now. Back your data up, restructure your RAID array to RAID 10, and try again. RAID 5/6 are hopeless for bulk write performance - though a good RAID controller with a big cache can help. If you have the option of using a hardware RAID controller with a big battery-backed write-back cache this can really improve write performance for workloads with lots of commits. It doesn't help as much if you're using async commit with a commit_delay or if you're doing fewer big transactions during bulk loading. If possible, store WAL (pg_wal, or pg_xlog in old versions) on a separate disk / disk array. There's little point in using a separate filesystem on the same disk. People often choose to use a RAID1 pair for WAL. Again, this has more effect on systems with high commit rates, and it has little effect if you're using an unlogged table as the data load target. You may also be interested in Optimise PostgreSQL for fast testing.
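To make the COPY advice concrete, a minimal sketch for the single-column table in the question (the file path is a placeholder and must be readable by the server process; use psql's \copy for a client-side file):
COPY aNumber (id) FROM '/path/to/ids.csv' WITH (FORMAT csv);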
PostgreSQL
12,206,600
299
I have a table in PostgreSQL where the schema looks like this: CREATE TABLE "foo_table" ( "id" serial NOT NULL PRIMARY KEY, "permalink" varchar(200) NOT NULL, "text" varchar(512) NOT NULL, "timestamp" timestamp with time zone NOT NULL ) Now I want to make the permalink unique across the table by ALTER-ing the table.
I figured it out from the PostgreSQL docs, the exact syntax is: ALTER TABLE the_table ADD CONSTRAINT constraint_name UNIQUE (thecolumn); Thanks Fred.
PostgreSQL
469,471
298
Entering the following command into a PostgreSQL interactive terminal results in an error: ALTER TABLE tbl_name ALTER COLUMN col_name varchar (11); What is the correct command to alter the data type of a column?
See documentation here: http://www.postgresql.org/docs/current/interactive/sql-altertable.html ALTER TABLE tbl_name ALTER COLUMN col_name TYPE varchar (11);
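If the existing values cannot be cast implicitly, you may also need a USING clause telling Postgres how to convert them; a sketch:
ALTER TABLE tbl_name ALTER COLUMN col_name TYPE integer USING col_name::integer;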
PostgreSQL
7,162,903
297
In postgresql, how do I replace all instances of a string within a database column? Say I want to replace all instances of cat with dog, for example. What's the best way to do this?
You want to use PostgreSQL's replace function: replace(string text, from text, to text) for instance: UPDATE <table> SET <field> = replace(<field>, 'cat', 'dog') Be aware, though, that this will be a string-to-string replacement, so 'category' will become 'dogegory'. The regexp_replace function may help you define a stricter match pattern for what you want to replace.
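A sketch of that stricter variant, using Postgres word-boundary escapes (\m, \M) and the 'g' flag so every whole-word occurrence is replaced:
UPDATE <table> SET <field> = regexp_replace(<field>, '\mcat\M', 'dog', 'g');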
PostgreSQL
5,060,526
296
What is the best way to list all of the tables within PostgreSQL's information_schema? To clarify: I am working with an empty DB (I have not added any of my own tables), but I want to see every table in the information_schema structure.
You should be able to just run select * from information_schema.tables to get a listing of every table being managed by Postgres for a particular database. You can also add a where table_schema = 'information_schema' to see just the tables in the information schema.
PostgreSQL
2,276,644
292
Is there a way to create a backup of a single table within a database using postgres? And how? Does this also work with the pg_dump command?
Use --table to tell pg_dump what table it has to backup: pg_dump --host localhost --port 5432 --username postgres --format plain --verbose --file "<abstract_file_path>" --table public.tablename dbname
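Restoring that single-table dump is then just a matter of feeding the file back to psql (a sketch, using the same connection options as above):
psql --host localhost --port 5432 --username postgres dbname < "<abstract_file_path>"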
PostgreSQL
3,682,866
288
How do I create crosstab queries in PostgreSQL? For example I have the following table: Section Status Count A Active 1 A Inactive 2 B Active 4 B Inactive 5 I would like the query to return the following crosstab: Section Active Inactive A 1 2 B 4 5
Install the additional module tablefunc once per database, which provides the function crosstab(). Since Postgres 9.1 you can use CREATE EXTENSION for that: CREATE EXTENSION IF NOT EXISTS tablefunc; Improved test case CREATE TABLE tbl ( section text , status text , ct integer -- "count" is a reserved word in standard SQL ); INSERT INTO tbl VALUES ('A', 'Active', 1), ('A', 'Inactive', 2) , ('B', 'Active', 4), ('B', 'Inactive', 5) , ('C', 'Inactive', 7); -- ('C', 'Active') is missing Simple form - not fit for missing attributes crosstab(text) with 1 input parameter: SELECT * FROM crosstab( 'SELECT section, status, ct FROM tbl ORDER BY 1,2' -- needs to be "ORDER BY 1,2" here ) AS ct ("Section" text, "Active" int, "Inactive" int); Returns: Section | Active | Inactive ---------+--------+---------- A | 1 | 2 B | 4 | 5 C | 7 | -- !! No need for casting and renaming. Note the incorrect result for C: the value 7 is filled in for the first column. Sometimes, this behavior is desirable, but not for this use case. The simple form is also limited to exactly three columns in the provided input query: row_name, category, value. There is no room for extra columns like in the 2-parameter alternative below. Safe form crosstab(text, text) with 2 input parameters: SELECT * FROM crosstab( 'SELECT section, status, ct FROM tbl ORDER BY 1,2' -- could also just be "ORDER BY 1" here , $$VALUES ('Active'::text), ('Inactive')$$ ) AS ct ("Section" text, "Active" int, "Inactive" int); Returns: Section | Active | Inactive ---------+--------+---------- A | 1 | 2 B | 4 | 5 C | | 7 -- !! Note the correct result for C. The second parameter can be any query that returns one row per attribute matching the order of the column definition at the end. Often you will want to query distinct attributes from the underlying table like this: 'SELECT DISTINCT attribute FROM tbl ORDER BY 1' That's in the manual. Since you have to spell out all columns in a column definition list anyway (except for pre-defined crosstabN() variants), it is typically more efficient to provide a short list in a VALUES expression like demonstrated: $$VALUES ('Active'::text), ('Inactive')$$) Or (not in the manual): $$SELECT unnest('{Active,Inactive}'::text[])$$ -- short syntax for long lists I used dollar quoting to make quoting easier. You can even output columns with different data types with crosstab(text, text) - as long as the text representation of the value column is valid input for the target type. This way you might have attributes of different kind and output text, date, numeric etc. for respective attributes. There is a code example at the end of the chapter crosstab(text, text) in the manual. db<>fiddle here Effect of excess input rows Excess input rows are handled differently - duplicate rows for the same ("row_name", "category") combination - (section, status) in the above example. The 1-parameter form fills in available value columns from left to right. Excess values are discarded. Earlier input rows win. The 2-parameter form assigns each input value to its dedicated column, overwriting any previous assignment. Later input rows win. Typically, you don't have duplicates to begin with. But if you do, carefully adjust the sort order to your requirements - and document what's happening. Or get fast arbitrary results if you don't care. Just be aware of the effect. 
Advanced examples Pivot on Multiple Columns using Tablefunc - also demonstrating mentioned "extra columns" Dynamic alternative to pivot with CASE and GROUP BY \crosstabview in psql Postgres 9.6 added this meta-command to its default interactive terminal psql. You can run the query you would use as first crosstab() parameter and feed it to \crosstabview (immediately or in the next step). Like: db=> SELECT section, status, ct FROM tbl \crosstabview Similar result as above, but it's a representation feature on the client side exclusively. Input rows are treated slightly differently, hence ORDER BY is not required. Details for \crosstabview in the manual. There are more code examples at the bottom of that page. Related answer on dba.SE by Daniel Vérité (the author of the psql feature): How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
PostgreSQL
3,002,499
282
I would like to get the columns that an index is on in PostgreSQL. In MySQL you can use SHOW INDEXES FOR table and look at the Column_name column. mysql> show indexes from foos; +-------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | +-------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ | foos | 0 | PRIMARY | 1 | id | A | 19710 | NULL | NULL | | BTREE | | | foos | 0 | index_foos_on_email | 1 | email | A | 19710 | NULL | NULL | YES | BTREE | | | foos | 1 | index_foos_on_name | 1 | name | A | 19710 | NULL | NULL | | BTREE | | +-------+------------+---------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+ Does anything like this exist for PostgreSQL? I've tried \d at the psql command prompt (with the -E option to show SQL) but it doesn't show the information I'm looking for. Update: Thanks to everyone who added their answers. cope360 gave me exactly what I was looking for, but several people chimed in with very useful links. For future reference, check out the documentation for pg_index (via Milen A. Radev) and the very useful article Extracting META information from PostgreSQL (via Michał Niklas).
Create some test data... create table test (a int, b int, c int, constraint pk_test primary key(a, b)); create table test2 (a int, b int, c int, constraint uk_test2 unique (b, c)); create table test3 (a int, b int, c int, constraint uk_test3b unique (b), constraint uk_test3c unique (c),constraint uk_test3ab unique (a, b)); List indexes and columns indexed: select t.relname as table_name, i.relname as index_name, a.attname as column_name from pg_class t, pg_class i, pg_index ix, pg_attribute a where t.oid = ix.indrelid and i.oid = ix.indexrelid and a.attrelid = t.oid and a.attnum = ANY(ix.indkey) and t.relkind = 'r' and t.relname like 'test%' order by t.relname, i.relname; table_name | index_name | column_name ------------+------------+------------- test | pk_test | a test | pk_test | b test2 | uk_test2 | b test2 | uk_test2 | c test3 | uk_test3ab | a test3 | uk_test3ab | b test3 | uk_test3b | b test3 | uk_test3c | c Roll up the column names: select t.relname as table_name, i.relname as index_name, array_to_string(array_agg(a.attname), ', ') as column_names from pg_class t, pg_class i, pg_index ix, pg_attribute a where t.oid = ix.indrelid and i.oid = ix.indexrelid and a.attrelid = t.oid and a.attnum = ANY(ix.indkey) and t.relkind = 'r' and t.relname like 'test%' group by t.relname, i.relname order by t.relname, i.relname; table_name | index_name | column_names ------------+------------+-------------- test | pk_test | a, b test2 | uk_test2 | b, c test3 | uk_test3ab | a, b test3 | uk_test3b | b test3 | uk_test3c | c
PostgreSQL
2,204,058
282
I do not know the service's name, but would like to stop the service by checking its status. For example, if I want to check if the PostgreSQL service is running or not, but I don't know the service's name, then how could I check its status? I know the command to check the status if the service name is known.
I don't have an Ubuntu box, but on Red Hat Linux you can see all running services by running the following command: service --status-all On the list the + indicates the service is running, - indicates service is not running, ? indicates the service state cannot be determined.
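On modern systemd-based Ubuntu releases (an assumption; the commands above are SysV-style), the equivalent checks would be:
systemctl list-units --type=service --state=running
systemctl status postgresql
sudo systemctl stop postgresql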
PostgreSQL
18,721,149
280
I have a column of the TIMESTAMP WITHOUT TIME ZONE type and would like to have that default to the current time in UTC. Getting the current time in UTC is easy: postgres=# select now() at time zone 'utc'; timezone ---------------------------- 2013-05-17 12:52:51.337466 (1 row) As is using the current timestamp for a column: postgres=# create temporary table test(id int, ts timestamp without time zone default current_timestamp); CREATE TABLE postgres=# insert into test values (1) returning ts; ts ---------------------------- 2013-05-17 14:54:33.072725 (1 row) But that uses local time. Trying to force that to UTC results in a syntax error: postgres=# create temporary table test(id int, ts timestamp without time zone default now() at time zone 'utc'); ERROR: syntax error at or near "at" LINE 1: ...int, ts timestamp without time zone default now() at time zo...
A function is not even needed. Just put parentheses around the default expression: create temporary table test( id int, ts1 timestamp default (now() at time zone 'utc'), ts2 timestamp default (timezone('utc', now())) ); -- the ts2 default shows the alternative timezone() syntax NOTE: The SQL standard requires that writing just timestamp be equivalent to timestamp without time zone, and PostgreSQL honors that behavior. timestamptz is accepted as an abbreviation for timestamp with time zone; this is a PostgreSQL extension. https://www.postgresql.org/docs/current/datatype-datetime.html
PostgreSQL
16,609,724
278
Is this proper postgresql syntax to add a column to a table with a default value of false ALTER TABLE users ADD "priv_user" BIT ALTER priv_user SET DEFAULT '0' Thanks!
ALTER TABLE users ADD COLUMN "priv_user" BOOLEAN DEFAULT FALSE;

You can also directly specify NOT NULL:

ALTER TABLE users ADD COLUMN "priv_user" BOOLEAN NOT NULL DEFAULT FALSE;

UPDATE: the following applies only to versions before PostgreSQL 11. As Craig mentioned, on filled tables it is more efficient to split it into steps:

ALTER TABLE users ADD COLUMN priv_user BOOLEAN;
UPDATE users SET priv_user = 'f';
ALTER TABLE users ALTER COLUMN priv_user SET NOT NULL;
ALTER TABLE users ALTER COLUMN priv_user SET DEFAULT FALSE;
PostgreSQL
11,938,621
278
I have the following table:

 tickername | tickerbbname  | tickertype
------------+---------------+------------
 USDZAR     | USDZAR Curncy | C
 EURCZK     | EURCZK Curncy | C
 EURPLN     | EURPLN Curncy | C
 USDBRL     | USDBRL Curncy | C
 USDTRY     | USDTRY Curncy | C
 EURHUF     | EURHUF Curncy | C
 USDRUB     | USDRUB Curncy | C

I don't want there to ever be more than one row for any given tickername/tickerbbname pair. I've already created the table and have lots of data in it (which I have already ensured meets the unique criteria). As it gets larger, though, room for error creeps in. Is there any way to add a UNIQUE constraint at this point?
psql's inline help:

\h ALTER TABLE

Also documented in the postgres docs (an excellent resource, plus easy to read, too).

ALTER TABLE tablename ADD CONSTRAINT constraintname UNIQUE (columns);
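For instance, assuming the question's table is named tickers (the actual name isn't shown, so this is a hypothetical), a sketch of the statement would be:

ALTER TABLE tickers ADD CONSTRAINT tickers_name_bbname_key UNIQUE (tickername, tickerbbname);  -- table name is hypothetical

Note that adding the constraint fails if existing rows already violate it, so any duplicates must be cleaned up first.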
PostgreSQL
1,194,438
277
Trying to create this example table structure in Postgres 9.1:

CREATE TABLE foo (
    name VARCHAR(256) PRIMARY KEY
);

CREATE TABLE bar (
    pkey   SERIAL PRIMARY KEY,
    foo_fk VARCHAR(256) NOT NULL REFERENCES foo(name),
    name   VARCHAR(256) NOT NULL,
    UNIQUE (foo_fk, name)
);

CREATE TABLE baz (
    pkey   SERIAL PRIMARY KEY,
    bar_fk VARCHAR(256) NOT NULL REFERENCES bar(name),
    name   VARCHAR(256)
);

Running the above code produces an error, which does not make sense to me:

NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "foo_pkey" for table "foo"
NOTICE: CREATE TABLE will create implicit sequence "bar_pkey_seq" for serial column "bar.pkey"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "bar_pkey" for table "bar"
NOTICE: CREATE TABLE / UNIQUE will create implicit index "bar_foo_fk_name_key" for table "bar"
NOTICE: CREATE TABLE will create implicit sequence "baz_pkey_seq" for serial column "baz.pkey"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "baz_pkey" for table "baz"
ERROR: there is no unique constraint matching given keys for referenced table "bar"

********** Error **********

ERROR: there is no unique constraint matching given keys for referenced table "bar"
SQL state: 42830

Can anyone explain why this error arises?
It's because the name column on the bar table does not have the UNIQUE constraint. So imagine you have 2 rows on the bar table that contain the name 'ams' and you insert a row on baz with 'ams' in bar_fk: which row in bar would it be referring to, since there are two matching rows?
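A sketch of two possible fixes, depending on the intended model (neither is prescribed by the original answer):

ALTER TABLE bar ADD CONSTRAINT bar_name_key UNIQUE (name);

-- or have baz reference bar's primary key instead:
-- bar_fk integer NOT NULL REFERENCES bar(pkey)

The first makes bar.name unambiguous; the second sidesteps the problem by pointing the foreign key at a column that is already unique.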
PostgreSQL
11,966,420
274
As I can understand documentation the following definitions are equivalent: create table foo ( id serial primary key, code integer, label text, constraint foo_uq unique (code, label)); create table foo ( id serial primary key, code integer, label text); create unique index foo_idx on foo using btree (code, label); However, a note in the manual for Postgres 9.4 says: The preferred way to add a unique constraint to a table is ALTER TABLE ... ADD CONSTRAINT. The use of indexes to enforce unique constraints could be considered an implementation detail that should not be accessed directly. (Edit: this note was removed from the manual with Postgres 9.5.) Is it only a matter of good style? What are practical consequences of choice one of these variants (e.g. in performance)?
I had some doubts about this basic but important issue, so I decided to learn by example. Let's create test table master with two columns, con_id with a unique constraint and ind_id indexed by a unique index.

create table master (
    con_id integer unique,
    ind_id integer
);
create unique index master_unique_idx on master (ind_id);

    Table "public.master"
 Column | Type    | Modifiers
--------+---------+-----------
 con_id | integer |
 ind_id | integer |
Indexes:
    "master_con_id_key" UNIQUE CONSTRAINT, btree (con_id)
    "master_unique_idx" UNIQUE, btree (ind_id)

In the table description (\d in psql) you can tell a unique constraint from a unique index.

Uniqueness

Let's check uniqueness, just in case.

test=# insert into master values (0, 0);
INSERT 0 1
test=# insert into master values (0, 1);
ERROR:  duplicate key value violates unique constraint "master_con_id_key"
DETAIL:  Key (con_id)=(0) already exists.
test=# insert into master values (1, 0);
ERROR:  duplicate key value violates unique constraint "master_unique_idx"
DETAIL:  Key (ind_id)=(0) already exists.
test=#

It works as expected!

Foreign keys

Now we'll define a detail table with two foreign keys referencing our two columns in master.

create table detail (
    con_id integer,
    ind_id integer,
    constraint detail_fk1 foreign key (con_id) references master(con_id),
    constraint detail_fk2 foreign key (ind_id) references master(ind_id)
);

    Table "public.detail"
 Column | Type    | Modifiers
--------+---------+-----------
 con_id | integer |
 ind_id | integer |
Foreign-key constraints:
    "detail_fk1" FOREIGN KEY (con_id) REFERENCES master(con_id)
    "detail_fk2" FOREIGN KEY (ind_id) REFERENCES master(ind_id)

Well, no errors. Let's make sure it works.

test=# insert into detail values (0, 0);
INSERT 0 1
test=# insert into detail values (1, 0);
ERROR:  insert or update on table "detail" violates foreign key constraint "detail_fk1"
DETAIL:  Key (con_id)=(1) is not present in table "master".
test=# insert into detail values (0, 1);
ERROR:  insert or update on table "detail" violates foreign key constraint "detail_fk2"
DETAIL:  Key (ind_id)=(1) is not present in table "master".
test=#

Both columns can be referenced in foreign keys.

Constraint using index

You can add a table constraint using an existing unique index.

alter table master add constraint master_ind_id_key unique using index master_unique_idx;

    Table "public.master"
 Column | Type    | Modifiers
--------+---------+-----------
 con_id | integer |
 ind_id | integer |
Indexes:
    "master_con_id_key" UNIQUE CONSTRAINT, btree (con_id)
    "master_ind_id_key" UNIQUE CONSTRAINT, btree (ind_id)
Referenced by:
    TABLE "detail" CONSTRAINT "detail_fk1" FOREIGN KEY (con_id) REFERENCES master(con_id)
    TABLE "detail" CONSTRAINT "detail_fk2" FOREIGN KEY (ind_id) REFERENCES master(ind_id)

Now there is no difference between the two column constraints' descriptions.

Partial indexes

In a table constraint declaration you cannot create partial indexes. It comes directly from the definition of create table .... In a unique index declaration you can set a WHERE clause to create a partial index. You can also create an index on an expression (not only on a column) and define some other parameters (collation, sort order, NULLs placement). You cannot add a table constraint using a partial index:

alter table master add column part_id integer;
create unique index master_partial_idx on master (part_id) where part_id is not null;
alter table master add constraint master_part_id_key unique using index master_partial_idx;
ERROR:  "master_partial_idx" is a partial index
LINE 1: alter table master add constraint master_part_id_key unique ...
                               ^
DETAIL:  Cannot create a primary key or unique constraint using such an index.
PostgreSQL
23,542,794
271
Is there an easy way to see the code used to create a view using the PostgreSQL command-line client? Something like the SHOW CREATE VIEW from MySQL.
Kept having to return here to look up pg_get_viewdef (how to remember that!!), so searched for a more memorable command... and got it:

\d+ viewname

You can see similar sorts of commands by typing \? at the psql command line. Bonus tip: The emacs command sql-postgres makes psql a lot more pleasant (edit, copy, paste, command history).
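And if you do want the function form after all, a usage sketch (viewname is a placeholder):

SELECT pg_get_viewdef('viewname'::regclass, true);

The second argument asks for pretty-printed output.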
PostgreSQL
14,634,322
271
I need to take the first N rows for each group, ordered by a custom column. Given the following table:

db=# SELECT * FROM xxx;
 id | section_id | name
----+------------+------
  1 |          1 | A
  2 |          1 | B
  3 |          1 | C
  4 |          1 | D
  5 |          2 | E
  6 |          2 | F
  7 |          3 | G
  8 |          2 | H
(8 rows)

I need the first 2 rows (ordered by name) for each section_id, i.e. a result similar to:

 id | section_id | name
----+------------+------
  1 |          1 | A
  2 |          1 | B
  5 |          2 | E
  6 |          2 | F
  7 |          3 | G
(5 rows)

I am using PostgreSQL 8.3.5.
New solution (PostgreSQL 8.4)

SELECT * FROM (
  SELECT ROW_NUMBER() OVER (PARTITION BY section_id ORDER BY name) AS r,
         t.*
  FROM xxx t) x
WHERE x.r <= 2;
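For the special case N=1 (only the first row per group), a simpler alternative worth knowing is DISTINCT ON, a PostgreSQL extension that works even on 8.3:

SELECT DISTINCT ON (section_id) *
FROM xxx
ORDER BY section_id, name;

It returns only the first row per section_id ordered by name; it does not generalize to N=2, which is why the window-function form above is needed here.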
PostgreSQL
1,124,603
270
I'm getting the following error when running a query on a PostgreSQL db in standby mode. The query that causes the error works fine for 1 month but when you query for more than 1 month an error results. ERROR: canceling statement due to conflict with recovery Detail: User query might have needed to see row versions that must be removed Any suggestions on how to resolve? Thanks
No need to touch hot_standby_feedback. As others have mentioned, setting it to on can bloat the master. Imagine opening a transaction on a slave and not closing it.

Instead, set max_standby_archive_delay and max_standby_streaming_delay to sane values:

# /etc/postgresql/10/main/postgresql.conf on a slave
max_standby_archive_delay = 900s
max_standby_streaming_delay = 900s

This way queries on slaves with a duration less than 900 seconds won't be cancelled. If your workload requires longer queries, just set these options to a higher value.

The postgres docs discuss this at some length. Key advice from there is:

if the standby server is meant for executing long-running queries, then a high or even infinite delay value [in max_standby_archive_delay and max_standby_streaming_delay] may be preferable

and

Users should be clear that tables that are regularly and heavily updated on the primary server will quickly cause cancellation of longer running queries on the standby. In such cases the setting of a finite value for max_standby_archive_delay or max_standby_streaming_delay can be considered similar to setting statement_timeout.

You can also consider setting vacuum_defer_cleanup_age (on the primary) in combination with the max standby delays. As the docs say:

Another option is to increase vacuum_defer_cleanup_age on the primary server, so that dead rows will not be cleaned up as quickly as they normally would be. This will allow more time for queries to execute before they are canceled on the standby, without having to set a high max_standby_streaming_delay
PostgreSQL
14,592,436
269
I am new to message brokers like RabbitMQ, which we can use to create task/message queues for a scheduling system like Celery. Now, here is the question: I can create a table in PostgreSQL which can be appended with new tasks and consumed by a consumer program like Celery. Why on earth would I want to set up a whole new tech for this like RabbitMQ? Now, I believe scaling cannot be the answer, since a database like PostgreSQL can work in a distributed environment. I googled for the problems a database poses for this particular use case, and found:

polling keeps the database busy and low performing
locking of the table -> again low performing
millions of rows of tasks -> again, polling is low performing

Now, how does RabbitMQ or any other message broker like it solve these problems? Also, I found out that AMQP is the protocol it follows. What's great in that? Can Redis also be used as a message broker? I find it more analogous to Memcached than RabbitMQ. Please shed some light on this!
Rabbit's queues reside in memory and will therefore be much faster than implementing this in a database. A (good) dedicated message queue should also provide essential queuing-related features such as throttling/flow control, and the ability to choose different routing algorithms, to name a couple (rabbit provides these and more). Depending on the size of your project, you may also want the message passing component separate from your database, so that if one component experiences heavy load, it need not hinder the other's operation.

As for the problems you mentioned:

polling keeping the database busy and low performing: Using Rabbitmq, producers can push updates to consumers, which is far more performant than polling. Data is simply sent to the consumer when it needs to be, eliminating the need for wasteful checks.

locking of the table -> again low performing: There is no table to lock :P

millions of rows of tasks -> again polling is low performing: As mentioned above, Rabbitmq will operate faster as it resides in RAM, and provides flow control. If needed, it can also use the disk to temporarily store messages if it runs out of RAM. After 2.0, Rabbit has significantly improved its RAM usage. Clustering options are also available.

In regards to AMQP, I would say a really cool feature is the "exchange", and the ability for it to route to other exchanges. This gives you more flexibility and enables you to create a wide array of elaborate routing topologies, which can come in very handy when scaling. For a good example, see:

http://blog.springsource.org/2011/04/01/routing-topologies-for-performance-and-scalability-with-rabbitmq/

Finally, in regards to Redis, yes, it can be used as a message broker, and can do well. However, Rabbitmq has more message queuing features than Redis, as rabbitmq was built from the ground up to be a full-featured enterprise-level dedicated message queue. Redis on the other hand was primarily created to be an in-memory key-value store (though it does much more than that now; it's even referred to as a swiss army knife). Still, I've read/heard of many people achieving good results with Redis for smaller-sized projects, but haven't heard much about it in larger applications.

Here is an example of Redis being used in a long-polling chat implementation: http://eflorenzano.com/blog/2011/02/16/technology-behind-convore/
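For balance, it's worth noting that modern Postgres (9.5+) can soften the locking problem itself with FOR UPDATE SKIP LOCKED. A minimal sketch of a table-backed queue, assuming a hypothetical tasks(id, payload, done) table:

BEGIN;
SELECT id, payload
FROM tasks
WHERE done = false
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;  -- concurrent workers skip rows another worker holds
-- ... process the task, then mark it finished:
UPDATE tasks SET done = true WHERE id = <id from the SELECT>;  -- placeholder
COMMIT;

This avoids lock contention between workers, though it still relies on polling and lacks push delivery, routing, and flow control, which is where a broker like RabbitMQ earns its keep.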
PostgreSQL
13,005,410
268
I need to know the number of rows in a table to calculate a percentage. If the total count is greater than some predefined constant, I will use the constant value. Otherwise, I will use the actual number of rows. I can use SELECT count(*) FROM table. But if my constant value is 500,000 and I have 5,000,000,000 rows in my table, counting all rows will waste a lot of time. Is it possible to stop counting as soon as my constant value is surpassed? I need the exact number of rows only as long as it's below the given limit. Otherwise, if the count is above the limit, I use the limit value instead and want the answer as fast as possible. Something like this: SELECT text,count(*), percentual_calculus() FROM token GROUP BY text ORDER BY count DESC;
Counting rows in big tables is known to be slow in PostgreSQL. The MVCC model requires a full count of live rows for a precise number. There are workarounds to speed this up dramatically if the count does not have to be exact like it seems to be in your case. (Remember that even an "exact" count is potentially dead on arrival under concurrent write load.)

Exact count

Slow for big tables. With concurrent write operations, it may be outdated the moment you get it.

SELECT count(*) AS exact_count FROM myschema.mytable;

Estimate

Extremely fast:

SELECT reltuples AS estimate FROM pg_class where relname = 'mytable';

Typically, the estimate is very close. How close, depends on whether ANALYZE or VACUUM are run enough - where "enough" is defined by the level of write activity to your table.

Safer estimate

The above ignores the possibility of multiple tables with the same name in one database - in different schemas. To account for that:

SELECT c.reltuples::bigint AS estimate
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  c.relname = 'mytable'
AND    n.nspname = 'myschema';

The cast to bigint formats the real number nicely, especially for big counts.

Better estimate

SELECT reltuples::bigint AS estimate
FROM   pg_class
WHERE  oid = 'myschema.mytable'::regclass;

Faster, simpler, safer, more elegant. See the manual on Object Identifier Types.

Replace 'myschema.mytable'::regclass with to_regclass('myschema.mytable') in Postgres 9.4+ to get nothing instead of an exception for invalid table names. See:

How to check if a table exists in a given schema

Better estimate yet (for very little added cost)

This does not work for partitioned tables because relpages is always -1 for the parent table (while reltuples contains an actual estimate covering all partitions) - tested in Postgres 14. You have to add up estimates for all partitions instead.

We can do what the Postgres planner does. Quoting the Row Estimation Examples in the manual:

These numbers are current as of the last VACUUM or ANALYZE on the table. The planner then fetches the actual current number of pages in the table (this is a cheap operation, not requiring a table scan). If that is different from relpages then reltuples is scaled accordingly to arrive at a current number-of-rows estimate.

Postgres uses estimate_rel_size defined in src/backend/utils/adt/plancat.c, which also covers the corner case of no data in pg_class because the relation was never vacuumed. We can do something similar in SQL:

Minimal form

SELECT (reltuples / relpages * (pg_relation_size(oid) / 8192))::bigint
FROM   pg_class
WHERE  oid = 'mytable'::regclass;  -- your table here

Safe and explicit

SELECT (CASE WHEN c.reltuples < 0 THEN NULL       -- never vacuumed
             WHEN c.relpages = 0 THEN float8 '0'  -- empty table
             ELSE c.reltuples / c.relpages END
      * (pg_catalog.pg_relation_size(c.oid)
       / pg_catalog.current_setting('block_size')::int)
       )::bigint
FROM   pg_catalog.pg_class c
WHERE  c.oid = 'myschema.mytable'::regclass;  -- schema-qualified table here

Doesn't break with empty tables and tables that have never seen VACUUM or ANALYZE. The manual on pg_class:

If the table has never yet been vacuumed or analyzed, reltuples contains -1 indicating that the row count is unknown.

If this query returns NULL, run ANALYZE or VACUUM for the table and repeat. (Alternatively, you could estimate row width based on column types like Postgres does, but that's tedious and error-prone.)

If this query returns 0, the table seems to be empty. But I would ANALYZE to make sure. (And maybe check your autovacuum settings.)

Typically, block_size is 8192. current_setting('block_size')::int covers rare exceptions.

Table and schema qualifications make it immune to any search_path and scope.

Either way, the query consistently takes < 0.1 ms for me.

More Web resources:

The Postgres Wiki FAQ
The Postgres wiki pages for count estimates and count(*) performance

TABLESAMPLE SYSTEM (n) in Postgres 9.5+

SELECT 100 * count(*) AS estimate FROM mytable TABLESAMPLE SYSTEM (1);

Like @a_horse commented, the added clause for the SELECT command can be useful if statistics in pg_class are not current enough for some reason. For example:

No autovacuum running.
Immediately after a large INSERT / UPDATE / DELETE.
TEMPORARY tables (which are not covered by autovacuum).

This only looks at a random n % (1 in the example) selection of blocks and counts rows in it. A bigger sample increases the cost and reduces the error, your pick. Accuracy depends on more factors:

Distribution of row size. If a given block happens to hold wider than usual rows, the count is lower than usual etc.
Dead tuples or a FILLFACTOR occupy space per block. If unevenly distributed across the table, the estimate may be off.
General rounding errors.

Typically, the estimate from pg_class will be faster and more accurate.

Answer to actual question

First, I need to know the number of rows in that table, if the total count is greater than some predefined constant,

And whether it ...

... is possible at the moment the count pass my constant value, it will stop the counting (and not wait to finish the counting to inform the row count is greater).

Yes. You can use a subquery with LIMIT:

SELECT count(*) FROM (SELECT 1 FROM token LIMIT 500000) t;

Postgres actually stops counting beyond the given limit, so you get an exact and current count for up to n rows (500000 in the example), and n otherwise. Not nearly as fast as the estimate in pg_class, though.
PostgreSQL
7,943,233
267
I have a db table say, persons in Postgres handed down by another team that has a column name say, "first_Name". Now am trying to use PG commander to query this table on this column-name. select * from persons where first_Name="xyz"; And it just returns ERROR: column "first_Name" does not exist Not sure if I am doing something silly or is there a workaround to this problem that I am missing?
Identifiers (including column names) that are not double-quoted are folded to lower case in PostgreSQL. Identifiers created with double quotes retain upper case letters (and/or syntax violations) and have to be double-quoted for the rest of their life: "first_Name" -- upper-case "N" preserved "1st_Name" -- leading digit preserved "AND" -- reserved word preserved But (without double-quotes): first_Name → first_name -- upper-case "N" folded to lower-case "n" 1st_Name → Syntax error! -- leading digit AND → Syntax error! -- reserved word Values (string literals / constants) are enclosed in single quotes: 'xyz' So, yes, PostgreSQL column names are case-sensitive (when double-quoted): SELECT * FROM persons WHERE "first_Name" = 'xyz'; The manual on identifiers. My standing advice is to use legal, lower-case names exclusively, so double-quoting is never required. System catalogs like pg_class store names in case-sensitive fashion - as provided when double-quoted (without enclosing quotes, obviously), or lower-cased if not.
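To audit how much quoting a handed-down schema will force on you, one hedged approach is to list column names that aren't already lower-case:

SELECT table_name, column_name
FROM information_schema.columns
WHERE table_schema = 'public'
  AND column_name <> lower(column_name);

Each name returned will need double quotes in every query that touches it.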
PostgreSQL
20,878,932
265
I have a table to store information about my rabbits. It looks like this: create table rabbits (rabbit_id bigserial primary key, info json not null); insert into rabbits (info) values ('{"name":"Henry", "food":["lettuce","carrots"]}'), ('{"name":"Herald","food":["carrots","zucchini"]}'), ('{"name":"Helen", "food":["lettuce","cheese"]}'); How should I find the rabbits who like carrots? I came up with this: select info->>'name' from rabbits where exists ( select 1 from json_array_elements(info->'food') as food where food::text = '"carrots"' ); I don't like that query. It's a mess. As a full-time rabbit-keeper, I don't have time to change my database schema. I just want to properly feed my rabbits. Is there a more readable way to do that query?
As of PostgreSQL 9.4, you can use the ? operator:

select info->>'name' from rabbits where (info->'food')::jsonb ? 'carrots';

You can even index the ? query on the "food" key if you switch to the jsonb type instead:

alter table rabbits alter info type jsonb using info::jsonb;
create index on rabbits using gin ((info->'food'));
select info->>'name' from rabbits where info->'food' ? 'carrots';

Of course, you probably don't have time for that as a full-time rabbit-keeper.

Update: Here's a demonstration of the performance improvements on a table of 1,000,000 rabbits where each rabbit likes two foods and 10% of them like carrots:

d=# -- Postgres 9.3 solution
d=# explain analyze select info->>'name' from rabbits where exists (
d(#   select 1 from json_array_elements(info->'food') as food
d(#   where food::text = '"carrots"'
d(# );
Execution time: 3084.927 ms

d=# -- Postgres 9.4+ solution
d=# explain analyze select info->'name' from rabbits where (info->'food')::jsonb ? 'carrots';
Execution time: 1255.501 ms

d=# alter table rabbits alter info type jsonb using info::jsonb;
d=# explain analyze select info->'name' from rabbits where info->'food' ? 'carrots';
Execution time: 465.919 ms

d=# create index on rabbits using gin ((info->'food'));
d=# explain analyze select info->'name' from rabbits where info->'food' ? 'carrots';
Execution time: 256.478 ms
PostgreSQL
19,925,641
265
I am looking for some docs and/or examples for the new JSON functions in PostgreSQL 9.2. Specifically, given a series of JSON records: [ {name: "Toby", occupation: "Software Engineer"}, {name: "Zaphod", occupation: "Galactic President"} ] How would I write the SQL to find a record by name? In vanilla SQL: SELECT * from json_data WHERE "name" = "Toby" The official dev manual is quite sparse: http://www.postgresql.org/docs/devel/static/datatype-json.html http://www.postgresql.org/docs/devel/static/functions-json.html Update I I've put together a gist detailing what is currently possible with PostgreSQL 9.2. Using some custom functions, it is possible to do things like: SELECT id, json_string(data,'name') FROM things WHERE json_string(data,'name') LIKE 'G%'; Update II I've now moved my JSON functions into their own project: PostSQL - a set of functions for transforming PostgreSQL and PL/v8 into a totally awesome JSON document store
Postgres 9.2 I quote Andrew Dunstan on the pgsql-hackers list: At some stage there will possibly be some json-processing (as opposed to json-producing) functions, but not in 9.2. Doesn't prevent him from providing an example implementation in PLV8 that should solve your problem. (Link is dead now, see modern PLV8 instead.) Postgres 9.3 Offers an arsenal of new functions and operators to add "json-processing". The manual on new JSON functionality. The Postgres Wiki on new features in pg 9.3. The answer to the original question in Postgres 9.3: For a given table: CREATE TABLE json_tbl (data json); Query: SELECT object FROM json_tbl , json_array_elements(data) AS object WHERE object->>'name' = 'Toby'; Advanced example: Query combinations with nested array of records in JSON datatype For bigger tables you may want to add an expression index to increase performance: Index for finding an element in a JSON array Postgres 9.4 Adds jsonb (b for "binary", values are stored as native Postgres types) and yet more functionality for both types. In addition to expression indexes mentioned above, jsonb also supports GIN, btree and hash indexes, GIN being the most potent of these. The manual on json and jsonb data types and functions. The Postgres Wiki on JSONB in pg 9.4 The manual goes as far as suggesting: In general, most applications should prefer to store JSON data as jsonb, unless there are quite specialized needs, such as legacy assumptions about ordering of object keys. Bold emphasis mine. Also, performance benefits from general improvements to GIN indexes. Postgres 9.5 Complete jsonb functions and operators. Add more functions to manipulate jsonb in place and for display. Major good news in the release notes of Postgres 9.5. Functionality and performance has been improved with every major Postgres version since. It's pretty complete by now (as of Postgres 16). One major, notable addition in ... Postgres 12 ... is the SQL/JSON path language along with operators and functions. The answer to the example in the question can now be, for a given table (with jsonb): CREATE TABLE jsonb_tbl (data jsonb); SELECT jsonb_path_query_first(data, '$[*] ? (@.name == "Toby")') AS object FROM jsonb_tbl WHERE data @> '[{"name": "Toby"}]'; -- optional, for index support Or equivalent: ... WHERE data @@ '$[*].name == "Toby"'; fiddle See: Returning JSON arrary with particular property using Postgres About indexing: Find rows containing a key in a JSONB array of records
PostgreSQL
10,560,394
264
I am using Datagrip for Postgresql. I have a table with a date field in timestamp format (ex: 2016-11-01 00:00:00). I want to be able to: apply a mathematical operator to subtract 1 day filter it based on a time window of today-130 days display it without the hh/mm/ss part of the stamp (2016-10-31) Current starting query: select org_id, count(accounts) as count, ((date_at) - 1) as dateat from sourcetable where date_at <= now() - 130 group by org_id, dateat The ((date_at)-1) clause on line 1 results in: [42883] ERROR: operator does not exist: timestamp without time zone - integer Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts. Position: 69 The now() clause spawns a similar message: [42883] ERROR: operator does not exist: timestamp with time zone - integer Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts. Position: ... Online guides to type casts are singularly unhelpful. Input is appreciated.
Use the INTERVAL type for it. E.g.:

-- yesterday
SELECT NOW() - INTERVAL '1 DAY';

-- Unrelated: PostgreSQL also supports some interesting shortcuts:
SELECT 'yesterday'::TIMESTAMP,
       'tomorrow'::TIMESTAMP,
       'allballs'::TIME AS aka_midnight;

You can do the following then:

SELECT org_id,
       count(accounts) AS count,
       ((date_at) - INTERVAL '1 DAY') AS dateat
FROM   sourcetable
WHERE  date_at <= now() - INTERVAL '130 DAYS'
GROUP  BY org_id, dateat;

TIPS

Tip 1

You can append multiple operands. E.g.: how to get the last day of the current month?

SELECT date_trunc('MONTH', CURRENT_DATE) + INTERVAL '1 MONTH - 1 DAY';

Tip 2

You can also create an interval using the make_interval function, useful when you need to create it at runtime (not using literals):

SELECT make_interval(days => 10 + 2);
SELECT make_interval(days => 1, hours => 2);
SELECT make_interval(0, 1, 0, 5, 0, 0, 0.0);

More info: Date/Time Functions and Operators, datatype-datetime (special values).
PostgreSQL
46,079,791
263
When I try to test any app with this command (I noticed it when I tried to deploy myproject using fabric, which uses this command):

python manage.py test appname

I get this error:

Creating test database for alias 'default'...
Got an error creating the test database: permission denied to create database
Type 'yes' if you would like to try deleting the test database 'test_finance', or 'no' to cancel

The syncdb command seems to work. My database settings in settings.py:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'finance',      # Or path to database file if using sqlite3.
        'USER': 'django',       # Not used with sqlite3.
        'PASSWORD': 'mydb123',  # Not used with sqlite3.
        'HOST': '127.0.0.1',    # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '',             # Set to empty string for default. Not used with sqlite3.
    }
}
When Django runs the test suite, it creates a new database, in your case test_finance. The postgres user with username django does not have permission to create a database, hence the error message. When you run migrate or syncdb, Django does not try to create the finance database, so you don't get any errors.

You can add the createdb permission to the django user by running the following command in the postgres shell as a superuser (hat tip to this stack overflow answer):

=> ALTER USER django CREATEDB;

Note: the username used in the ALTER USER <username> CREATEDB; command needs to match the database user in your Django settings file. In this case, the original poster had the user as django, matching the command above.
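If you prefer to run it from the OS shell, a hedged one-liner (assuming a typical setup where a postgres superuser OS account owns the cluster) is:

sudo -u postgres psql -c "ALTER USER django CREATEDB;"

It is the same statement, just delivered through psql's -c option.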
PostgreSQL
14,186,055
263
Hello, I want to delete all data in my PostgreSQL tables, but not the tables themselves. How could I do this?
Use the TRUNCATE TABLE command.
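For instance (table names are placeholders), a single statement can clear several tables, reset their sequences, and follow foreign keys:

TRUNCATE TABLE table1, table2 RESTART IDENTITY CASCADE;

RESTART IDENTITY resets any owned sequences, and CASCADE also truncates tables with foreign-key references to the listed ones; use it deliberately.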
PostgreSQL
13,223,820
263
How do I convert an integer to string as part of a PostgreSQL query? So, for example, I need: SELECT * FROM table WHERE <some integer> = 'string of numbers' where <some integer> can be anywhere from 1 to 15 digits long.
Because the number can be up to 15 digits, you'll need to cast to a 64-bit (8-byte) integer. Try this:

SELECT * FROM table
WHERE myint = mytext::int8

The :: cast operator is historical but convenient. Postgres also conforms to the SQL standard syntax:

myint = cast(mytext as int8)

If you have literal text you want to compare with an int, cast the int to text:

SELECT * FROM table
WHERE myint::varchar(255) = mytext
PostgreSQL
13,809,547
262
For pagination purposes, I need a run a query with the LIMIT and OFFSET clauses. But I also need a count of the number of rows that would be returned by that query without the LIMIT and OFFSET clauses. I want to run: SELECT * FROM table WHERE /* whatever */ ORDER BY col1 LIMIT ? OFFSET ? And: SELECT COUNT(*) FROM table WHERE /* whatever */ At the same time. Is there a way to do that, particularly a way that lets Postgres optimize it, so that it's faster than running both individually?
Yes. With a simple window function.

Add a column with the total count

SELECT *, count(*) OVER() AS full_count
FROM   tbl
WHERE  /* whatever */
ORDER  BY col1
OFFSET ?
LIMIT  ?

Be aware that the cost will be substantially higher than without the total number. Postgres has to actually count all qualifying rows either way, which imposes a cost depending on the total number. See:

Best way to get result count before LIMIT was applied

Two separate queries (one for the result set, one for the total count) may or may not be faster. But the overhead of executing two separate queries and processing results often tips the scales. Depends on the nature of the query, indexes, resources, cardinalities ...

However, as Dani pointed out, when OFFSET is at least as great as the number of rows returned from the base query, no rows are returned. So we get no full_count, either.

If that's a rare case, just run a second query for the count in this case. If that's not acceptable, here is a single query always returning the full count, with a CTE and an OUTER JOIN. This adds more overhead and only makes sense for certain cases (expensive filters, few qualifying rows).

WITH cte AS (
   SELECT *
   FROM   tbl
   WHERE  /* whatever */
   -- ORDER BY col1  -- ①
   )
SELECT *
FROM  (
   TABLE  cte
   ORDER  BY col1
   LIMIT  ?
   OFFSET ?
   ) sub
RIGHT  JOIN (SELECT count(*) FROM cte) c(full_count) ON true;

① Typically it does not pay to add (the same) ORDER BY in the CTE. That forces all rows to be sorted. With LIMIT, typically only a small fraction has to be sorted (with "top-N heapsort").

You get one row of null values, with the full_count appended if OFFSET is too big. Else, it's appended to every row like in the first query.

If a row with all null values is a possible valid result you have to check offset >= full_count to disambiguate the origin of the empty row.

This still executes the base query only once. But it adds more overhead to the query and only pays if that's less than repeating the base query for the count.

Either way, the total count is returned with every row (redundantly). Doesn't add much cost. But if that's an issue, you could instead ...

Add a row with the total count

The added row must match the row type of the query result, and the count must fit into the data type of one of the columns. A bit of a hack. Like:

WITH cte AS (
   SELECT col1, col2, int_col3
   FROM   tbl
   WHERE  /* whatever */
   )
SELECT null AS col1, null AS col2, count(*)::int AS int_col3  -- maybe cast the count
FROM   cte
UNION  ALL
(  -- parentheses required
TABLE  cte
ORDER  BY col1
LIMIT  ?
OFFSET ?
);

Again, sometimes it may be cheaper to just run a separate count (still in a single query!):

SELECT null AS col1, null AS col2, count(*)::int AS int_col3
FROM   tbl
WHERE  /* whatever */
UNION  ALL
(  -- parentheses required
SELECT col1, col2, int_col3
FROM   tbl
WHERE  /* whatever */
ORDER  BY col1
LIMIT  ?
OFFSET ?
);

About the syntax shortcut TABLE tbl:

Is there a shortcut for SELECT * FROM?
PostgreSQL
28,888,375
260
Could you tell me how to check which indexes have been created for a table in PostgreSQL?
The view pg_indexes provides access to useful information about each index in the database, e.g.: select * from pg_indexes where tablename = 'test' The pg_index system view contains more detailed (internal) parameters, in particular, whether the index is a primary key or whether it is unique. Example: select c.relnamespace::regnamespace as schema_name, c.relname as table_name, i.indexrelid::regclass as index_name, i.indisprimary as is_pk, i.indisunique as is_unique from pg_index i join pg_class c on c.oid = i.indrelid where c.relname = 'test' See examples in db<>fiddle.
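From an interactive psql session, the meta-commands are often quicker (these are psql commands, not SQL):

\d test     (describes the table, including its indexes)
\di         (lists indexes visible in the current search path)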
PostgreSQL
37,329,561
259
I would like to generate an entity-relationship diagram (ERD) from an existing PostgreSQL database. What is the recommended approach to do this? Are there any built-in tools to do it? Or third-party alternatives?
You can use DBeaver Community to do this. It's really easy... on the left just open one of your databases:

Click on Schemas -> Public -> Tables
On Tables, right click and look for "View Diagram"

It also allows you to print your ER diagram and export it as an image (png, gif, bmp formats) or as a file in GraphML format. You can check the official DBeaver documentation on ER diagrams here.
PostgreSQL
3,474,389
259
I've got two postgresql tables: table name column names ----------- ------------------------ login_log ip | etc. ip_location ip | location | hostname | etc. I want to get every IP address from login_log which doesn't have a row in ip_location. I tried this query but it throws a syntax error. SELECT login_log.ip FROM login_log WHERE NOT EXIST (SELECT ip_location.ip FROM ip_location WHERE login_log.ip = ip_location.ip) ERROR: syntax error at or near "SELECT" LINE 3: WHERE NOT EXIST (SELECT ip_location.ip` I'm also wondering if this query (with adjustments to make it work) is the best performing query for this purpose.
There are basically 4 techniques for this task, all of them standard SQL.

NOT EXISTS

Often fastest in Postgres.

SELECT ip
FROM   login_log l
WHERE  NOT EXISTS (
   SELECT  -- SELECT list mostly irrelevant; can just be empty in Postgres
   FROM   ip_location
   WHERE  ip = l.ip
   );

Also consider:

What is easier to read in EXISTS subqueries?

LEFT JOIN / IS NULL

Sometimes this is fastest. Often shortest. Often results in the same query plan as NOT EXISTS.

SELECT l.ip
FROM   login_log l
LEFT   JOIN ip_location i USING (ip)  -- short for: ON i.ip = l.ip
WHERE  i.ip IS NULL;

EXCEPT

Short. Not as easily integrated in more complex queries.

SELECT ip
FROM   login_log
EXCEPT ALL  -- "ALL" keeps duplicates and makes it faster
SELECT ip
FROM   ip_location;

Note that (per documentation):

duplicates are eliminated unless EXCEPT ALL is used.

Typically, you'll want the ALL keyword. If you don't care, still use it because it makes the query faster.

NOT IN

Only good without null values or if you know to handle null properly. I would not use it for this purpose. Also, performance can deteriorate with bigger tables.

SELECT ip
FROM   login_log
WHERE  ip NOT IN (
   SELECT DISTINCT ip  -- DISTINCT is optional
   FROM   ip_location
   );

NOT IN carries a "trap" for null values on either side:

Find records where join doesn't exist

Similar question on dba.SE targeted at MySQL:

Select rows where value of second column is not present in first column
PostgreSQL
19,363,481
258
I have PSQL running, and am trying to get a perl application connecting to the database. Is there a command to find the current port and host that the database is running on?
SELECT * FROM pg_settings WHERE name = 'port';
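Two other ways to get the same information from a live session:

SHOW port;
SELECT inet_server_addr() AS host, inet_server_port() AS port;

Note that inet_server_addr() returns NULL when you're connected over a Unix-domain socket rather than TCP/IP.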
PostgreSQL
5,598,517
258
I have a simple SQL query in PostgreSQL 8.3 that grabs a bunch of comments. I provide a sorted list of values to the IN construct in the WHERE clause: SELECT * FROM comments WHERE (comments.id IN (1,3,2,4)); This returns comments in an arbitrary order which in my happens to be ids like 1,2,3,4. I want the resulting rows sorted like the list in the IN construct: (1,3,2,4). How to achieve that?
You can do it quite easily with VALUES (), () (introduced in PostgreSQL 8.2). Syntax will be like this:

select c.*
from comments c
join (
  values (1,1), (3,2), (2,3), (4,4)
) as x (id, ordering) on c.id = x.id
order by x.ordering
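On PostgreSQL 9.5 or later (well past the 8.3 in the question), array_position offers a shorter route to the same ordering:

select c.*
from comments c
where c.id in (1,3,2,4)
order by array_position(array[1,3,2,4], c.id);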
PostgreSQL
866,465
257
I would like to have PostgreSQL return the result of a query as one JSON array. Given create table t (a int primary key, b text); insert into t values (1, 'value1'); insert into t values (2, 'value2'); insert into t values (3, 'value3'); I would like something similar to [{"a":1,"b":"value1"},{"a":2,"b":"value2"},{"a":3,"b":"value3"}] or {"a":[1,2,3], "b":["value1","value2","value3"]} (actually it would be more useful to know both). I have tried some things like select row_to_json(row) from (select * from t) row; select array_agg(row) from (select * from t) row; select array_to_string(array_agg(row), '') from (select * from t) row; And I feel I am close, but not there really. Should I be looking at other documentation except for 9.15. JSON Functions and Operators? By the way, I am not sure about my idea. Is this a usual design decision? My thinking is that I could, of course, take the result (for example) of the first of the above 3 queries and manipulate it slightly in the application before serving it to the client, but if PostgreSQL can create the final JSON object directly, it would be simpler, because I still have not included any dependency on any JSON library in my application.
TL;DR SELECT json_agg(t) FROM t for a JSON array of objects, and SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) ) FROM t for a JSON object of arrays. List of objects This section describes how to generate a JSON array of objects, with each row being converted to a single object. The result looks like this: [{"a":1,"b":"value1"},{"a":2,"b":"value2"},{"a":3,"b":"value3"}] 9.3 and up The json_agg function produces this result out of the box. It automatically figures out how to convert its input into JSON and aggregates it into an array. SELECT json_agg(t) FROM t There is no jsonb (introduced in 9.4) version of json_agg. You can either aggregate the rows into an array and then convert them: SELECT to_jsonb(array_agg(t)) FROM t or combine json_agg with a cast: SELECT json_agg(t)::jsonb FROM t My testing suggests that aggregating them into an array first is a little faster. I suspect that this is because the cast has to parse the entire JSON result. 9.2 9.2 does not have the json_agg or to_json functions, so you need to use the older array_to_json: SELECT array_to_json(array_agg(t)) FROM t You can optionally include a row_to_json call in the query: SELECT array_to_json(array_agg(row_to_json(t))) FROM t This converts each row to a JSON object, aggregates the JSON objects as an array, and then converts the array to a JSON array. I wasn't able to discern any significant performance difference between the two. Object of lists This section describes how to generate a JSON object, with each key being a column in the table and each value being an array of the values of the column. It's the result that looks like this: {"a":[1,2,3], "b":["value1","value2","value3"]} 9.5 and up We can leverage the json_build_object function: SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) ) FROM t You can also aggregate the columns, creating a single row, and then convert that into an object: SELECT to_json(r) FROM ( SELECT json_agg(t.a) AS a, json_agg(t.b) AS b FROM t ) r Note that aliasing the arrays is absolutely required to ensure that the object has the desired names. Which one is clearer is a matter of opinion. If using the json_build_object function, I highly recommend putting one key/value pair on a line to improve readability. You could also use array_agg in place of json_agg, but my testing indicates that json_agg is slightly faster. There is no jsonb version of the json_build_object function. You can aggregate into a single row and convert: SELECT to_jsonb(r) FROM ( SELECT array_agg(t.a) AS a, array_agg(t.b) AS b FROM t ) r Unlike the other queries for this kind of result, array_agg seems to be a little faster when using to_jsonb. I suspect this is due to overhead parsing and validating the JSON result of json_agg. Or you can use an explicit cast: SELECT json_build_object( 'a', json_agg(t.a), 'b', json_agg(t.b) )::jsonb FROM t The to_jsonb version allows you to avoid the cast and is faster, according to my testing; again, I suspect this is due to overhead of parsing and validating the result. 9.4 and 9.3 The json_build_object function was new to 9.5, so you have to aggregate and convert to an object in previous versions: SELECT to_json(r) FROM ( SELECT json_agg(t.a) AS a, json_agg(t.b) AS b FROM t ) r or SELECT to_jsonb(r) FROM ( SELECT array_agg(t.a) AS a, array_agg(t.b) AS b FROM t ) r depending on whether you want json or jsonb. (9.3 does not have jsonb.) 9.2 In 9.2, not even to_json exists. 
You must use row_to_json: SELECT row_to_json(r) FROM ( SELECT array_agg(t.a) AS a, array_agg(t.b) AS b FROM t ) r Documentation Find the documentation for the JSON functions in JSON functions. json_agg is on the aggregate functions page. Design If performance is important, ensure you benchmark your queries against your own schema and data, rather than trust my testing. Whether it's a good design or not really depends on your specific application. In terms of maintainability, I don't see any particular problem. It simplifies your app code and means there's less to maintain in that portion of the app. If PG can give you exactly the result you need out of the box, the only reason I can think of to not use it would be performance considerations. Don't reinvent the wheel and all. Nulls Aggregate functions typically give back NULL when they operate over zero rows. If this is a possibility, you might want to use COALESCE to avoid them. A couple of examples: SELECT COALESCE(json_agg(t), '[]'::json) FROM t Or SELECT to_jsonb(COALESCE(array_agg(t), ARRAY[]::t[])) FROM t Credit to Hannes Landeholm for pointing this out
PostgreSQL
24,006,291
256
In PostgreSQL 8 is it possible to add ON DELETE CASCADES to the both foreign keys in the following table without dropping the latter? # \d scores Table "public.scores" Column | Type | Modifiers ---------+-----------------------+----------- id | character varying(32) | gid | integer | money | integer | not null quit | boolean | last_ip | inet | Foreign-key constraints: "scores_gid_fkey" FOREIGN KEY (gid) REFERENCES games(gid) "scores_id_fkey" FOREIGN KEY (id) REFERENCES users(id) Both referenced tables are below - here: # \d games Table "public.games" Column | Type | Modifiers ----------+-----------------------------+---------------------------------------------------------- gid | integer | not null default nextval('games_gid_seq'::regclass) rounds | integer | not null finished | timestamp without time zone | default now() Indexes: "games_pkey" PRIMARY KEY, btree (gid) Referenced by: TABLE "scores" CONSTRAINT "scores_gid_fkey" FOREIGN KEY (gid) REFERENCES games(gid) And here: # \d users Table "public.users" Column | Type | Modifiers ------------+-----------------------------+--------------- id | character varying(32) | not null first_name | character varying(64) | last_name | character varying(64) | female | boolean | avatar | character varying(128) | city | character varying(64) | login | timestamp without time zone | default now() last_ip | inet | logout | timestamp without time zone | vip | timestamp without time zone | mail | character varying(254) | Indexes: "users_pkey" PRIMARY KEY, btree (id) Referenced by: TABLE "cards" CONSTRAINT "cards_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "catch" CONSTRAINT "catch_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "chat" CONSTRAINT "chat_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "game" CONSTRAINT "game_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "hand" CONSTRAINT "hand_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "luck" CONSTRAINT "luck_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "match" CONSTRAINT "match_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "misere" CONSTRAINT "misere_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "money" CONSTRAINT "money_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "pass" CONSTRAINT "pass_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "payment" CONSTRAINT "payment_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "rep" CONSTRAINT "rep_author_fkey" FOREIGN KEY (author) REFERENCES users(id) TABLE "rep" CONSTRAINT "rep_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "scores" CONSTRAINT "scores_id_fkey" FOREIGN KEY (id) REFERENCES users(id) TABLE "status" CONSTRAINT "status_id_fkey" FOREIGN KEY (id) REFERENCES users(id) And also I wonder if it makes sense to add 2 index'es to the former table? UPDATE: Thank you, and also I've got the advice at the mailing list, that I could manage it in 1 statement and thus without explicitly starting a transaction: ALTER TABLE public.scores DROP CONSTRAINT scores_gid_fkey, ADD CONSTRAINT scores_gid_fkey FOREIGN KEY (gid) REFERENCES games(gid) ON DELETE CASCADE;
I'm pretty sure you can't simply add on delete cascade to an existing foreign key constraint. You have to drop the constraint first, then add the correct version. In standard SQL, I believe the easiest way to do this is to start a transaction, drop the foreign key, add a foreign key with on delete cascade, and finally commit the transaction. Repeat for each foreign key you want to change.

But PostgreSQL has a non-standard extension that lets you use multiple constraint clauses in a single SQL statement. For example:

alter table public.scores
drop constraint scores_gid_fkey,
add constraint scores_gid_fkey
    foreign key (gid)
    references games(gid)
    on delete cascade;

If you don't know the name of the foreign key constraint you want to drop, you can either look it up in pgAdminIII (just click the table name and look at the DDL, or expand the hierarchy until you see "Constraints"), or you can query the information schema.

select *
from information_schema.key_column_usage
where position_in_unique_constraint is not null
PostgreSQL
10,356,484
256
Very simple example - one table, one index, one query:

CREATE TABLE book
(
  id bigserial NOT NULL,
  "year" integer,
  -- other columns...
);

CREATE INDEX book_year_idx ON book (year)

EXPLAIN SELECT * FROM book b WHERE b.year > 2009

gives me:

Seq Scan on book b  (cost=0.00..25663.80 rows=105425 width=622)
  Filter: (year > 2009)

Why does it NOT perform an index scan instead? What am I missing?
If the SELECT returns more than approximately 5-10% of all rows in the table, a sequential scan is much faster than an index scan. This is because an index scan requires several IO operations for each row (look up the row in the index, then retrieve the row from the heap). Whereas a sequential scan only requires a single IO for each row - or even less because a block (page) on the disk contains more than one row, so more than one row can be fetched with a single IO operation. Btw: this is true for other DBMS as well - some optimizations as "index only scans" taken aside (but for a SELECT * it's highly unlikely such a DBMS would go for an "index only scan")
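To see what the planner would do if forced, one hedged experiment is to disable sequential scans for the session and compare the estimated costs:

SET enable_seqscan = off;
EXPLAIN SELECT * FROM book b WHERE b.year > 2009;
RESET enable_seqscan;

If the index-scan cost shown is higher, the planner's preference for the sequential scan was justified.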
PostgreSQL
5,203,755
256
How do I find the maximum (or minimum) of two integers in Postgres/SQL? One of the integers is not a column value. I will give an example scenario: I would like to subtract an integer from a column (in all rows), but the result should not be less than zero. So, to begin with, I have: UPDATE my_table SET my_column = my_column - 10; But this can make some of the values negative. What I would like (in pseudo code) is: UPDATE my_table SET my_column = MAXIMUM(my_column - 10, 0);
Have a look at GREATEST and LEAST. UPDATE my_table SET my_column = GREATEST(my_column - 10, 0);
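The same trick works in the other direction with LEAST, e.g. to cap the column at an (assumed) upper bound of 100:

UPDATE my_table SET my_column = LEAST(my_column + 10, 100);

And GREATEST(LEAST(expr, hi), lo) clamps a value to a range.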
PostgreSQL
2,936,348
256
I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc, or even better the size for all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where.
Try the Database Object Size Functions. An example: SELECT pg_size_pretty(pg_total_relation_size('"<schema>"."<table>"')); For all tables, something along the lines of: SELECT table_schema || '.' || table_name AS table_full_name, pg_size_pretty(pg_total_relation_size('"' || table_schema || '"."' || table_name || '"')) AS size FROM information_schema.tables ORDER BY pg_total_relation_size('"' || table_schema || '"."' || table_name || '"') DESC; Edit: Here's the query submitted by @phord, for convenience: SELECT table_name, pg_size_pretty(table_size) AS table_size, pg_size_pretty(indexes_size) AS indexes_size, pg_size_pretty(total_size) AS total_size FROM ( SELECT table_name, pg_table_size(table_name) AS table_size, pg_indexes_size(table_name) AS indexes_size, pg_total_relation_size(table_name) AS total_size FROM ( SELECT ('"' || table_schema || '"."' || table_name || '"') AS table_name FROM information_schema.tables ) AS all_tables ORDER BY total_size DESC ) AS pretty_sizes; I've modified it slightly to use pg_table_size() to include metadata and make the sizes add up.
PostgreSQL
2,596,624
255